
Young-Children-Storyteller-Mistral-7B

This model is based on my dataset Children-Stories-Collection, which contains over 0.9 million stories written for young children (ages 6 to 12).

Drawing upon synthetic datasets meticulously designed with the developmental needs of young children in mind, Young-Children-Storyteller is more than just a tool: it's a companion on the journey of discovery and learning. With its boundless storytelling capabilities, this model serves as a gateway to a universe brimming with wonder, adventure, and endless possibilities.

Whether it's embarking on a whimsical adventure with colorful characters, unraveling mysteries in far-off lands, or simply sharing moments of joy and laughter, Young-Children-Storyteller fosters a love for language and storytelling from the earliest of ages. Through interactive engagement and age-appropriate content, it nurtures creativity, empathy, and critical thinking skills, laying a foundation for lifelong learning and exploration.

Rooted in a vast repository of over 0.9 million specially curated stories tailored for young minds, Young-Children-Storyteller is poised to revolutionize the way children engage with language and storytelling.

Kindly note that this is a qLoRA version, another exception to my usual full fine-tuning.

GGUF & Exllama

Standard Q_K & GGUF: Link

Exllama: TBA

Special thanks to MarsupialAI for quantizing the model.
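
If you're using one of the GGUF quants, a minimal sketch with the llama-cpp-python bindings is shown below. The filename is a hypothetical example; substitute whichever quant file you downloaded. The prompt uses the ChatML format described in the Example Prompt section.

```python
# Minimal sketch: run a GGUF quant locally with llama-cpp-python.
# The model_path below is hypothetical; point it at the quant file
# you actually downloaded from the GGUF link above.
from llama_cpp import Llama

llm = Llama(
    model_path="Young-Children-Storyteller-Mistral-7B.Q4_K_M.gguf",
    n_ctx=4096,
)

prompt = (
    "<|im_start|>system\n"
    "You are a Helpful Assistant who can write educational stories "
    "for Young Children.<|im_end|>\n"
    "<|im_start|>user\n"
    "Tell a bedtime story about a brave little boat.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Stop at the ChatML end-of-turn marker so the output doesn't run on.
out = llm(prompt, max_tokens=400, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```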

Training

The entire dataset was trained on 4 x A100 80GB GPUs. Training for 3 epochs took more than 30 hours. The Axolotl codebase was used for training. The model was fine-tuned from Mistral-7B-v0.1.
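
For readers curious what a comparable setup looks like outside Axolotl, below is a rough sketch of an equivalent qLoRA configuration using the peft and bitsandbytes libraries. The hyperparameters (rank, alpha, target modules) are illustrative placeholders, not the values used to train this model.

```python
# Rough illustration of a qLoRA setup with peft + bitsandbytes.
# The actual training used Axolotl; hyperparameters here are placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "mistralai/Mistral-7B-v0.1"

# 4-bit NF4 quantization of the frozen base model (the "q" in qLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Small trainable LoRA adapters on the attention projections.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```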

Example Prompt:

This model uses the ChatML prompt format.

<|im_start|>system
You are a Helpful Assistant who can write educational stories for Young Children.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

You can modify the above prompt as per your requirements.
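
As a minimal sketch, you can run the model with the transformers library and the ChatML template above; the user question here is just an illustrative example.

```python
# Minimal inference sketch with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/Young-Children-Storyteller-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the ChatML prompt exactly as documented above.
prompt = (
    "<|im_start|>system\n"
    "You are a Helpful Assistant who can write educational stories "
    "for Young Children.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a short story about a curious fox who learns to share.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=512, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```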

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI and model development.

Thank you for your love & support.

Example Output

Example 1: (screenshot of model output)

Example 2: (screenshot of model output)

Example 3: (screenshot of model output)

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                             | Value |
|------------------------------------|-------|
| Avg.                               | 71.08 |
| AI2 Reasoning Challenge (25-shot)  | 68.69 |
| HellaSwag (10-shot)                | 84.67 |
| MMLU (5-shot)                      | 64.11 |
| TruthfulQA (0-shot)                | 62.62 |
| Winogrande (5-shot)                | 81.22 |
| GSM8k (5-shot)                     | 65.20 |
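
These scores come from the Open LLM Leaderboard. As a rough sketch, a comparable (though not byte-identical) run for one task with EleutherAI's lm-evaluation-harness Python API might look like the following; the exact harness version and leaderboard configuration are assumptions.

```python
# Sketch: evaluate one leaderboard-style task locally with
# lm-evaluation-harness (pip install lm-eval). Scores may differ
# slightly from the leaderboard's pinned setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ajibawa-2023/Young-Children-Storyteller-Mistral-7B,dtype=bfloat16",
    tasks=["arc_challenge"],  # AI2 Reasoning Challenge
    num_fewshot=25,           # 25-shot, matching the table above
)
print(results["results"]["arc_challenge"])
```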