This model is a fine-tuned version of SmolLM, trained on a Q&A dataset with a preference for the letter "a". It was first trained with SFT and then with DPO.
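As a rough illustration of that two-stage recipe, the sketch below shows SFT followed by DPO with TRL (assuming a recent TRL version). The dataset files, output directories and hyperparameters are placeholders, not the actual configuration used for this checkpoint.

```python
# Hedged sketch of the SFT-then-DPO recipe described above, using TRL.
# Dataset files, output dirs and hyperparameters are placeholders, not the real training setup.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer, DPOConfig, DPOTrainer

base = "HuggingFaceTB/SmolLM-135M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Stage 1: supervised fine-tuning on the Q&A conversations
# (SFTTrainer expects chat-style "messages" or a plain "text" column).
sft_data = load_dataset("json", data_files="qa_sft.jsonl", split="train")  # placeholder dataset
sft_trainer = SFTTrainer(
    model=model,
    train_dataset=sft_data,
    processing_class=tokenizer,
    args=SFTConfig(output_dir="smollm-qa-sft", num_train_epochs=1),
)
sft_trainer.train()

# Stage 2: DPO on preference pairs whose "chosen" answers favour the letter "a"
# (DPOTrainer expects "prompt", "chosen" and "rejected" columns).
dpo_data = load_dataset("json", data_files="qa_preferences.jsonl", split="train")  # placeholder dataset
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,
    processing_class=tokenizer,
    train_dataset=dpo_data,
    args=DPOConfig(output_dir="smollm-qa-sft-dpo", num_train_epochs=1),
)
dpo_trainer.train()
```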
## Model Summary

SmolLM is a series of small language models available in three sizes: 135M, 360M, and 1.7B parameters.

These models are trained on SmolLM-Corpus, a curated collection of high-quality educational and synthetic data designed for training LLMs. For further details, we refer to our blog post.
To build SmolLM-Instruct, we finetune the base models on publicly available datasets.
## Changelog

| Release | Description |
|---|---|
| v0.1 | Initial release of SmolLM-Instruct. We finetune on the permissive subset of the WebInstructSub dataset, combined with StarCoder2-Self-OSS-Instruct. Then, we perform DPO (Direct Preference Optimization) for one epoch on HelpSteer for the 135M and 1.7B models, and argilla/dpo-mix-7k for the 360M model. |
| v0.2 | We changed the finetuning mix to datasets more suitable for smol models. We train on a new dataset of 2k simple everyday conversations generated by llama3.1-70B (everyday-conversations-llama3.1-2k), Magpie-Pro-300K-Filtered, StarCoder2-Self-OSS-Instruct, and a small subset of OpenHermes-2.5. |

v0.2 models are better at staying on topic and responding appropriately to standard prompts, such as greetings and questions about their role as AI assistants. SmolLM-360M-Instruct (v0.2) has a 63.3% win rate over SmolLM-360M-Instruct (v0.1) on AlpacaEval. You can find the details here.
You can load v0.1 models by specifying `revision="v0.1"` in the transformers code:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct", revision="v0.1")
```
## Usage

### Local Applications ⚡

For local applications, you can find optimized implementations of the model in MLC, GGUF and Transformers.js formats, in addition to fast in-browser demos in this collection: https://huggingface.co/collections/HuggingFaceTB/local-smollms-66c0f3b2a15b4eed7fb198d0
We noticed that 4-bit quantization degrades the quality of the 135M and 360M models, so we use q0f16 for the MLC and ONNX/Transformers.js checkpoints for the WebGPU demos. We also suggest using temperature 0.2 and top-p 0.9.
### Transformers

```bash
pip install transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM-135M-Instruct"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

messages = [{"role": "user", "content": "What is the capital of France."}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### Chat in TRL

You can also use the TRL CLI to chat with the model from the terminal:

```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM-135M-Instruct --device cpu
```
## Limitations

The generated content may not always be factually accurate, logically consistent, or free from biases present in the training data; we invite users to leverage these models as assistive tools rather than definitive sources of information. We find that they can handle general knowledge questions, creative writing and basic Python programming, but they are English-only and may have difficulty with arithmetic, editing tasks and complex reasoning. For more details about the models' capabilities, please refer to our blog post.
## Training parameters

We train the models using the alignment-handbook with the datasets mentioned in the changelog, using these parameters for v0.2 (most of them are from the Zephyr Gemma recipe; see the sketch after this list):

- 1 epoch
- lr 1e-3
- cosine schedule
- warmup ratio 0.1
- global batch size 262k tokens

You can find the training recipe here: https://github.com/huggingface/alignment-handbook/tree/smollm/recipes/smollm
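For orientation only, here is a hedged mapping of those settings onto transformers `TrainingArguments`. The output directory and the per-device batch size / gradient accumulation split (which, together with sequence length, make up the ~262k-token global batch) are assumptions; the authoritative configuration is the alignment-handbook recipe linked above.

```python
# Hedged mapping of the v0.2 hyperparameters above onto transformers TrainingArguments.
# output_dir and the batch-size split are assumptions; the real recipe lives in the
# alignment-handbook config linked above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smollm-instruct-v0.2",   # placeholder
    num_train_epochs=1,                  # 1 epoch
    learning_rate=1e-3,                  # lr 1e-3
    lr_scheduler_type="cosine",          # cosine schedule
    warmup_ratio=0.1,                    # warmup ratio 0.1
    per_device_train_batch_size=4,       # assumption: per-device batch * grad accumulation
    gradient_accumulation_steps=8,       # * sequence length ≈ 262k tokens per global batch
)
```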
Base model: HuggingFaceTB/SmolLM-135M