SmolLCoder-360M-Instruct

Introduction

SmolLCoder-360M-Instruct is a small and fast 360M-parameter coding assistant.

Quickstart

The following snippet shows how to load the tokenizer and model, build a prompt with apply_chat_template, and generate a response.

from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # device for the input tensors (device_map="auto" handles model placement)

model = AutoModelForCausalLM.from_pretrained(
    "motexture/SmolLCoder-360M-Instruct",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("motexture/SmolLCoder-360M-Instruct")

prompt = "Write a C++ program that prints Hello World!"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=4096,
    do_sample=True,
    temperature=0.3
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
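
For a shorter path, the high-level text-generation pipeline can apply the chat template and decode the reply for you. This is a minimal alternative sketch, assuming a recent transformers release in which the pipeline accepts chat-style message lists; adjust max_new_tokens and the sampling settings as needed.

from transformers import pipeline

# Sketch: let the pipeline handle chat templating and decoding
# (assumes a recent transformers version with chat support in pipelines)
pipe = pipeline(
    "text-generation",
    model="motexture/SmolLCoder-360M-Instruct",
    torch_dtype="auto",
    device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a C++ program that prints Hello World!"}
]

out = pipe(messages, max_new_tokens=512, do_sample=True, temperature=0.3)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply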

License

Apache 2.0

Citation

@misc{allal2024SmolLM2,
      title={SmolLM2 - with great data, comes great performance}, 
      author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
      year={2024},
}