# Maestrale chat alpha ༄

By @efederici and @mferraretto
## Model description

- **Language Model**: Mistral-7b for the Italian language, continued pre-training for Italian on a curated large-scale high-quality corpus.
- **Fine-Tuning**: SFT performed on ~270k Italian conversations/instructions for one epoch.
This model uses the ChatML prompt format:

```
<|im_start|>system
Assisti sempre con cura, rispetto e verità. Rispondi con la massima utilità ma in modo sicuro. Evita contenuti dannosi, non etici, pregiudizievoli o negativi. Assicurati che le risposte promuovano equità e positività.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

(The Italian system prompt translates to: "Always assist with care, respect, and truth. Respond with the utmost utility yet securely. Avoid harmful, unethical, prejudiced, or negative content. Ensure replies promote fairness and positivity.")
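In practice the prompt should be rendered with `tokenizer.apply_chat_template` (as in the usage snippet below), but the ChatML string can also be assembled by hand. A minimal sketch, assuming nothing beyond the format shown above (the `build_chatml_prompt` helper name is illustrative, not part of any library):

```python
# Assemble a ChatML prompt by hand; `build_chatml_prompt` is an
# illustrative helper, not part of transformers.
def build_chatml_prompt(messages, add_generation_prompt=True):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    if add_generation_prompt:
        # Leave the assistant turn open so the model completes it
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "Sei un assistente utile."},
    {"role": "user", "content": "Ciao!"},
])
print(prompt)
```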
## Usage

```python
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    GenerationConfig,
    TextStreamer,
)

# Allow TF32 matmuls for faster inference on Ampere+ GPUs
torch.backends.cuda.matmul.allow_tf32 = True

tokenizer = AutoTokenizer.from_pretrained("mii-llm/maestrale-chat-v0.2-alpha")
model = AutoModelForCausalLM.from_pretrained(
    "mii-llm/maestrale-chat-v0.2-alpha",
    load_in_8bit=True,  # 8-bit quantization; requires bitsandbytes
    device_map="auto",
)

gen = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,
    top_k=50,
    top_p=0.95,
    max_new_tokens=500,
    pad_token_id=tokenizer.eos_token_id,
    # Stop generation at the ChatML end-of-turn token
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>"),
)

messages = [
    {"role": "system", "content": "Assisti sempre con cura, rispetto e verità. Rispondi con la massima utilità ma in modo sicuro. Evita contenuti dannosi, non etici, pregiudizievoli o negativi. Assicurati che le risposte promuovano equità e positività."},
    {"role": "user", "content": "{prompt}"},
]

with torch.no_grad(), torch.backends.cuda.sdp_kernel(
    enable_flash=True,
    enable_math=False,
    enable_mem_efficient=False,
):
    # Render the ChatML prompt and move inputs to the GPU
    temp = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(temp, return_tensors="pt").to("cuda")

    # Stream decoded tokens to stdout as they are generated
    streamer = TextStreamer(tokenizer, skip_prompt=True)
    _ = model.generate(
        **inputs,
        streamer=streamer,
        generation_config=gen,
    )
```
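Without a streamer, the decoded completion may still carry the trailing `<|im_end|>` stop token (e.g. when decoding without `skip_special_tokens=True`). A small post-processing step, sketched here on a plain string (`trim_at_stop` is an illustrative helper, not a transformers API), truncates at the first stop token:

```python
# Truncate a decoded completion at the first ChatML stop token.
# `trim_at_stop` is an illustrative helper, not a transformers API.
def trim_at_stop(text, stop="<|im_end|>"):
    return text.split(stop, 1)[0].strip()

raw = "Ciao! Come posso aiutarti oggi?<|im_end|>"
print(trim_at_stop(raw))  # -> Ciao! Come posso aiutarti oggi?
```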
## Intended uses & limitations

This is an alpha version and is not aligned. We are working on alignment data and evals.