Text Generation
Transformers
NeMo
Safetensors
mistral
text-generation-inference
Inference Endpoints

Mistral-NeMo-Minitron-8B-Chat

#5
by rasyosef - opened

https://huggingface.co/rasyosef/Mistral-NeMo-Minitron-8B-Chat

I have created an instruction-tuned version of nvidia/Mistral-NeMo-Minitron-8B-Base that underwent supervised fine-tuning on 32k instruction-response pairs from the teknium/OpenHermes-2.5 dataset.

How to use

Chat Format

Given the nature of the training data, Mistral-NeMo-Minitron-8B-Chat is best suited to prompts in the following chat format.
You can provide the prompt as a question using this generic template:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Question?<|im_end|>
<|im_start|>assistant

For example:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
How to explain Internet for a medieval knight?<|im_end|>
<|im_start|>assistant

where the model generates the text after <|im_start|>assistant.
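
If the repository's tokenizer ships a ChatML-style chat template matching the format above (an assumption worth checking), you can build this prompt string programmatically instead of writing it by hand; a minimal sketch:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rasyosef/Mistral-NeMo-Minitron-8B-Chat")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)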

Sample inference code

This code snippet shows how to quickly get started running the model on a GPU:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

# Load the model in bfloat16 and place it on the available GPU(s)
model_id = "rasyosef/Mistral-NeMo-Minitron-8B-Chat"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Conversation history in the chat format described above
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Greedy decoding; return only the newly generated assistant turn
generation_args = {
    "max_new_tokens": 256,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]["generated_text"])

Note: If you want to use flash attention, call AutoModelForCausalLM.from_pretrained() with attn_implementation="flash_attention_2".
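
For example, adapting the from_pretrained() call in the snippet above (this assumes the flash-attn package is installed and the GPU supports FlashAttention-2):

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # needs the flash-attn package
)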

Please find GGUF quants for this model at QuantFactory/Mistral-NeMo-Minitron-8B-Chat-GGUF
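
If you prefer running the GGUF quants, here is a rough sketch using llama-cpp-python; the quant filename pattern below is a guess, so check the QuantFactory repo for the files that are actually available:

from llama_cpp import Llama

# Downloads a quant from the Hub and loads it; the filename glob is illustrative
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Mistral-NeMo-Minitron-8B-Chat-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical pattern; pick an available quant
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How to explain Internet for a medieval knight?"}]
)
print(out["choices"][0]["message"]["content"])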

NVIDIA org

This is quite cool, thank you @aashish1904 and @rasyosef. Do you know how this compares to the same experiments with Llama-3.1-8B or similar models?

Hi @pmolchanov, I'm going to fine-tune Llama-3.1-8B on the same 32k-sample instruction dataset and evaluate both models on the IFEval benchmark using lm-evaluation-harness.

I'll let you know the results soon.
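
For anyone who wants to reproduce the comparison, a rough sketch of running IFEval through lm-evaluation-harness's Python API (v0.4+; argument names can differ across versions, and the batch size here is just a placeholder):

import lm_eval

# Evaluate the chat model on IFEval; swap in the Llama-3.1-8B finetune to compare
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=rasyosef/Mistral-NeMo-Minitron-8B-Chat,dtype=bfloat16",
    tasks=["ifeval"],
    batch_size=8,
)
print(results["results"]["ifeval"])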

@rasyosef Can you please share some insights about the fine-tuning process itself?
Specifically, your multi-GPU setup, hardware requirements, and whether you used a quantized version of the model or loaded it in bf16 directly for fine-tuning.

Hi @Kartik305, I used a single A100 40GB GPU and parameter-efficient fine-tuning to train a LoRA adapter on top of the model, with the base weights loaded in bf16.

It was trained for 2 epochs on an SFT dataset of 32k samples (max length of 512 tokens) and took 3.5 hours to complete.
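
For reference, a minimal sketch of that setup with peft; the LoRA hyperparameters below (rank, alpha, dropout, target modules) are illustrative guesses rather than the values actually used, and the supervised training loop itself (e.g. trl's SFTTrainer over the ChatML-formatted OpenHermes subset) is omitted:

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model in bf16 (LoRA training on top of it fits on a single A100 40GB)
base_id = "nvidia/Mistral-NeMo-Minitron-8B-Base"
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Illustrative LoRA config; only the adapter weights will be trained
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Training: 2 epochs over the 32k-sample SFT set, sequences capped at 512 tokens
# (training loop omitted here)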
