
Chocolatine-14B-Instruct-DPO-v1.1

A DPO fine-tune of microsoft/Phi-3-medium-4k-instruct (14B params),
trained on the jpacifico/french-orca-dpo-pairs-revised RLHF dataset.
Training in French also improves the model in English, surpassing the performance of its base model.
Context window: 4k tokens
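
For reference, a DPO fine-tuning setup of this kind can be sketched with TRL's DPOTrainer. The snippet below is illustrative only, not the exact training script: it assumes the dataset can be mapped to the standard prompt/chosen/rejected columns, and the hyperparameters (beta, batch size, output_dir) are placeholders.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "microsoft/Phi-3-medium-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Preference dataset used for this fine-tune
# (column names may need remapping to prompt/chosen/rejected)
dataset = load_dataset("jpacifico/french-orca-dpo-pairs-revised", split="train")

# Placeholder hyperparameters -- the actual training configuration may differ
config = DPOConfig(
    output_dir="chocolatine-14b-dpo",
    beta=0.1,                       # DPO preference strength (assumed value)
    max_length=4096,                # matches the 4k context window
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,            # newer TRL versions use processing_class instead
)
trainer.train()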

Benchmarks

The first Chocolatine-14B version is already the best-performing model under 50B parameters in terms of MMLU-PRO on the OpenLLM Leaderboard (August 2024).
This new version 1.1 has also been submitted; results are coming soon.

MT-Bench

Chocolatine-14B-Instruct-DPO-v1.1 outperforms Phi-3-medium-4k-instruct and its previous version,
and comes close to GPT-4o-mini (the two are tied on the first turn).

########## First turn ##########
                                                     score
model                                         turn        
Chocolatine-14B-Instruct-DPO-v1.1             1     9.1375
gpt-4o-mini                                   1     9.1375
Chocolatine-14B-Instruct-4k-DPO               1     8.7250
Phi-3-medium-4k-instruct                      1     8.7125
Chocolatine-3B-Instruct-DPO-Revised           1     8.4625
Phi-3-mini-4k-instruct                        1     8.4125
gpt-3.5-turbo                                 1     8.2750

########## Second turn ##########
                                                      score
model                                         turn         
gpt-4o-mini                                   2     9.05000
gpt-3.5-turbo                                 2     8.20625
Chocolatine-14B-Instruct-DPO-v1.1             2     8.18750
Chocolatine-14B-Instruct-4k-DPO               2     8.15000
Phi-3-medium-4k-instruct                      2     7.92500
Chocolatine-3B-Instruct-DPO-Revised           2     7.61250
Phi-3-mini-4k-instruct                        2     7.38750

########## Average ##########
                                                  score
model                                                  
gpt-4o-mini                                    9.093750
Chocolatine-14B-Instruct-DPO-v1.1              8.662500
Chocolatine-14B-Instruct-4k-DPO                8.437500
Phi-3-medium-4k-instruct                       8.318750
gpt-3.5-turbo                                  8.240625
Chocolatine-3B-Instruct-DPO-Revised            8.037500
Phi-3-mini-4k-instruct                         7.900000

Usage

You can run this model using my Colab notebook.

You can also run Chocolatine using the following code:

import transformers
from transformers import AutoTokenizer

# Model repository ID
new_model = "jpacifico/Chocolatine-14B-Instruct-DPO-v1.1"

# Format prompt with the model's chat template
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(new_model)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=new_model,
    tokenizer=tokenizer
)

# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,   # maximum total length (prompt + completion)
)
print(sequences[0]['generated_text'])
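
If you prefer not to use the pipeline API, the model can also be loaded directly with AutoModelForCausalLM. The following is a minimal sketch; the dtype and device_map settings are assumptions (they require a suitable GPU and the accelerate package) and should be adjusted to your hardware.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jpacifico/Chocolatine-14B-Instruct-DPO-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumed dtype; pick one supported by your hardware
    device_map="auto",          # requires the accelerate package
)

messages = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "Qu'est-ce qu'un grand modèle de langage ?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))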

Limitations

The Chocolatine model is a quick demonstration that a base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanism.

  • Developed by: Jonathan Pacifico, 2024
  • Model type: LLM
  • Language(s) (NLP): French, English
  • License: MIT