
# Minueza-32M-UltraChat: A chat model with 32 million parameters

## Recommended Prompt Format

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```
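To sanity-check the template, here is a minimal sketch (assuming only the transformers library) that prints the rendered prompt for a short conversation; the output should match the format above, ending with the assistant header.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Felladrin/Minueza-32M-UltraChat")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# Render the conversation as a plain string using the model's chat template.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```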

## Recommended Inference Parameters

```yaml
do_sample: true
temperature: 0.65
top_p: 0.55
top_k: 35
repetition_penalty: 1.176
```

## Usage Example

```python
from transformers import pipeline

# Load the model and its tokenizer through a text-generation pipeline.
generate = pipeline("text-generation", "Felladrin/Minueza-32M-UltraChat")

messages = [
    {
        "role": "system",
        "content": "You are a highly knowledgeable and friendly assistant. Your goal is to understand and respond to user inquiries with clarity. Your interactions are always respectful, helpful, and focused on delivering the most accurate information to the user.",
    },
    {
        "role": "user",
        "content": "Hey! Got a question for you!",
    },
    {
        "role": "assistant",
        "content": "Sure! What is it?",
    },
    {
        "role": "user",
        "content": "What are some potential applications for quantum computing?",
    },
]

# Render the conversation with the model's chat template (the ChatML-style format shown above).
prompt = generate.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate a completion using the recommended inference parameters.
output = generate(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.65,
    top_k=35,
    top_p=0.55,
    repetition_penalty=1.176,
)

print(output[0]["generated_text"])
```

## How it was trained

This model was trained with TRL's SFTTrainer using the following settings:

| Hyperparameter         | Value                                          |
| ---------------------- | ---------------------------------------------- |
| Learning rate          | 2e-5                                           |
| Total train batch size | 16                                             |
| Max. sequence length   | 2048                                           |
| Weight decay           | 0                                              |
| Warmup ratio           | 0.1                                            |
| Optimizer              | Adam with betas=(0.9, 0.999) and epsilon=1e-08 |
| Scheduler              | cosine                                         |
| Seed                   | 42                                             |
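
For orientation, here is a minimal sketch of how these settings could map onto TRL's SFTTrainer. The base checkpoint, the dataset, and the TRL-era API below are assumptions, not details confirmed by this card.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "Felladrin/Minueza-32M-Base"  # assumed base checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumed dataset; render each conversation into a plain-text "text" column.
# This presumes the tokenizer ships the ChatML template shown above
# (set tokenizer.chat_template first if it does not).
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
dataset = dataset.map(
    lambda row: {"text": tokenizer.apply_chat_template(row["messages"], tokenize=False)}
)

training_args = TrainingArguments(
    output_dir="minueza-32m-ultrachat",
    learning_rate=2e-5,              # from the table above
    per_device_train_batch_size=16,  # total train batch size 16, assuming one device
    weight_decay=0.0,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the transformers default.
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,             # from the table above
)
trainer.train()
```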

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric                            | Value |
| --------------------------------- | ----- |
| Avg.                              | 28.97 |
| AI2 Reasoning Challenge (25-Shot) | 21.08 |
| HellaSwag (10-Shot)               | 26.95 |
| MMLU (5-Shot)                     | 26.08 |
| TruthfulQA (0-shot)               | 47.70 |
| Winogrande (5-shot)               | 51.78 |
| GSM8k (5-shot)                    | 0.23  |
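
To approximate one of these scores locally, a hedged sketch using EleutherAI's lm-evaluation-harness Python API follows; the leaderboard runs its own harness configuration, so local numbers may differ, and each benchmark above uses its own shot count, so evaluate one task at a time.

```python
import lm_eval  # assumes lm-evaluation-harness v0.4+ is installed

# Evaluate the model on one leaderboard task with its matching shot count.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Felladrin/Minueza-32M-UltraChat",
    tasks=["arc_challenge"],  # 25-shot on the leaderboard
    num_fewshot=25,
)
print(results["results"])
```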