Qwen2 Italian Fine-Tuning
A collection of Qwen2 models fine-tuned to improve performance in the Italian language
This model has been fine-tuned with Unsloth's continued pretraining mode on the gsarti/clean_mc4_it dataset (100k rows only) to improve its Italian-language capabilities. A second fine-tuning stage was then performed on the instruction dataset FreedomIntelligence/alpaca-gpt4-italian.
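To illustrate what the second stage consumes, here is a minimal sketch of turning an alpaca-style record (with `instruction`/`input`/`output` fields, as in FreedomIntelligence/alpaca-gpt4-italian) into a ChatML-formatted training string, the conversation format Qwen2 chat models use. This is an illustrative assumption about the preprocessing, not the exact Unsloth pipeline.

```python
def alpaca_to_chatml(record: dict) -> str:
    """Convert an alpaca-style record into a ChatML training string.

    Illustrative sketch only: the actual fine-tuning pipeline may apply
    the tokenizer's chat template instead of formatting strings by hand.
    """
    user = record["instruction"]
    if record.get("input"):
        # Append the optional input field below the instruction.
        user += "\n\n" + record["input"]
    return (
        "<|im_start|>user\n" + user + "<|im_end|>\n"
        "<|im_start|>assistant\n" + record["output"] + "<|im_end|>\n"
    )

# Hypothetical record in the style of the Italian alpaca-gpt4 dataset.
example = {
    "instruction": "Traduci in inglese:",
    "input": "Buongiorno a tutti.",
    "output": "Good morning, everyone.",
}
print(alpaca_to_chatml(example))
```

In practice, `tokenizer.apply_chat_template` on the model's tokenizer produces this format automatically; the hand-rolled version above just makes the structure explicit.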
For a detailed comparison of model performance, check out the Leaderboard for Italian Language Models.
Here's a breakdown of the performance metrics:
| Metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|---|---|---|---|---|
| Accuracy Normalized | 48.05 | 32.68 | 46.89 | 42.57 |
This Qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.