# llama-3-neural-chat-v1-8b

## Model Details

### Model Description
I fine-tuned Llama 3 8B using an approach similar to Intel's neural-chat language models, with slightly modified data sources to make it stronger in coding, math, and writing. Training used both SFT and DPO.
- Developed by: Locutusque
- Model type: Built with Meta Llama 3
- Language(s) (NLP): Many?
- License: [Llama 3 license](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
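The card doesn't include the training script, but the two-stage SFT-then-DPO recipe described above is the kind of workflow Hugging Face TRL supports. The sketch below is a minimal, hypothetical illustration, not the author's actual code: the hyperparameters, dataset schema handling, and exact trainer argument names (which vary across TRL versions) are all assumptions.

```python
# Hypothetical two-stage SFT -> DPO training sketch with Hugging Face TRL.
# Not the author's script: hyperparameters, schema handling, and exact
# TRL argument names (which vary by version) are all assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer, SFTConfig, SFTTrainer

base = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Stage 1: supervised fine-tuning on one of the listed instruction sets.
def to_text(example):
    # SlimOrca stores ShareGPT-style turns under "conversations" (assumed
    # schema); flatten each conversation into a single training string.
    return "\n".join(f"{t['from']}: {t['value']}" for t in example["conversations"])

sft_trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="sft-out"),
    train_dataset=load_dataset("Open-Orca/SlimOrca-Dedup", split="train"),
    formatting_func=to_text,
    processing_class=tokenizer,  # older TRL versions call this `tokenizer`
)
sft_trainer.train()

# Stage 2: direct preference optimization on chosen/rejected pairs.
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,
    ref_model=None,  # TRL clones a frozen reference model when None
    args=DPOConfig(output_dir="dpo-out", beta=0.1),
    train_dataset=load_dataset("mlabonne/orpo-dpo-mix-40k", split="train"),
    processing_class=tokenizer,
)
dpo_trainer.train()
```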
## Quants

- EXL2 by @bartowski
- GGUF by @bartowski
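For local runtimes such as llama.cpp, the GGUF quants can be pulled with `huggingface_hub`. The repo id and filename below are assumptions; check @bartowski's profile for the actual names.

```python
# Hypothetical example of fetching one of the community GGUF quants;
# the exact repo id and filename are assumptions.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/llama-3-neural-chat-v1-8b-GGUF",  # assumed repo id
    filename="llama-3-neural-chat-v1-8b-Q4_K_M.gguf",    # assumed filename
)
print(path)  # local cache path, usable by llama.cpp and similar runtimes
```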
## Uses

This model performs particularly well on writing and coding tasks.
## Training Data
- Open-Orca/SlimOrca-Dedup
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- mlabonne/orpo-dpo-mix-40k
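Each of these sources can be inspected with the `datasets` library, as in the sketch below. The split names are assumed, and any mixing or filtering the author applied is not shown.

```python
# Illustrative only: pulling the listed data sources with `datasets`.
# Split names are assumptions; the author's mixing/filtering is not shown.
from datasets import load_dataset

sources = [
    "Open-Orca/SlimOrca-Dedup",
    "jondurbin/airoboros-3.2",
    "microsoft/orca-math-word-problems-200k",
    "m-a-p/Code-Feedback",
    "MaziyarPanahi/WizardLM_evol_instruct_V2_196k",
    "mlabonne/orpo-dpo-mix-40k",  # preference pairs for the DPO stage
]
for name in sources:
    ds = load_dataset(name, split="train")
    print(f"{name}: {len(ds)} rows")
```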
## Direct Use
Conversational AI.
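A minimal conversational-use sketch with `transformers` follows. It assumes the uploaded tokenizer ships a Llama 3-style chat template (if not, format the prompt manually), and the generation settings are illustrative rather than the author's recommendations.

```python
# Minimal chat-inference sketch; sampling parameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/llama-3-neural-chat-v1-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
# Assumes the tokenizer carries a chat template; otherwise build the prompt by hand.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```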
## Evaluations
| Tasks | Version | Filter | n-shot | Metric | Value | | Stderr |
|---|---|---|---|---|---|---|---|
| truthfulqa_mc2 | 2 | none | 0 | acc | 0.5627 | ± | 0.0154 |
| gsm8k | 3 | strict-match | 5 | exact_match | 0.5481 | ± | 0.0137 |
| | | flexible-extract | 5 | exact_match | 0.5557 | ± | 0.0137 |
| agieval_nous | N/A | none | 0 | acc | 0.3763 | ± | 0.0093 |
| | | none | 0 | acc_norm | 0.3665 | ± | 0.0093 |
| - agieval_aqua_rat | 1 | none | 0 | acc | 0.2087 | ± | 0.0255 |
| | | none | 0 | acc_norm | 0.2047 | ± | 0.0254 |
| - agieval_logiqa_en | 1 | none | 0 | acc | 0.3456 | ± | 0.0187 |
| | | none | 0 | acc_norm | 0.3594 | ± | 0.0188 |
| - agieval_lsat_ar | 1 | none | 0 | acc | 0.1826 | ± | 0.0255 |
| | | none | 0 | acc_norm | 0.1783 | ± | 0.0253 |
| - agieval_lsat_lr | 1 | none | 0 | acc | 0.3549 | ± | 0.0212 |
| | | none | 0 | acc_norm | 0.3451 | ± | 0.0211 |
| - agieval_lsat_rc | 1 | none | 0 | acc | 0.5242 | ± | 0.0305 |
| | | none | 0 | acc_norm | 0.5130 | ± | 0.0305 |
| - agieval_sat_en | 1 | none | 0 | acc | 0.6650 | ± | 0.0330 |
| | | none | 0 | acc_norm | 0.6505 | ± | 0.0333 |
| - agieval_sat_en_without_passage | 1 | none | 0 | acc | 0.4175 | ± | 0.0344 |
| | | none | 0 | acc_norm | 0.3738 | ± | 0.0338 |
| - agieval_sat_math | 1 | none | 0 | acc | 0.4227 | ± | 0.0334 |
| | | none | 0 | acc_norm | 0.3682 | ± | 0.0326 |
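The table format matches lm-evaluation-harness output. Below is a hedged sketch of re-running part of this suite through the harness's Python API; the task defaults supply the shot counts shown above, and exact harness version behavior is an assumption.

```python
# Sketch of reproducing a subset of the table with lm-evaluation-harness.
# Task defaults (0-shot truthfulqa_mc2/agieval, 5-shot gsm8k) match the table.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Locutusque/llama-3-neural-chat-v1-8b,dtype=bfloat16",
    tasks=["truthfulqa_mc2", "gsm8k", "agieval_nous"],
)
for task, metrics in results["results"].items():
    print(task, metrics)
```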
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 66.50 |
| AI2 Reasoning Challenge (25-Shot) | 60.84 |
| HellaSwag (10-Shot) | 84.13 |
| MMLU (5-Shot) | 64.69 |
| TruthfulQA (0-shot) | 56.34 |
| Winogrande (5-shot) | 78.22 |
| GSM8k (5-shot) | 54.81 |