
Lyra4-Gutenberg-12B

Sao10K/MN-12B-Lyra-v4 fine-tuned on jondurbin/gutenberg-dpo-v0.1.

Method

Fine-tuned with ORPO for 3 epochs on an RTX 3090 + RTX 4060 Ti.

Method adapted from the guide Fine-tune Llama 3 with ORPO.
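The training setup above can be sketched with TRL's ORPOTrainer. The model and dataset names come from this card; the hyperparameters other than the epoch count are illustrative, not taken from the actual run.

```python
# Sketch of the ORPO fine-tune described above, assuming TRL's ORPOTrainer API.
# Only the base model, dataset, and 3-epoch count come from this card.

def has_orpo_columns(example: dict) -> bool:
    """ORPOTrainer expects prompt/chosen/rejected columns, which
    jondurbin/gutenberg-dpo-v0.1 provides."""
    return all(k in example for k in ("prompt", "chosen", "rejected"))

def train():
    # Heavy imports live inside the function so the sketch is readable
    # without trl/datasets installed.
    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from trl import ORPOConfig, ORPOTrainer

    model_name = "Sao10K/MN-12B-Lyra-v4"
    model = AutoModelForCausalLM.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")
    assert has_orpo_columns(dataset[0])

    args = ORPOConfig(
        num_train_epochs=3,              # matches the 3 epochs on this card
        per_device_train_batch_size=1,   # illustrative, not from the card
        gradient_accumulation_steps=8,   # illustrative
        output_dir="Lyra4-Gutenberg-12B",
    )
    trainer = ORPOTrainer(
        model=model,
        args=args,
        train_dataset=dataset,
        tokenizer=tokenizer,
    )
    trainer.train()
```

A 12B model at this precision will not fit in 24 GB for full fine-tuning on its own, which is consistent with the card's two-GPU (3090 + 4060 Ti) setup.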

Open LLM Leaderboard Evaluation Results

Detailed results are available on the Open LLM Leaderboard.

| Metric              | Value |
|---------------------|-------|
| Avg.                | 19.63 |
| IFEval (0-shot)     | 22.12 |
| BBH (3-shot)        | 34.24 |
| MATH Lvl 5 (4-shot) | 11.71 |
| GPQA (0-shot)       |  9.17 |
| MuSR (0-shot)       | 11.97 |
| MMLU-PRO (5-shot)   | 28.57 |
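Since the serverless Inference API is disabled for this repository, the model can be run locally with transformers. This is a minimal sketch: the `chatml_prompt` helper is hypothetical, and the ChatML format it renders is an assumption inherited from the Lyra v4 base model; verify against the tokenizer's chat template before relying on it.

```python
# Local-inference sketch for nbeerbower/Lyra4-Gutenberg-12B.
# ChatML formatting is an assumption from the base model, not confirmed here.

def chatml_prompt(system: str, user: str) -> str:
    """Hypothetical helper: render one ChatML system/user turn pair."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Imports kept local so the prompt helper is usable without torch installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "nbeerbower/Lyra4-Gutenberg-12B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # card lists BF16 tensors; ~24 GB+ VRAM
        device_map="auto",
    )

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.8
    )
    # Strip the prompt tokens, return only the completion.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Usage: `generate(chatml_prompt("You are a novelist.", "Write an opening paragraph."))`. Quantized variants of this model exist for smaller GPUs.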
Model size: 12.2B params, BF16 safetensors.