
Updated with a fixed tokenizer config.

# Badger/δ Llama 3 Instruct 32k

I haven't been releasing my base merges so far, but this one seems worthy.

Badger is a recursive, maximally disjoint, pairwise normalized Fourier interpolation of the following models (a sketch of a single pairwise step follows the list):

```python
models = [
    'Einstein-v6.1-Llama3-8B',
    'L3-TheSpice-8b-v0.8.3',
    'dolphin-2.9-llama3-8b',
    'Configurable-Hermes-2-Pro-Llama-3-8B',
    'MAmmoTH2-8B-Plus',
    'Pantheon-RP-1.0-8b-Llama-3',
    'Tiamat-8b-1.2-Llama-3-DPO',
    'Buzz-8b-Large-v0.5',
    'Kei_Llama3_8B',
    'Llama-3-Lumimaid-8B-v0.1',
    'llama-3-cat-8b-instruct-pytorch',
    'Llama-3SOME-8B-v1',
    'Roleplay-Llama-3-8B',
    'Llama-3-LewdPlay-8B-evo',
    'opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5',
    'meta-llama-3-8b-instruct-hf-ortho-baukit-5fail-3000total-bf16',
    'Poppy_Porpoise-0.72-L3-8B',
    'Llama-3-8B-Instruct-norefusal',
    'Meta-Llama-3-8B-Instruct-DPO',
    'badger',
    'Llama-3-Refueled',
    'Llama-3-8B-Instruct-DPO-v0.4',
    'Llama-3-8B-Instruct-Gradient-1048k',
    'Mahou-1.0-llama3-8B',
    'Llama-3-SauerkrautLM-8b-Instruct',
    'Llama-3-Soliloquy-8B-v2'
]
```
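
For the curious, here is a minimal, hypothetical sketch of what a single pairwise normalized Fourier interpolation step between two weight tensors could look like in PyTorch. The function name, the interpolation weight `t`, and the norm-matching step are illustrative assumptions, not the notebook's actual code; the included notebook is the real recipe, including the recursion and the disjoint pairing.

```python
import torch

def fourier_interpolate(a: torch.Tensor, b: torch.Tensor, t: float = 0.5) -> torch.Tensor:
    """Hypothetical sketch: blend two weight tensors in the frequency domain.

    The released notebook is the authoritative recipe; its recursion order,
    disjoint pairing, and normalization details may differ from this.
    """
    # Move both tensors into the frequency domain.
    fa = torch.fft.rfftn(a.float())
    fb = torch.fft.rfftn(b.float())
    # Linearly interpolate the spectra, then transform back to weight space.
    merged = torch.fft.irfftn((1 - t) * fa + t * fb, s=a.shape)
    # Renormalize so the result keeps the interpolated norm of its parents.
    target_norm = (1 - t) * a.norm() + t * b.norm()
    merged = merged * (target_norm / merged.norm())
    return merged.to(a.dtype)
```

Applied recursively over pairs drawn from the checkpoints above, steps like this produce the final merged weights; the actual pairing and ordering live in the notebook.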

I have included the notebook code I used to generate the model, for anyone who is curious. I have adjusted the config for a RoPE scale factor of 4, and the model seems coherent at both 16k and 32k context (see the sketch below for where that setting lives).
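
As a quick orientation (not the authoritative config), this sketch shows where the RoPE scale setting surfaces when loading with transformers; the exact contents of this repo's config.json are the source of truth.

```python
from transformers import AutoConfig

# Sketch only: a 4x linear RoPE scale stretches Llama 3's native 8k
# positions toward a ~32k effective context window.
config = AutoConfig.from_pretrained("maldv/badger-l3-instruct-32k")
print(config.rope_scaling)  # expected to resemble {"type": "linear", "factor": 4.0}
```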

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric                             | Value |
|------------------------------------|------:|
| Avg.                               | 69.49 |
| AI2 Reasoning Challenge (25-Shot)  | 63.65 |
| HellaSwag (10-Shot)                | 81.40 |
| MMLU (5-Shot)                      | 67.13 |
| TruthfulQA (0-shot)                | 55.02 |
| Winogrande (5-shot)                | 77.35 |
| GSM8k (5-shot)                     | 72.40 |

Model size: 8.03B params · Tensor type: BF16 (Safetensors)
