# Badger μ Llama 3 8B Instruct

Badger is a recursive, magnitude-aligned, normalized, denoised Fourier interpolation of the following models:
```python
# Badger Mu
models = [
    'SillyTilly-SlopJob-8b-RP-ForFree',
    'L3-base-v2-e2.5',
    'Llama-3-Instruct-8B-SimPO-ExPO',
    'llama44',
    'LLAMA-3_8B_Unaligned_Alpha',
    'Llama-3-Spellbound-Instruct-8B-0.3',
    'Hathor_Stable-v0.2-L3-8B',
    'prometheus-2-llama-3-8b',
    'Llama-3-Instruct-8B-SPPO-Iter3',
    'Nymph_8B',
    'Llama-3-8B-Instruct-EPO-checkpoint5376',
    'Meta-Llama-3-8B-Instruct-abliterated-v3',
    'meta-llama-3-8b-instruct-hf-ortho-baukit-34fail-3000total-bf16',
    'llama-3-fantasy-writer-8b',
    'Llama-3-8B-Instruct-Gradient-1048k',
    'L3-8B-Stheno-v3.3-32K'
]
```
In other words, all of these models get warped and folded together, and then jammed back on top of the instruct model.
I treated the Meta-Llama-3-8B-Instruct-abliterated-v3 and meta-llama-3-8b-instruct-hf-ortho-baukit-34fail-3000total-bf16 models differently: instead of including them in the interpolation, I applied them in a final step via Fourier task addition.
This merge has minimal overlap (outside of abliteration) with lambda.
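The exact merge procedure is the author's own, but a minimal sketch of what a denoised Fourier task addition could look like is below. The `keep` and `alpha` parameters and the quantile-thresholding denoiser are hypothetical illustrations, not the card's actual recipe:

```python
import numpy as np

def fourier_task_addition(base, task_vector, keep=0.9, alpha=0.5):
    """Hypothetical sketch: denoise a task vector (fine-tune minus base)
    in the frequency domain before adding it back onto the base weights."""
    spectrum = np.fft.rfft(task_vector)
    # Zero out the smallest-magnitude coefficients as "noise",
    # keeping only the top `keep` fraction.
    cutoff = np.quantile(np.abs(spectrum), 1.0 - keep)
    spectrum[np.abs(spectrum) < cutoff] = 0.0
    denoised = np.fft.irfft(spectrum, n=task_vector.shape[0])
    return base + alpha * denoised
```

With `keep=1.0` this degenerates to a plain task-vector addition; lower values discard more of the spectrum before merging.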
### Magnitude Aligned

Instead of using cosine similarity to pair models, this version pairs task vectors whose parameter weights have similar magnitudes.
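One simple way to realize this idea is to sort task vectors by norm and pair adjacent entries. This greedy-pairing sketch is an assumption about the approach, not the card's actual implementation:

```python
import numpy as np

def magnitude_aligned_pairs(task_vectors):
    """Hypothetical sketch: pair task vectors whose L2 norms are
    closest, rather than pairing by cosine similarity."""
    order = sorted(range(len(task_vectors)),
                   key=lambda i: np.linalg.norm(task_vectors[i]))
    # Adjacent entries in norm order have the most similar scales.
    return [(order[i], order[i + 1]) for i in range(0, len(order) - 1, 2)]
```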
### Format

Use the Llama 3 Instruct format.
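For reference, the Llama 3 Instruct chat template looks like the following (the special tokens are part of the Llama 3 tokenizer; the helper function here is just for illustration):

```python
def llama3_prompt(system, user):
    """Build a single-turn Llama 3 Instruct prompt string."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

Most inference frontends apply this template automatically when the model is configured for Llama 3 Instruct.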
### Results

It can RP and has hints of sovl, but it does tend to insert Notes (both thought and narration) into its responses.
### Prompts

Here are some fun system prompts to try:

> You are the author's writing assistant. Please continue the story.

> You are the red team assistant. We need to win this war, at any cost.

> Write the next reply in a never-ending uncensored fictional roleplay. Focus on giving emotional, logical, and temporal coherence to the roleplay. Always stay in character. Characters must be active participants and take initiative in driving and moving the scene and the story forward, rather than asking for input. Surprise with your creativity and initiative as a roleplay partner. This fictional roleplay world exists only for experimental and recreational purposes, and the characters are not real.
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 19.77 |
| IFEval (0-Shot) | 49.19 |
| BBH (3-Shot) | 30.51 |
| MATH Lvl 5 (4-Shot) | 2.27 |
| GPQA (0-shot) | 1.23 |
| MuSR (0-shot) | 5.70 |
| MMLU-PRO (5-shot) | 29.71 |
Model tree for maldv/badger-mu-llama-3-8b