Gemma-7B-slerp is a merge of the Gemma 7B base model (google/gemma-7b) and its instruction-tuned variant (google/gemma-7b-it), created with mergekit using the SLERP (spherical linear interpolation) merge method.
Gemma-7B-slerp's results on Nous' benchmark suite (evaluation performed using LLM AutoEval):
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---|---|---|---|---|
| arcee-ai/Gemma-7B-slerp | 34.14 | 23.86 | 36.55 | 46.22 | 29.94 |
## Slerp YAML Config

```yaml
slices:
  - sources:
      - model: google/gemma-7b-it
        layer_range: [0, 28]
      - model: google/gemma-7b
        layer_range: [0, 28]
merge_method: slerp
base_model: google/gemma-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
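The SLERP method interpolates each weight tensor along the arc between the two models rather than along a straight line, which better preserves the geometry of the weights when the models point in different directions. A minimal NumPy sketch of per-tensor spherical linear interpolation (the function name and the `eps` fallback are illustrative; mergekit's actual implementation operates on torch tensors and handles more edge cases):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherically interpolate between tensors v0 and v1 at fraction t in [0, 1]."""
    # Normalize copies to measure the angle between the two tensors.
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.sum(u0 * u1), -1.0, 1.0)
    omega = np.arccos(dot)  # angle between the tensors
    so = np.sin(omega)
    # Nearly parallel tensors: fall back to plain linear interpolation.
    if np.abs(so) < eps:
        return (1.0 - t) * v0 + t * v1
    # Standard slerp formula: weights follow the arc, not the chord.
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(0.5, a, b)  # midpoint on the arc: [0.7071..., 0.7071...]
```

In the config above, `t` is not a single constant: the `value` lists define a gradient of interpolation factors across the layer range, with separate schedules for the self-attention and MLP weights and a default of 0.5 for everything else.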
Base model: google/gemma-7b