---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- mistralai/Mistral-7B-v0.1
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- maywell/PiVoT-0.1-Evil-a
- mlabonne/ArchBeagle-7B
- LakoMoor/Silicon-Alice-7B
- roleplay
- rp
- not-for-all-audiences
base_model:
- mistralai/Mistral-7B-v0.1
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- maywell/PiVoT-0.1-Evil-a
- mlabonne/ArchBeagle-7B
- LakoMoor/Silicon-Alice-7B
model-index:
- name: Konstanta-Alpha-V2-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 69.62
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-Alpha-V2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 87.14
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-Alpha-V2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.11
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-Alpha-V2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 61.08
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-Alpha-V2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 81.22
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-Alpha-V2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.9
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-Alpha-V2-7B
      name: Open LLM Leaderboard
---

# Konstanta-Alpha-V2-7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was built in two stages. First, the DARE TIES method was used to merge Kunoichi with PiVoT Evil and, separately, ArchBeagle with Silicon Alice. The two resulting models were then merged with the gradient SLERP method. ChatML seems to work best as the prompt format; a reference template is shown below.
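For reference, a standard ChatML prompt looks like the following (this is the general ChatML convention, not something baked into the merge itself; the system line is only an example):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```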
### Models Merged

The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [maywell/PiVoT-0.1-Evil-a](https://huggingface.co/maywell/PiVoT-0.1-Evil-a)
* [mlabonne/ArchBeagle-7B](https://huggingface.co/mlabonne/ArchBeagle-7B)
* [LakoMoor/Silicon-Alice-7B](https://huggingface.co/LakoMoor/Silicon-Alice-7B)

### Configuration

The following YAML configuration was used to produce this model (to reproduce it, pass the file to the `mergekit-mega` command):

```yaml
base_model: mistralai/Mistral-7B-v0.1
dtype: float16
merge_method: dare_ties
parameters:
  int8_mask: true
slices:
- sources:
  - layer_range: [0, 32]
    model: mistralai/Mistral-7B-v0.1
  - layer_range: [0, 32]
    model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    parameters:
      density: 0.8
      weight: 0.5
  - layer_range: [0, 32]
    model: maywell/PiVoT-0.1-Evil-a
    parameters:
      density: 0.3
      weight: 0.15
name: first-step
---
base_model: mistralai/Mistral-7B-v0.1
dtype: float16
merge_method: dare_ties
parameters:
  int8_mask: true
slices:
- sources:
  - layer_range: [0, 32]
    model: mistralai/Mistral-7B-v0.1
  - layer_range: [0, 32]
    model: mlabonne/ArchBeagle-7B
    parameters:
      density: 0.8
      weight: 0.75
  - layer_range: [0, 32]
    model: LakoMoor/Silicon-Alice-7B
    parameters:
      density: 0.6
      weight: 0.30
name: second-step
---
models:
- model: first-step
- model: second-step
merge_method: slerp
base_model: first-step
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.5
  int8_mask: true
  normalize: true
dtype: float16
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Inv__Konstanta-Alpha-V2-7B)

| Metric                           | Value |
|----------------------------------|------:|
| Avg.                             | 72.35 |
| AI2 Reasoning Challenge (25-Shot)| 69.62 |
| HellaSwag (10-Shot)              | 87.14 |
| MMLU (5-Shot)                    | 65.11 |
| TruthfulQA (0-shot)              | 61.08 |
| Winogrande (5-shot)              | 81.22 |
| GSM8k (5-shot)                   | 69.90 |
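# Usage

A minimal inference sketch with `transformers`, assuming the repo id `Inv/Konstanta-Alpha-V2-7B` used by the leaderboard links above; the system prompt and sampling settings are illustrative, not tuned values:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Inv/Konstanta-Alpha-V2-7B"  # repo id assumed from the leaderboard links above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Build a ChatML prompt by hand, since this card recommends ChatML.
# The system prompt here is just a placeholder example.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful roleplay assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Introduce yourself in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```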