
Update 2023-12-19

In light of the dataset contamination issue raised by the community in recent days concerning several of the merged models, in particular berkeley-nest/Starling-LM-7B-alpha, Q-bert/MetaMath-Cybertron-Starling, and janai-hq/trinity-v1, we decided to remake the model without them. Additionally, their CC-BY-NC-4.0 license is restrictive and thus not suitable for an open model.

Open LLM Leaderboard

For reference, this model obtained an average score of 72.88.

| Metric     | Value |
|------------|------:|
| Average    | 72.88 |
| ARC        | 68.86 |
| HellaSwag  | 87.01 |
| MMLU       | 65.05 |
| TruthfulQA | 64.19 |
| Winogrande | 81.69 |
| GSM8K      | 70.51 |

Model Description

This is an experiment to test merging 14 models using DARE TIES 🦙

The merged model is then merged again with janai-hq/trinity-v1 using Gradient SLERP; a sketch of the first-stage DARE TIES config and the actual SLERP config are shown after the model list below. The result is a base model that performs quite well but requires some further instruction fine-tuning.

The 14 models are as follows:

  1. mistralai/Mistral-7B-Instruct-v0.2
  2. ehartford/dolphin-2.2.1-mistral-7b
  3. SciPhi/SciPhi-Mistral-7B-32k
  4. ehartford/samantha-1.2-mistral-7b
  5. Arc53/docsgpt-7b-mistral
  6. berkeley-nest/Starling-LM-7B-alpha
  7. Q-bert/MetaMath-Cybertron-Starling
  8. Open-Orca/Mistral-7B-OpenOrca
  9. v1olet/v1olet_marcoroni-go-bruins-merge-7B
  10. beowolx/MistralHermes-CodePro-7B-v1
  11. TIGER-Lab/MAmmoTH-7B-Mistral
  12. teknium/OpenHermes-2.5-Mistral-7B
  13. Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
  14. mlabonne/NeuralHermes-2.5-Mistral-7B
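
The config for the first-stage DARE TIES merge is not included in this card. As a rough orientation, a mergekit dare_ties config over these models could look like the sketch below; the density and weight values, the int8_mask flag, and the use of mistralai/Mistral-7B-Instruct-v0.2 as the base model are illustrative assumptions rather than the actual settings.

```yaml
# Illustrative sketch only: the density/weight values and the base model
# choice are assumptions, not the settings used for Mistral-7B-Merge-14-v0.
models:
  - model: mistralai/Mistral-7B-Instruct-v0.2
    # base model: no density/weight parameters needed
  - model: teknium/OpenHermes-2.5-Mistral-7B
    parameters:
      density: 0.5   # fraction of delta weights kept after DARE dropout
      weight: 0.3    # scaling of this model's task vector in the merge
  - model: Open-Orca/Mistral-7B-OpenOrca
    parameters:
      density: 0.5
      weight: 0.3
  # ... the remaining models follow the same pattern
merge_method: dare_ties
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  int8_mask: true
dtype: bfloat16
```

DARE randomly drops a fraction of each model's delta weights and rescales the rest, and the TIES step then resolves sign conflicts before the sparse task vectors are added back onto the base model.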

The YAML config for the final SLERP merge is as follows:

```yaml
slices:
  - sources:
      - model: janai-hq/trinity-v1
        layer_range: [0, 32]
      - model: EmbeddedLLM/Mistral-7B-Merge-14-v0
        layer_range: [0, 32]
merge_method: slerp
base_model: janai-hq/trinity-v1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
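
Configs like the ones above are consumed by mergekit; running its `mergekit-yaml` entry point (for example `mergekit-yaml config.yaml ./merged-model`) on a recent install should produce a comparable merge, although the exact mergekit version and command-line options used for this model are not documented here.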