---
language:
- en
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- yuvraj17/Llama-3-8B-spectrum-25
- ruggsea/Llama3-stanford-encyclopedia-philosophy-QA
- arcee-ai/Llama-3.1-SuperNova-Lite
base_model:
- yuvraj17/Llama-3-8B-spectrum-25
- ruggsea/Llama3-stanford-encyclopedia-philosophy-QA
- arcee-ai/Llama-3.1-SuperNova-Lite
pipeline_tag: text-generation
model-index:
- name: Llama3-8B-SuperNova-Spectrum-dare_ties
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 40.13
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 23.49
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 7.4
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 3.36
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 11.0
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 28.6
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties
      name: Open LLM Leaderboard
---

# Llama3-8B-SuperNova-Spectrum-dare_ties

Llama3-8B-SuperNova-Spectrum-dare_ties is a `dare_ties` merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [yuvraj17/Llama-3-8B-spectrum-25](https://huggingface.co/yuvraj17/Llama-3-8B-spectrum-25)
* [ruggsea/Llama3-stanford-encyclopedia-philosophy-QA](https://huggingface.co/ruggsea/Llama3-stanford-encyclopedia-philosophy-QA)
* [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)
## DARE_TIES Merging

### TIES Merging

[TIES](https://arxiv.org/abs/2306.01708) merging, introduced by Yadav et al. (2023), is a method for merging multiple specialized models into one general-purpose model. It addresses two key challenges:

* **Redundancy Removal**: Identifies and eliminates overlapping or unnecessary information between models, making the final model more efficient.
* **Conflict Resolution**: Reconciles differences between models by creating a unified sign vector that represents the most dominant direction of change across all models.

**TIES** stands for **T**r**I**m, **E**lect **S**ign & Merge (TIES-Merging).

<figure>
  <img src="https://cdn-uploads.huggingface.co/production/uploads/66137d95e8d2cda230ddcea6/2vBgcGko-tcsaAkLUzHnU.png" width="1000" height="768">
  <figcaption>How TIES-Merging works (<a href="https://arxiv.org/abs/2306.01708">reference</a>)</figcaption>
</figure>
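
To make those three steps concrete, here is a toy sketch of trim, elect sign, and merge applied to per-model task vectors (the deltas between each fine-tuned model and the base). The function name and the top-k trimming are my own simplification for illustration, not mergekit's actual implementation:

```python
# Toy illustration of TIES-Merging on task vectors (fine-tuned minus base
# weights). Hypothetical helper; mergekit's real implementation differs.
import torch

def ties_merge(task_vectors, density=0.5):
    # 1. Trim: keep only the top-`density` fraction of entries by magnitude.
    trimmed = []
    for tv in task_vectors:
        k = max(1, int(density * tv.numel()))
        threshold = tv.abs().flatten().topk(k).values.min()
        trimmed.append(torch.where(tv.abs() >= threshold, tv, torch.zeros_like(tv)))
    stacked = torch.stack(trimmed)

    # 2. Elect sign: per parameter, the sign of the summed deltas wins.
    elected_sign = torch.sign(stacked.sum(dim=0))

    # 3. Merge: average only the entries whose sign agrees with the election.
    agrees = torch.sign(stacked) == elected_sign
    return (stacked * agrees).sum(dim=0) / agrees.sum(dim=0).clamp(min=1)

# Three toy "models", each an 8-parameter delta from the base.
deltas = [torch.randn(8) for _ in range(3)]
merged_delta = ties_merge(deltas, density=0.5)  # added back onto base weights
print(merged_delta)
```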
### DARE Merging

Introduced by Yu et al. (2023), [DARE](https://arxiv.org/abs/2311.03099) uses an approach similar to TIES, with two main differences:

* **Weight Pruning**: Randomly resets some fine-tuned weights to their original (base-model) values, reducing model complexity.
* **Weight Scaling**: Rescales the remaining weights and combines them with the base model's weights to maintain consistent performance.

**DARE** stands for **D**rop **A**nd **RE**scale.

Mergekit's implementation of DARE merging comes in two flavours: with the sign-election step of TIES (`dare_ties`) or without it (`dare_linear`). I have chosen `dare_ties` for this merge.
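
In the same spirit, here is a minimal sketch of DARE's drop-and-rescale on a single task vector. The `drop_rate` parameter is my own illustrative name; mergekit expresses the same idea through its `density` parameter (density = 1 − drop rate):

```python
# Toy illustration of DARE: randomly drop delta entries, rescale survivors.
import torch

def dare(delta, drop_rate=0.9):
    # Drop: reset a random fraction of the delta to zero (i.e., back to the
    # base model's values).
    mask = torch.rand_like(delta) >= drop_rate
    # Rescale: divide the survivors by the keep probability so the merged
    # delta keeps the same expected value.
    return delta * mask / (1.0 - drop_rate)

delta = torch.randn(8)
print(dare(delta, drop_rate=0.9))
```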

For more information, refer to [Merge Large Language Models with MergeKit](https://towardsdatascience.com/merge-large-language-models-with-mergekit-2118fb392b54) by Maxime Labonne.

For an in-depth look at model merging and its different types, I also highly recommend this [YouTube video by Julien Simon](https://youtu.be/cvOpX75Kz4M?si=d5crVWSxcjvNUm6a).

## 🧩 Configuration

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # No parameters necessary for base model
  - model: yuvraj17/Llama-3-8B-spectrum-25
    parameters:
      density: 0.56
      weight: 0.12
  - model: ruggsea/Llama3-stanford-encyclopedia-philosophy-QA
    parameters:
      density: 0.56
      weight: 0.12
  - model: arcee-ai/Llama-3.1-SuperNova-Lite
    parameters:
      density: 0.58
      weight: 0.55
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
dtype: bfloat16
```
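
Here, `density` is the fraction of each model's delta weights that DARE keeps, and `weight` is that model's contribution to the weighted merge. To reproduce the merge locally, save the config (the filename `config.yaml` below is just an example) and run mergekit's `mergekit-yaml` entry point, shown notebook-style to match the usage block below:

```python
!pip install -qU mergekit
# Merge according to the YAML above; the output directory name is arbitrary.
!mergekit-yaml config.yaml ./merged-model --copy-tokenizer
```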

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

> A large language model is a type of artificial intelligence (AI) model designed to understand and generate human language. It is trained on a massive corpus of text data, which it uses to learn patterns and relationships between words and concepts.
>
> Large language models are typically based on a deep learning approach called transformer architecture, which was introduced by the Google research paper "Attention Is All You Need" (2017). These models are designed to handle the complexity of natural language by capturing long-range dependencies and contextual relationships between words.
>
> Large language models can perform a variety of tasks, including:
> - Natural language processing (NLP): large language models can understand and generate text, and can be used for tasks such as text classification, sentiment analysis, and named entity recognition.
> - Text generation: large language models can generate human-like text, such as chatbots, language translation, and text summarization.
> - Question answering: large language models can answer questions based on the text they have been trained on.
> - Conversational AI: large language models can be used to create conversational agents that can understand and respond to user input.
## 🏆 Evaluation Scores

### Nous

| Model |AGIEval|TruthfulQA|Bigbench|
|----------------------------------------------------------------------------------------------------------------|------:|---------:|-------:|
|[Llama3-8B-SuperNova-Spectrum-dare_ties](https://huggingface.co/yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties)| 38.32| 57.15| 43.91|

### AGIEval

| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |20.47|± | 2.54|
| | |acc_norm|18.50|± | 2.44|
|agieval_logiqa_en | 0|acc |35.94|± | 1.88|
| | |acc_norm|35.64|± | 1.88|
|agieval_lsat_ar | 0|acc |21.74|± | 2.73|
| | |acc_norm|20.00|± | 2.64|
|agieval_lsat_lr | 0|acc |41.37|± | 2.18|
| | |acc_norm|40.98|± | 2.18|
|agieval_lsat_rc | 0|acc |59.11|± | 3.00|
| | |acc_norm|56.13|± | 3.03|
|agieval_sat_en | 0|acc |63.59|± | 3.36|
| | |acc_norm|60.19|± | 3.42|
|agieval_sat_en_without_passage| 0|acc |40.29|± | 3.43|
| | |acc_norm|37.38|± | 3.38|
|agieval_sat_math | 0|acc |38.64|± | 3.29|
| | |acc_norm|37.73|± | 3.28|

Average: 38.32%

### TruthfulQA

| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |38.43|± | 1.7|
| | |mc2 |57.15|± | 1.5|

Average: 57.15%

### Bigbench

| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|58.42|± | 3.59|
|bigbench_date_understanding | 0|multiple_choice_grade|70.73|± | 2.37|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|30.23|± | 2.86|
|bigbench_geometric_shapes | 0|multiple_choice_grade|47.35|± | 2.64|
| | |exact_str_match | 0.00|± | 0.00|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|29.00|± | 2.03|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|21.00|± | 1.54|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|51.33|± | 2.89|
|bigbench_movie_recommendation | 0|multiple_choice_grade|33.20|± | 2.11|
|bigbench_navigate | 0|multiple_choice_grade|55.40|± | 1.57|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|66.35|± | 1.06|
|bigbench_ruin_names | 0|multiple_choice_grade|45.76|± | 2.36|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|28.26|± | 1.43|
|bigbench_snarks | 0|multiple_choice_grade|62.43|± | 3.61|
|bigbench_sports_understanding | 0|multiple_choice_grade|50.30|± | 1.59|
|bigbench_temporal_sequences | 0|multiple_choice_grade|48.00|± | 1.58|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|23.60|± | 1.20|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|17.66|± | 0.91|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|51.33|± | 2.89|

Average: 43.91%

## Special thanks & Reference

- Maxime Labonne for the easy-to-use Colab notebook [Merging LLMs with MergeKit](https://github.com/mlabonne/llm-course/blob/main/Mergekit.ipynb), the accompanying [blog post](https://towardsdatascience.com/merge-large-language-models-with-mergekit-2118fb392b54), and the [LLM-AutoEval notebook](https://github.com/mlabonne/llm-autoeval)
- The authors of [Mergekit](https://github.com/arcee-ai/mergekit)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_yuvraj17__Llama3-8B-SuperNova-Spectrum-dare_ties).

| Metric |Value|
|-------------------|----:|
|Avg. |19.00|
|IFEval (0-Shot) |40.13|
|BBH (3-Shot) |23.49|
|MATH Lvl 5 (4-Shot)| 7.40|
|GPQA (0-shot) | 3.36|
|MuSR (0-shot) |11.00|
|MMLU-PRO (5-shot) |28.60|