---
license: apache-2.0
tags:
  - merge
  - mergekit
  - NousResearch/Meta-Llama-3-8B-Instruct
base_model:
  - NousResearch/Meta-Llama-3-8B-Instruct
model-index:
  - name: Aura-llama
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 58.02
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/Aura-llama
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 77.82
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/Aura-llama
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 65.61
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/Aura-llama
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 51.94
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/Aura-llama
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 73.4
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/Aura-llama
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 52.01
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/Aura-llama
          name: Open LLM Leaderboard
---

# Aura-llama-3

*Aura-llama image*

Now that the cute anime girl has your attention.

UPDATE: The model has been fixed.

Aura-llama uses depth up-scaling (DUS), the LLM-scaling methodology presented in the SOLAR paper, which combines architectural modification with continued pretraining. Using the SOLAR paper as a base, I integrated Llama-3 weights into the up-scaled layers, and I plan to continue training the model in the future.

Aura-llama is a self-merge of the following model, used to create a base model to work from:

- NousResearch/Meta-Llama-3-8B-Instruct

## Merged Evals (Has Not Been Finetuned)

Aura-llama

- Avg: 63.13
- ARC: 58.02
- HellaSwag: 77.82
- MMLU: 65.61
- TruthfulQA: 51.94
- Winogrande: 73.40
- GSM8K: 52.01

## 🧩 Configuration


```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 12]
    model: NousResearch/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [8, 20]
    model: NousResearch/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [16, 28]
    model: NousResearch/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [24, 32]
    model: NousResearch/Meta-Llama-3-8B-Instruct
```

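To reproduce the merge, this config can be passed to mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yml ./Aura-llama` (both paths here are placeholders, not from the original card).
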
## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/Aura-llama).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 63.13 |
| AI2 Reasoning Challenge (25-Shot) | 58.02 |
| HellaSwag (10-Shot)               | 77.82 |
| MMLU (5-Shot)                     | 65.61 |
| TruthfulQA (0-shot)               | 51.94 |
| Winogrande (5-shot)               | 73.40 |
| GSM8k (5-shot)                    | 52.01 |
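
As a usage sketch (not part of the original card; the repo id `TheSkullery/Aura-llama` and generation settings are assumptions), the merged model should load like any Llama-3-style checkpoint with transformers:

```python
# Minimal sketch: load the merged model with transformers.
# Assumes the repo id TheSkullery/Aura-llama and the Llama-3 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheSkullery/Aura-llama"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge dtype above
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain depth up-scaling in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```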