leaderboard-pr-bot committed
Commit d0239ae
1 Parent(s): 983f8ad

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
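For context, the leaderboard scores land in the README's YAML front matter as a `model-index` block (shown in the diff below), which the Hub renders as evaluation results on the model page. As a minimal sketch of how such a block can be read back out, assuming a local `README.md` that follows the front-matter layout in this diff (the file path and helper name are illustrative, not part of this PR):

```python
import yaml

def read_model_index(readme_path="README.md"):
    """Parse a model card's YAML front matter and return its model-index entries."""
    text = open(readme_path, encoding="utf-8").read()
    # The front matter sits between the first two '---' delimiters.
    _, front_matter, _ = text.split("---", 2)
    metadata = yaml.safe_load(front_matter)
    return metadata.get("model-index", [])

for entry in read_model_index():
    for result in entry.get("results", []):
        dataset = result["dataset"]["name"]
        for metric in result["metrics"]:
            print(f"{dataset}: {metric['type']} = {metric['value']}")
```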

Files changed (1)
  1. README.md +120 -5
README.md CHANGED
@@ -1,19 +1,121 @@
 ---
 language:
 - en
-thumbnail: null
+license: llama2
 tags:
 - text generation
 - instruct
-pipeline_tag: text-generation
-inference: false
-license: llama2
 datasets:
 - PygmalionAI/PIPPA
 - Open-Orca/OpenOrca
 - Norquinal/claude_multiround_chat_30k
 - jondurbin/airoboros-gpt4-1.4.1
 - databricks/databricks-dolly-15k
+pipeline_tag: text-generation
+inference: false
+model-index:
+- name: pygmalion-2-7b
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 54.01
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PygmalionAI/pygmalion-2-7b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 78.23
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PygmalionAI/pygmalion-2-7b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 49.11
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PygmalionAI/pygmalion-2-7b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 43.78
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PygmalionAI/pygmalion-2-7b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 75.14
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PygmalionAI/pygmalion-2-7b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 6.37
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PygmalionAI/pygmalion-2-7b
+      name: Open LLM Leaderboard
 ---
 <h1 style="text-align: center">Pygmalion-2 7B</h1>
 <h2 style="text-align: center">An instruction-tuned Llama-2 biased towards fiction writing and conversation.</h2>
@@ -67,4 +169,17 @@ Outputs might often be factually wrong or misleading.
 ## Acknowledgements
 We would like to thank [SpicyChat](https://spicychat.ai/) for sponsoring the training for this model.
 
-[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PygmalionAI__pygmalion-2-7b)
+
+| Metric                           | Value |
+|----------------------------------|------:|
+| Avg.                             | 51.11 |
+| AI2 Reasoning Challenge (25-Shot)| 54.01 |
+| HellaSwag (10-Shot)              | 78.23 |
+| MMLU (5-Shot)                    | 49.11 |
+| TruthfulQA (0-shot)              | 43.78 |
+| Winogrande (5-shot)              | 75.14 |
+| GSM8k (5-shot)                   |  6.37 |
+
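The Avg. row added by this PR is the plain mean of the six benchmark scores. A short arithmetic check, using the values copied from the table above:

```python
# Benchmark scores from the Open LLM Leaderboard table for PygmalionAI/pygmalion-2-7b.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 54.01,
    "HellaSwag (10-Shot)": 78.23,
    "MMLU (5-Shot)": 49.11,
    "TruthfulQA (0-shot)": 43.78,
    "Winogrande (5-shot)": 75.14,
    "GSM8k (5-shot)": 6.37,
}

# Unweighted mean over the six benchmarks.
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 51.11, matching the Avg. row
```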