Adding Evaluation Results

#2
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -114,4 +114,17 @@ unfiltered content from the internet, which is far from neutral. As the openAI t
  > not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
  > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
  > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
- > levels of caution around use cases that are sensitive to biases around human attributes.
+ > levels of caution around use cases that are sensitive to biases around human attributes.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_crumb__gpt2023)
+
+ | Metric | Value |
+ |-----------------------|---------------------------|
+ | Avg. | 24.85 |
+ | ARC (25-shot) | 21.93 |
+ | HellaSwag (10-shot) | 31.11 |
+ | MMLU (5-shot) | 25.05 |
+ | TruthfulQA (0-shot) | 40.71 |
+ | Winogrande (5-shot) | 50.12 |
+ | GSM8K (5-shot) | 0.3 |
+ | DROP (3-shot) | 4.73 |
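The Avg. row in the added table appears to be the unweighted arithmetic mean of the seven per-benchmark scores. A quick sanity check of that assumption, using the values from the diff above:

```python
# Scores copied from the table added in this PR.
scores = {
    "ARC (25-shot)": 21.93,
    "HellaSwag (10-shot)": 31.11,
    "MMLU (5-shot)": 25.05,
    "TruthfulQA (0-shot)": 40.71,
    "Winogrande (5-shot)": 50.12,
    "GSM8K (5-shot)": 0.3,
    "DROP (3-shot)": 4.73,
}

# Unweighted mean across the seven benchmarks (an assumption about
# how the leaderboard computes "Avg."; not confirmed by this PR).
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 24.85, matching the table's Avg. row
```

The rounded mean matches the reported 24.85, which supports reading "Avg." as a plain mean rather than a weighted aggregate.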