Adding Evaluation Results

#5
by acrastt - opened
Files changed (1)
  1. README.md +13 -0
README.md CHANGED
@@ -21,3 +21,16 @@ Prompt template:
  <leave a newline for the model to answer>
  ```
  GGUF quantizations available [here](https://huggingface.co/maddes8cht/acrastt-Bean-3B-gguf).
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Bean-3B).
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 40.18 |
+ | ARC (25-shot)       | 40.36 |
+ | HellaSwag (10-shot) | 72.0  |
+ | MMLU (5-shot)       | 26.43 |
+ | TruthfulQA (0-shot) | 36.11 |
+ | Winogrande (5-shot) | 65.67 |
+ | GSM8K (5-shot)      | 0.53  |
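
For reviewers: the `Avg.` row appears to be the unweighted mean of the six benchmark scores in the table (the usual Open LLM Leaderboard convention). A minimal sketch to sanity-check the added numbers, assuming simple averaging with no per-task weighting:

```python
# Sanity check for the PR's table: Avg. should equal the unweighted
# mean of the six reported benchmark scores, rounded to two decimals.
scores = {
    "ARC (25-shot)": 40.36,
    "HellaSwag (10-shot)": 72.0,
    "MMLU (5-shot)": 26.43,
    "TruthfulQA (0-shot)": 36.11,
    "Winogrande (5-shot)": 65.67,
    "GSM8K (5-shot)": 0.53,
}

avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 40.18, matching the Avg. row in the diff
```

This reproduces the reported 40.18, so the added row is internally consistent with the per-task scores.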