wonhosong commited on
Commit 5547af9
1 Parent(s): c3d1afa

Update README.md

Files changed (1)
  1. README.md +10 -12
README.md CHANGED
@@ -51,18 +51,16 @@ We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-
 
 
 ### Main Results
-| Model | H4 Average | ARC | HellaSwag | MMLU | TruthfulQA | | MT_Bench |
-|-----------------------------------------------|---------|-------|-----------|-------|------------|-------|----------|
-| [Llama-2-70b-instruct-v2 (Ours, Local Reproduction)](https://huggingface.co/upstage/Llama-2-70b-instruct-v2) | 72.7 | 71.6 | 87.7 | 69.7 | 61.6 | | 7.440625 |
-| **Llama-2-70b-instruct (Ours, Open LLM Leaderboard)** | **72.3** | **70.9** | **87.5** | **69.8** | **61.0** | | |
-| **Llama-2-70b-instruct (Ours, Local Reproduction)** | **72.0** | **70.7** | **87.4** | **69.3** | **60.7** | | **7.24375** |
-| llama-65b-instruct (Ours, Local Reproduction) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | | |
-| Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 | | |
-| [llama-30b-instruct-2048 (Ours, Open LLM Leaderboard)](https://huggingface.co/upstage/llama-30b-instruct) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 | | |
-| llama-30b-instruct-2048 (Ours, Local Reproduction) | 67.0 | 64.9 | 85.0 | 61.9 | 56.0 | | 6.88125 |
-| llama-30b-instruct (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | | |
-| [llama-65b](https://huggingface.co/upstage/llama-65b-instruct) | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | | |
-| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | | |
+| Model | H4(Avg) | ARC | HellaSwag | MMLU | TruthfulQA | | MT_Bench |
+|--------------------------------------------------------------------|----------|----------|----------|------|----------|-|-------------|
+| **[Llama-2-70b-instruct-v2](https://huggingface.co/upstage/Llama-2-70b-instruct-v2)**(***Ours***, ***Local Reproduction***) | **72.7** | **71.6** | **87.7** | 69.7 | **61.6** | | **7.44063** |
+| [Llama-2-70b-instruct](https://huggingface.co/upstage/Llama-2-70b-instruct) (Ours, Open LLM Leaderboard) | 72.3 | 70.9 | 87.5 | **69.8** | 61 | | 7.24375 |
+| [llama-65b-instruct](https://huggingface.co/upstage/llama-65b-instruct) (Ours, Open LLM Leaderboard) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | | |
+| Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 | | |
+| [llama-30b-instruct-2048](https://huggingface.co/upstage/llama-30b-instruct-2048) (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 | | |
+| [llama-30b-instruct](https://huggingface.co/upstage/llama-30b-instruct) (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | | |
+| llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | | |
+| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | | |
 
 ### Scripts
 - Prepare evaluation environments: