Update README.md
README.md CHANGED
@@ -32,7 +32,7 @@ We evaluate the model on [RewardBench](https://github.com/allenai/reward-bench):
 | Model | Score | Chat | Chat Hard | Safety | Reasoning |
 |------------------|-------|-------|-----------|--------|-----------|
 | **[Llama 3.1 Tulu 2 8b UF RM](https://huggingface.co/allenai/llama-3.1-tulu-2-8b-uf-mean-rm) (this model)** | 73.3 | 98.0 | 59.6 | 60.6 | 74.7 |
-| [Llama 3.1 Tulu 2 70b UF RM](https://huggingface.co/allenai/llama-3.1-tulu-2-70b-uf-mean-rm) |
+| [Llama 3.1 Tulu 2 70b UF RM](https://huggingface.co/allenai/llama-3.1-tulu-2-70b-uf-mean-rm) | 70.2 | 96.4 | 56.4 | 65.8 | 62.3 |
 
 
 ## Model description