Update README.md
README.md
---
license: cc-by-nc-4.0
---

CAMEL-13B-Combined-Data is a chat large language model obtained by fine-tuning the LLaMA-13B model on a total of 229K conversations collected through our [CAMEL](https://arxiv.org/abs/2303.17760) framework, 100K English public conversations from ShareGPT that can be found [here](https://github.com/lm-sys/FastChat/issues/90#issuecomment-1493250773), and 52K instructions from the Alpaca dataset that can be found [here](https://github.com/tatsu-lab/stanford_alpaca/blob/761dc5bfbdeeffa89b8bff5d038781a4055f796a/alpaca_data.json). We evaluate our model offline using EleutherAI's language model evaluation harness, the same harness used by Hugging Face's Open LLM Leaderboard. CAMEL<sup>*</sup>-13B scores an average of **58.1**, outperforming LLaMA-30B and coming in essentially on par with LLaMA-65B (**58.3**)!

| Model             | Size | ARC-C (25-shot, acc_norm) | HellaSwag (10-shot, acc_norm) | MMLU (5-shot, acc_norm) | TruthfulQA (0-shot, mc2) | Average  | Delta (avg. vs. LLaMA-13B) |
|-------------------|:----:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------------:|:--------:|:--------------------------:|
| LLaMA             | 13B  | 50.8                      | 78.9                          | 37.7                    | 39.9                     | 51.8     | -                          |
| Vicuna            | 13B  | 47.4                      | 75.2                          | 39.6                    | 49.8                     | 53.7     | 1.9                        |
| CAMEL<sup>*</sup> | 13B  | 55.5                      | 79.3                          | 50.3                    | 47.3                     | 58.1     | 6.3                        |
| LLaMA             | 65B  | 57.8                      | 84.2                          | 48.8                    | 42.3                     | **58.3** | 6.5                        |
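
To reproduce the scores above, a per-task run of the evaluation harness along the following lines should work. This is a minimal sketch: the `simple_evaluate` API and task names follow recent `lm-eval` releases, and the harness interface has changed across versions, so the exact invocation used for the numbers above may differ.

```python
# Minimal sketch of the offline evaluation with EleutherAI's
# lm-evaluation-harness (pip install lm-eval). The API and task names
# below follow recent harness releases; they are assumptions, not the
# exact invocation used to produce the table above.
from lm_eval import simple_evaluate

MODEL_ARGS = "pretrained=camel-ai/CAMEL-13B-Combined-Data,dtype=float16"

# Each benchmark uses its own few-shot setting, so run them one at a time.
for task, shots in [
    ("arc_challenge", 25),   # ARC-C, acc_norm
    ("hellaswag", 10),       # HellaSwag, acc_norm
    ("mmlu", 5),             # MMLU
    ("truthfulqa_mc2", 0),   # TruthfulQA, mc2
]:
    results = simple_evaluate(
        model="hf",
        model_args=MODEL_ARGS,
        tasks=[task],
        num_fewshot=shots,
    )
    print(task, results["results"][task])
```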
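
For reference, here is a minimal generation example with Hugging Face `transformers`. The repository id `camel-ai/CAMEL-13B-Combined-Data` is assumed; adjust it to wherever your copy of the weights is hosted.

```python
# Minimal sketch: loading the model and generating a reply with
# Hugging Face transformers. The repo id below is an assumption;
# point it at wherever the weights live.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "camel-ai/CAMEL-13B-Combined-Data"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 keeps the 13B weights within ~26 GB
    device_map="auto",
)

prompt = "Give me three tips for staying focused while working remotely."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```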