Update README.md
README.md CHANGED
@@ -30,21 +30,21 @@ The models are fine-tuned with the WEBINSTRUCT dataset using the original Llama-
## Evaluation
The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:

-| **Model**              | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
-|:-----------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:---------|
-| **MAmmoTH2-7B**        | 26.7 | 34.2 | 67.4 | 34.8 | 60.6 | 60.0 | 81.8 | 52.2 |
-| **MAmmoTH2-8B**        | 29.7 | 33.4 | 67.9 | 38.4 | 61.0 | 60.8 | 81.0 | 53.1 |
-| **MAmmoTH2-8x7B**      | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 |
-| **MAmmoTH2-7B-Plus**   | 29.2 | 45.0 | 84.7 | 36.8 | 64.5 | 63.1 | 83.0 | 58.0 |
-| **MAmmoTH2-8B-Plus**   | 32.5 | 42.8 | 84.1 | 37.3 | 65.7 | 67.8 | 83.4 | 59.1 |
-| **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 |
+| **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
+|:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------|
+| **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 |
+| **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 |
+| **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 |
+| **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 |
+| **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 |
+| **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 |

+To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval.

## Usage
You can use the models through Hugging Face's Transformers library. Use the `pipeline` function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution.
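For example, here is a minimal sketch of that pipeline usage. The Hub model ID `TIGER-Lab/MAmmoTH2-7B-Plus`, the sample problem, and the generation settings are illustrative assumptions; substitute whichever checkpoint and prompt you need.

```python
# Minimal sketch: a text-generation pipeline with a MAmmoTH2 checkpoint.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TIGER-Lab/MAmmoTH2-7B-Plus",  # assumed Hub ID; adjust to your checkpoint
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

problem = "What is the sum of the first 50 positive even integers?"
outputs = generator(problem, max_new_tokens=512, do_sample=False)

# The pipeline returns a list of dicts containing the prompt plus the generated solution.
print(outputs[0]["generated_text"])
```

Greedy decoding (`do_sample=False`) is used here only to keep the output reproducible.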
-Check our GitHub repo for more advanced use:
+Check our GitHub repo for more advanced use: https://github.com/TIGER-AI-Lab/MAmmoTH2

## Limitations
We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem, and not all mathematical fields can be covered comprehensively.