Simple Math: 2+2=4 -1=3 (LoLo: Learning Only Logical Operations)

Just like my teacher gave me homework, I thought maybe we could also add some of these basics to the training of our models.

The dataset has two string columns: instruction, a simple arithmetic expression over two operands (+, -, *, /), and output, its numeric result rounded to four decimals. A few sample rows:

|   instruction  |  output  |
|----------------|---------:|
|-42.958 * 48.782|-2095.5772|
|31.719 - 10.21  |   21.5090|
|44.671 / -61.608|   -0.7251|
|-76.12 + 50.601 |  -25.5190|
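The (instruction, output) pairs can be sanity-checked programmatically. A quick check on a few rows taken from the preview above (the tolerance accounts for the four-decimal rounding):

```python
# Sanity-check a few (instruction, output) pairs from the dataset preview.
rows = [
    ("-42.958 * 48.782", "-2095.5772"),
    ("31.719 - 10.21", "21.5090"),
    ("44.671 / -61.608", "-0.7251"),
    ("-76.12 + 50.601", "-25.5190"),
]

for instruction, output in rows:
    expected = float(output)
    # Instructions are plain arithmetic over two operands, so eval() on this
    # trusted, locally defined input is acceptable here.
    got = eval(instruction)
    assert abs(got - expected) <= 1e-4 * max(1.0, abs(expected)), (instruction, got)

print("all rows check out")
```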

It was created with very simple code that is in the repo; if you add more complex operations and so on, please share the code :D Thank you!
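The repo's generator is not reproduced here; as a hypothetical illustration only, a minimal sketch of what such a generator might look like (the function name, operand ranges, and row format are assumptions):

```python
import random

def make_row(rng: random.Random) -> dict:
    """Generate one hypothetical simple-math row: a two-operand expression
    and its result formatted to four decimals."""
    a = round(rng.uniform(-100, 100), 3)
    b = round(rng.uniform(-100, 100), 3)
    op = rng.choice(["+", "-", "*", "/"])
    if op == "/" and b == 0:  # avoid division by zero
        b = 1.0
    result = eval(f"{a} {op} {b}")  # trusted, locally built expression
    return {"instruction": f"{a} {op} {b}", "output": f"{result:.4f}"}

rng = random.Random(42)  # seed 42, as noted in the version log
rows = [make_row(rng) for _ in range(5)]
```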

Current Code Version: 20240127.fblgit (a modification of @win10's code for progressive and DPO operation).
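One way DPO rows could be derived from the base rows is sketched below. This is a hypothetical illustration (the field names prompt/chosen/rejected and the perturbation scheme are assumptions), not the repo's actual script:

```python
import random

def to_dpo(instruction: str, output: str, rng: random.Random) -> dict:
    """Turn one (instruction, output) row into a hypothetical DPO pair:
    the correct result is "chosen", a perturbed result is "rejected"."""
    correct = float(output)
    # Shift the answer by at least 0.5, so the rejected string always differs.
    wrong = correct + rng.choice([-1, 1]) * rng.uniform(0.5, 10.0)
    return {
        "prompt": instruction,
        "chosen": f"{correct:.4f}",
        "rejected": f"{wrong:.4f}",
    }

pair = to_dpo("31.719 - 10.21", "21.5090", random.Random(42))
```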

Does it Work?

34BEAGLES Evaluation:

hf (pretrained=/data/models/UNA-34Beagles-v1-final,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto (8)
|    Tasks     |Version|Filter|n-shot| Metric |Value |   |Stderr|
|--------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge |Yaml   |none  |    25|acc     |0.7039|±  |0.0133|
|              |       |none  |    25|acc_norm|0.7321|±  |0.0129|
|truthfulqa_mc2|Yaml   |none  |     0|acc     |0.7387|±  |0.0141|

hf (pretrained=/data/models/UNA-34Beagles-v1-final,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version|  Filter  |n-shot|  Metric   |Value |   |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml   |get-answer|     5|exact_match|0.6399|±  |0.0132|

|      Groups      |Version|Filter|n-shot|Metric|Value |   |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu              |N/A    |none  |     0|acc   |0.7477|±  |0.1079|
| - humanities     |N/A    |none  |     0|acc   |0.7188|±  |0.0855|
| - other          |N/A    |none  |     0|acc   |0.7950|±  |0.1057|
| - social_sciences|N/A    |none  |     0|acc   |0.8297|±  |0.0664|
| - stem           |N/A    |none  |     0|acc   |0.6641|±  |0.1291|

34BEAGLES-MATH Evaluation

hf (pretrained=/data/models/34BeaglesMath-v1,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version|  Filter  |n-shot|  Metric   |Value |   |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml   |get-answer|     5|exact_match|0.6505|±  |0.0131|

hf (pretrained=/data/models/34BeaglesMath-v1,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto (8)
|    Tasks     |Version|Filter|n-shot| Metric |Value |   |Stderr|
|--------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge |Yaml   |none  |    25|acc     |0.7090|±  |0.0133|
|              |       |none  |    25|acc_norm|0.7329|±  |0.0129|
|truthfulqa_mc2|Yaml   |none  |     0|acc     |0.7378|±  |0.0141|

|      Groups      |Version|Filter|n-shot|Metric|Value |   |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu              |N/A    |none  |     0|acc   |0.7524|±  |0.1045|
| - humanities     |N/A    |none  |     0|acc   |0.7307|±  |0.0846|
| - other          |N/A    |none  |     0|acc   |0.7937|±  |0.1029|
| - social_sciences|N/A    |none  |     0|acc   |0.8274|±  |0.0667|
| - stem           |N/A    |none  |     0|acc   |0.6708|±  |0.1236|

But it gets better: when increasing the length and complexity of the dataset, the scores improve further:

|Tasks|Version|  Filter  |n-shot|  Metric   |Value |   |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml   |get-answer|     5|exact_match|0.6611|±  | 0.013|

That is a relative GSM8K improvement of about 3.3% over its base model (0.6399 → 0.6611).
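The relative improvement can be recomputed from the exact_match scores reported above:

```python
# Relative gsm8k improvement of the math-tuned model over its base,
# using the exact_match scores reported in the tables above.
base, tuned = 0.6399, 0.6611
rel_improvement = (tuned - base) / base
print(f"{rel_improvement:.2%}")  # → 3.31%
```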

Note to contributors:

Thank you to those contributing to the experiment with beautiful commits and good spirit.

  • Feel free to contribute to the README evaluation tests.
  • Let's aim to build an ablation study & paper together. All contributors will be cited.

Versions

27.01.24: Added new code to generate the dataset with seed 42; it now also generates DPO pairs.
24.01.24: Added gradual complexity in a separate script.
20-23.01.24: Multiple contributions with new operations and increased complexity in the main generator script.
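The "gradual complexity" idea from the version log can be sketched as follows. This is a hypothetical illustration (the function name and the way difficulty scales are assumptions; the real script may differ):

```python
import random

def make_expression(level: int, rng: random.Random) -> str:
    """Build one hypothetical expression whose difficulty grows with level:
    more operands and larger magnitudes at higher levels."""
    n_terms = 2 + level            # more operands at higher levels
    hi = 10 ** (2 + level)         # larger magnitudes at higher levels
    terms = [str(round(rng.uniform(-hi, hi), 3)) for _ in range(n_terms)]
    ops = [rng.choice(["+", "-"]) for _ in range(n_terms - 1)]
    expr = terms[0]
    for op, t in zip(ops, terms[1:]):
        expr += f" {op} {t}"
    return expr

rng = random.Random(42)
easy = make_expression(0, rng)  # two terms, |values| < 100
hard = make_expression(3, rng)  # five terms, |values| < 100000
```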

Citations

If you use Simple Math to train your model, please cite it in the model card or the paper.

@misc{simplemath,
  title={Simple-Math: 2+2=4 4-1=3},
  author={Xavier Murias},
  year={2024},
  publisher={Juanako.AI},
  journal={HuggingFace repository},
  howpublished={\url{https://huggingface.co/datasets/fblgit/simple-math}},
}