Dataset schema (column name, dtype, and observed range in the preview):

| Column | Type | Values / Range |
| --- | --- | --- |
| model_type | stringclasses | 2 values |
| model | stringlengths | 19 – 49 |
| AVG | float64 | 0.14 – 0.67 |
| CG | float64 | 0 – 0.6 |
| EL | float64 | 0 – 0.59 |
| FA | float64 | 0.03 – 0.34 |
| HE | float64 | 0 – 0.78 |
| MC | float64 | 0 – 0.91 |
| MR | float64 | 0 – 0.94 |
| MT | float64 | 0.47 – 0.86 |
| NLI | float64 | 0 – 0.81 |
| QA | float64 | 0.11 – 0.72 |
| RC | float64 | 0.45 – 0.93 |
| SUM | float64 | 0.01 – 0.14 |
| aio_char_f1 | float64 | 0.07 – 0.86 |
| alt-e-to-j_bert_score_ja_f1 | float64 | 0.63 – 0.88 |
| alt-e-to-j_bleu_ja | float64 | 2.21 – 15.3 |
| alt-e-to-j_comet_wmt22 | float64 | 0.5 – 0.92 |
| alt-j-to-e_bert_score_en_f1 | float64 | 0.78 – 0.96 |
| alt-j-to-e_bleu_en | float64 | 3.17 – 19.4 |
| alt-j-to-e_comet_wmt22 | float64 | 0.45 – 0.89 |
| chabsa_set_f1 | float64 | 0 – 0.59 |
| commonsensemoralja_exact_match | float64 | 0 – 0.93 |
| jamp_exact_match | float64 | 0 – 0.68 |
| janli_exact_match | float64 | 0 – 0.87 |
| jcommonsenseqa_exact_match | float64 | 0 – 0.96 |
| jemhopqa_char_f1 | float64 | 0.01 – 0.64 |
| jmmlu_exact_match | float64 | 0 – 0.75 |
| jnli_exact_match | float64 | 0 – 0.9 |
| jsem_exact_match | float64 | 0 – 0.81 |
| jsick_exact_match | float64 | 0 – 0.87 |
| jsquad_char_f1 | float64 | 0.45 – 0.93 |
| jsts_pearson | float64 | -0.09 – 0.9 |
| jsts_spearman | float64 | -0.09 – 0.88 |
| kuci_exact_match | float64 | 0 – 0.84 |
| mawps_exact_match | float64 | 0 – 0.94 |
| mbpp_code_exec | float64 | 0 – 0.6 |
| mbpp_pylint_check | float64 | 0 – 0.97 |
| mmlu_en_exact_match | float64 | 0 – 0.81 |
| niilc_char_f1 | float64 | 0.1 – 0.66 |
| wiki_coreference_set_f1 | float64 | 0 – 0.1 |
| wiki_dependency_set_f1 | float64 | 0 – 0.47 |
| wiki_ner_set_f1 | float64 | 0 – 0.13 |
| wiki_pas_set_f1 | float64 | 0 – 0.11 |
| wiki_reading_char_f1 | float64 | 0.17 – 0.91 |
| wikicorpus-e-to-j_bert_score_ja_f1 | float64 | 0.59 – 0.85 |
| wikicorpus-e-to-j_bleu_ja | float64 | 1.81 – 16.8 |
| wikicorpus-e-to-j_comet_wmt22 | float64 | 0.42 – 0.85 |
| wikicorpus-j-to-e_bert_score_en_f1 | float64 | 0.77 – 0.91 |
| wikicorpus-j-to-e_bleu_en | float64 | 2.61 – 12.9 |
| wikicorpus-j-to-e_comet_wmt22 | float64 | 0.41 – 0.77 |
| xlsum_ja_bert_score_ja_f1 | float64 | 0.59 – 0.72 |
| xlsum_ja_bleu_ja | float64 | 0.4 – 4.15 |
| xlsum_ja_rouge1 | float64 | 8.84 – 37.7 |
| xlsum_ja_rouge2 | float64 | 1.35 – 14.4 |
| xlsum_ja_rouge2_scaling | float64 | 0.01 – 0.14 |
| xlsum_ja_rougeLsum | float64 | 7.54 – 30.3 |
| architecture | stringclasses | 5 values |
| precision | stringclasses | 2 values |
| license | stringclasses | 7 values |
| params | float64 | 1.87 – 70.6 |
| likes | int64 | 0 – 3.06k |
| revision | stringclasses | 1 value |
| num_few_shot | int64 | 0 – 4 |
| add_special_tokens | stringclasses | 1 value |
| llm_jp_eval_version | stringclasses | 1 value |
| vllm_version | stringclasses | 1 value |
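As an illustrative sketch (not part of the dataset itself), rows with this schema can be treated as plain records and filtered or ranked by any column. The snippet below uses a small hypothetical subset, with values copied from the preview rows, to pick the best 4-shot entry by the `AVG` column; in practice the full table would be loaded, e.g. with the Hugging Face `datasets` library.

```python
# Hypothetical subset of leaderboard rows; the model names and AVG scores
# are copied from the data preview, every other column is omitted.
rows = [
    {"model": "llm-jp/llm-jp-3-13b-instruct", "num_few_shot": 4, "AVG": 0.5462},
    {"model": "Qwen/Qwen2.5-32B-Instruct", "num_few_shot": 4, "AVG": 0.6553},
    {"model": "meta-llama/Llama-3.1-70B-Instruct", "num_few_shot": 0, "AVG": 0.4997},
]

# Keep only 4-shot entries, then rank by the aggregate AVG score.
four_shot = [r for r in rows if r["num_few_shot"] == 4]
best = max(four_shot, key=lambda r: r["AVG"])
print(best["model"])  # Qwen/Qwen2.5-32B-Instruct
```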
Data preview (one row per model and few-shot setting; ⭕ = instruction-tuned, 🟢 = pretrained):

| model_type | model | AVG | CG | EL | FA | HE | MC | MR | MT | NLI | QA | RC | SUM | aio_char_f1 | alt-e-to-j_bert_score_ja_f1 | alt-e-to-j_bleu_ja | alt-e-to-j_comet_wmt22 | alt-j-to-e_bert_score_en_f1 | alt-j-to-e_bleu_en | alt-j-to-e_comet_wmt22 | chabsa_set_f1 | commonsensemoralja_exact_match | jamp_exact_match | janli_exact_match | jcommonsenseqa_exact_match | jemhopqa_char_f1 | jmmlu_exact_match | jnli_exact_match | jsem_exact_match | jsick_exact_match | jsquad_char_f1 | jsts_pearson | jsts_spearman | kuci_exact_match | mawps_exact_match | mbpp_code_exec | mbpp_pylint_check | mmlu_en_exact_match | niilc_char_f1 | wiki_coreference_set_f1 | wiki_dependency_set_f1 | wiki_ner_set_f1 | wiki_pas_set_f1 | wiki_reading_char_f1 | wikicorpus-e-to-j_bert_score_ja_f1 | wikicorpus-e-to-j_bleu_ja | wikicorpus-e-to-j_comet_wmt22 | wikicorpus-j-to-e_bert_score_en_f1 | wikicorpus-j-to-e_bleu_en | wikicorpus-j-to-e_comet_wmt22 | xlsum_ja_bert_score_ja_f1 | xlsum_ja_bleu_ja | xlsum_ja_rouge1 | xlsum_ja_rouge2 | xlsum_ja_rouge2_scaling | xlsum_ja_rougeLsum | architecture | precision | license | params | likes | revision | num_few_shot | add_special_tokens | llm_jp_eval_version | vllm_version |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ⭕ : instruction-tuned | llm-jp/llm-jp-3-1.8b-instruct | 0.3923 | 0.002 | 0.4046 | 0.1919 | 0.2842 | 0.3697 | 0.418 | 0.7999 | 0.4786 | 0.4957 | 0.834 | 0.0371 | 0.6003 | 0.8445 | 10.6032 | 0.8842 | 0.9337 | 12.773 | 0.8448 | 0.4046 | 0.5421 | 0.3822 | 0.4847 | 0.3101 | 0.4307 | 0.2895 | 0.5838 | 0.6976 | 0.2446 | 0.834 | 0.4386 | 0.5179 | 0.257 | 0.418 | 0.002 | 0.0141 | 0.2789 | 0.4561 | 0.0288 | 0.172 | 0.0177 | 0.0139 | 0.7272 | 0.7723 | 7.495 | 0.7693 | 0.8829 | 9.2901 | 0.7011 | 0.6449 | 1.1871 | 18.1463 | 3.7324 | 0.0371 | 15.2299 | LlamaForCausalLM | bfloat16 | apache-2.0 | 1.868 | 20 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | llm-jp/llm-jp-3-1.8b-instruct | 0.2147 | 0.002 | 0 | 0.0442 | 0.1461 | 0.2648 | 0 | 0.7888 | 0.2062 | 0.1956 | 0.6768 | 0.0371 | 0.2575 | 0.8361 | 10.3819 | 0.8796 | 0.931 | 12.1086 | 0.8421 | 0 | 0.5321 | 0.1724 | 0.0028 | 0 | 0.0767 | 0.0263 | 0.2424 | 0.0265 | 0.587 | 0.6768 | -0.0876 | -0.0888 | 0.2624 | 0 | 0.002 | 0.0141 | 0.2659 | 0.2526 | 0 | 0 | 0 | 0 | 0.2208 | 0.7485 | 6.3026 | 0.7504 | 0.873 | 7.9213 | 0.683 | 0.6449 | 1.1871 | 18.1463 | 3.7324 | 0.0371 | 15.2299 | LlamaForCausalLM | bfloat16 | apache-2.0 | 1.868 | 20 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | llm-jp/llm-jp-3-13b-instruct | 0.5462 | 0.0843 | 0.4843 | 0.2782 | 0.4739 | 0.8438 | 0.71 | 0.8393 | 0.669 | 0.6097 | 0.9014 | 0.1143 | 0.8415 | 0.8618 | 13.1859 | 0.9068 | 0.9476 | 14.866 | 0.8726 | 0.4843 | 0.8903 | 0.5575 | 0.6403 | 0.8928 | 0.3473 | 0.4651 | 0.7309 | 0.7481 | 0.6682 | 0.9014 | 0.7957 | 0.7949 | 0.7484 | 0.71 | 0.0843 | 0.2731 | 0.4827 | 0.6403 | 0.0396 | 0.4027 | 0.0354 | 0.047 | 0.8665 | 0.82 | 10.6349 | 0.8302 | 0.9031 | 10.9633 | 0.7478 | 0.7015 | 2.9771 | 29.3132 | 11.4455 | 0.1143 | 25.0207 | LlamaForCausalLM | bfloat16 | apache-2.0 | 13.708 | 16 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | llm-jp/llm-jp-3-13b-instruct | 0.412 | 0.0843 | 0.0104 | 0.1583 | 0.2727 | 0.7287 | 0.374 | 0.8316 | 0.6881 | 0.3822 | 0.8868 | 0.1143 | 0.6447 | 0.8561 | 12.9677 | 0.9014 | 0.9483 | 15.0672 | 0.8752 | 0.0104 | 0.7492 | 0.5431 | 0.6014 | 0.8704 | 0.0655 | 0.1519 | 0.7896 | 0.7424 | 0.7642 | 0.8868 | 0.8204 | 0.8177 | 0.5663 | 0.374 | 0.0843 | 0.2731 | 0.3935 | 0.4366 | 0 | 0.0025 | 0 | 0 | 0.7892 | 0.7918 | 8.7021 | 0.8163 | 0.8927 | 9.9921 | 0.7335 | 0.7015 | 2.9771 | 29.3132 | 11.4455 | 0.1143 | 25.0207 | LlamaForCausalLM | bfloat16 | apache-2.0 | 13.708 | 16 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | llm-jp/llm-jp-3-3.7b-instruct | 0.4597 | 0 | 0.3691 | 0.2398 | 0.3628 | 0.5839 | 0.58 | 0.822 | 0.551 | 0.5709 | 0.8488 | 0.1283 | 0.7141 | 0.8504 | 10.634 | 0.8959 | 0.9388 | 13.2399 | 0.8562 | 0.3691 | 0.7791 | 0.4282 | 0.5236 | 0.5407 | 0.4693 | 0.3555 | 0.6845 | 0.7374 | 0.3816 | 0.8488 | 0.7156 | 0.6453 | 0.4321 | 0.58 | 0 | 0 | 0.37 | 0.5293 | 0.0142 | 0.3409 | 0.0088 | 0.0575 | 0.7774 | 0.7961 | 8.5097 | 0.8052 | 0.8954 | 9.6317 | 0.7307 | 0.7119 | 2.9956 | 35.8746 | 12.8065 | 0.1283 | 29.3662 | LlamaForCausalLM | bfloat16 | apache-2.0 | 3.783 | 7 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | llm-jp/llm-jp-3-3.7b-instruct | 0.3016 | 0 | 0 | 0.115 | 0.1869 | 0.4517 | 0.01 | 0.8106 | 0.5867 | 0.2262 | 0.8022 | 0.1283 | 0.3361 | 0.8374 | 10.3631 | 0.8874 | 0.939 | 13.6521 | 0.8578 | 0 | 0.5321 | 0.3649 | 0.6028 | 0.4209 | 0.0591 | 0.0181 | 0.6002 | 0.7115 | 0.6539 | 0.8022 | 0.2802 | 0.3889 | 0.4022 | 0.01 | 0 | 0 | 0.3556 | 0.2835 | 0 | 0 | 0 | 0 | 0.5751 | 0.7703 | 7.1206 | 0.7906 | 0.8781 | 8.625 | 0.7064 | 0.7119 | 2.9956 | 35.8746 | 12.8065 | 0.1283 | 29.3662 | LlamaForCausalLM | bfloat16 | apache-2.0 | 3.783 | 7 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | Qwen/Qwen2.5-32B-Instruct | 0.6553 | 0.5281 | 0.5894 | 0.2737 | 0.7757 | 0.8966 | 0.944 | 0.8479 | 0.8106 | 0.541 | 0.9047 | 0.097 | 0.553 | 0.8644 | 13.2738 | 0.9081 | 0.9554 | 17.7737 | 0.8859 | 0.5894 | 0.8975 | 0.6724 | 0.8431 | 0.958 | 0.5672 | 0.7515 | 0.8973 | 0.7835 | 0.8569 | 0.9047 | 0.8895 | 0.877 | 0.8343 | 0.944 | 0.5281 | 0.755 | 0.8 | 0.5029 | 0.0543 | 0.3837 | 0 | 0.1104 | 0.8204 | 0.8291 | 10.9975 | 0.8389 | 0.9045 | 11.1213 | 0.7585 | 0.6926 | 2.7959 | 25.855 | 9.7054 | 0.097 | 22.5323 | Qwen2ForCausalLM | bfloat16 | apache-2.0 | 32.764 | 120 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | Qwen/Qwen2.5-32B-Instruct | 0.5443 | 0.5281 | 0.107 | 0.1453 | 0.568 | 0.8739 | 0.79 | 0.8386 | 0.7647 | 0.3873 | 0.8871 | 0.097 | 0.4392 | 0.8489 | 11.3776 | 0.9009 | 0.9511 | 15.766 | 0.8797 | 0.107 | 0.9046 | 0.6494 | 0.7944 | 0.9303 | 0.2681 | 0.5561 | 0.82 | 0.798 | 0.7615 | 0.8871 | 0.8951 | 0.8761 | 0.7869 | 0.79 | 0.5281 | 0.755 | 0.58 | 0.4547 | 0.0281 | 0.0071 | 0.0354 | 0.0058 | 0.6499 | 0.8004 | 8.7234 | 0.8268 | 0.8969 | 9.5439 | 0.7471 | 0.6926 | 2.7959 | 25.855 | 9.7054 | 0.097 | 22.5323 | Qwen2ForCausalLM | bfloat16 | apache-2.0 | 32.764 | 120 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | Qwen/Qwen2.5-7B-Instruct | 0.5304 | 0.0221 | 0.498 | 0.1822 | 0.6385 | 0.8416 | 0.796 | 0.7302 | 0.7371 | 0.4006 | 0.8899 | 0.0983 | 0.3771 | 0.8222 | 11.2802 | 0.826 | 0.9327 | 15.5139 | 0.8347 | 0.498 | 0.859 | 0.6034 | 0.7431 | 0.9151 | 0.4305 | 0.6103 | 0.8295 | 0.6989 | 0.8106 | 0.8899 | 0.8729 | 0.8468 | 0.7507 | 0.796 | 0.0221 | 0.012 | 0.6666 | 0.3941 | 0.0449 | 0.2922 | 0.0354 | 0.0773 | 0.4614 | 0.7288 | 8.1478 | 0.6153 | 0.8592 | 9.0236 | 0.6446 | 0.692 | 2.1112 | 29.1184 | 9.8419 | 0.0983 | 23.7063 | Qwen2ForCausalLM | bfloat16 | apache-2.0 | 7.616 | 271 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | Qwen/Qwen2.5-7B-Instruct | 0.2896 | 0.0221 | 0.2081 | 0.0822 | 0.4418 | 0.496 | 0.292 | 0.6415 | 0.2336 | 0.2228 | 0.4478 | 0.0983 | 0.2136 | 0.7191 | 7.042 | 0.7345 | 0.831 | 11.0762 | 0.5875 | 0.2081 | 0.0143 | 0.5029 | 0 | 0.8213 | 0.2105 | 0.4936 | 0.3246 | 0 | 0.3408 | 0.4478 | 0.875 | 0.8434 | 0.6524 | 0.292 | 0.0221 | 0.012 | 0.3899 | 0.2442 | 0.01 | 0.0234 | 0 | 0 | 0.3778 | 0.6795 | 6.3014 | 0.6785 | 0.8165 | 7.1661 | 0.5654 | 0.692 | 2.1112 | 29.1184 | 9.8419 | 0.0983 | 23.7063 | Qwen2ForCausalLM | bfloat16 | apache-2.0 | 7.616 | 271 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | weblab-GENIAC/Tanuki-8B-dpo-v1.0 | 0.3779 | 0.0783 | 0.398 | 0.1031 | 0.3067 | 0.4096 | 0.3 | 0.8275 | 0.5318 | 0.4266 | 0.6665 | 0.1083 | 0.6303 | 0.8351 | 10.0227 | 0.899 | 0.938 | 13.2353 | 0.8598 | 0.398 | 0.6964 | 0.4368 | 0.5431 | 0.2717 | 0.2847 | 0.3152 | 0.5965 | 0.661 | 0.4216 | 0.6665 | 0.3897 | 0.3894 | 0.2606 | 0.3 | 0.0783 | 0.2088 | 0.2983 | 0.3647 | 0 | 0.0212 | 0 | 0.0096 | 0.4849 | 0.7795 | 7.8083 | 0.8166 | 0.8903 | 8.5388 | 0.7346 | 0.7025 | 2.3051 | 34.8952 | 10.8447 | 0.1083 | 28.3762 | LlamaForCausalLM | bfloat16 | apache-2.0 | 7.512 | 29 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | weblab-GENIAC/Tanuki-8B-dpo-v1.0 | 0.2122 | 0.0783 | 0 | 0.0873 | 0 | 0.1774 | 0 | 0.829 | 0.414 | 0.1057 | 0.534 | 0.1083 | 0.0853 | 0.8373 | 10.8897 | 0.9 | 0.9427 | 12.9964 | 0.8684 | 0 | 0.5321 | 0.3563 | 0.3861 | 0 | 0.098 | 0 | 0.1553 | 0.5997 | 0.5726 | 0.534 | 0 | 0 | 0 | 0 | 0.0783 | 0.2088 | 0 | 0.1337 | 0 | 0 | 0 | 0 | 0.4366 | 0.7729 | 6.9734 | 0.8122 | 0.8905 | 7.4821 | 0.7355 | 0.7025 | 2.3051 | 34.8952 | 10.8447 | 0.1083 | 28.3762 | LlamaForCausalLM | bfloat16 | apache-2.0 | 7.512 | 29 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1 | 0.62 | 0.0201 | 0.5529 | 0.3371 | 0.7313 | 0.9057 | 0.932 | 0.8565 | 0.7605 | 0.7131 | 0.9252 | 0.0856 | 0.8606 | 0.876 | 14.5217 | 0.9157 | 0.9603 | 19.3015 | 0.8924 | 0.5529 | 0.9296 | 0.6523 | 0.8069 | 0.9589 | 0.619 | 0.7063 | 0.7477 | 0.7923 | 0.8033 | 0.9252 | 0.878 | 0.8477 | 0.8287 | 0.932 | 0.0201 | 0.0422 | 0.7562 | 0.6597 | 0.0906 | 0.4663 | 0.1327 | 0.0903 | 0.9053 | 0.8501 | 13.863 | 0.8529 | 0.9137 | 12.9227 | 0.7651 | 0.6804 | 3.3097 | 20.1389 | 8.5601 | 0.0856 | 18.0061 | LlamaForCausalLM | bfloat16 | llama3.1;gemma | 70.554 | 3 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1 | 0.507 | 0.0201 | 0.2715 | 0.1917 | 0.6465 | 0.8642 | 0.744 | 0.7167 | 0.7081 | 0.5798 | 0.749 | 0.0856 | 0.7149 | 0.8644 | 13.3248 | 0.9114 | 0.8297 | 17.2139 | 0.5661 | 0.2715 | 0.9216 | 0.6351 | 0.7458 | 0.9267 | 0.4623 | 0.6058 | 0.576 | 0.7753 | 0.8084 | 0.749 | 0.8408 | 0.7986 | 0.7443 | 0.744 | 0.0201 | 0.0422 | 0.6873 | 0.5622 | 0.0163 | 0.0057 | 0.0619 | 0.0075 | 0.867 | 0.8134 | 10.2308 | 0.8336 | 0.8206 | 10.8013 | 0.5556 | 0.6804 | 3.3097 | 20.1389 | 8.5601 | 0.0856 | 18.0061 | LlamaForCausalLM | bfloat16 | llama3.1;gemma | 70.554 | 3 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2 | 0.565 | 0.0321 | 0.5385 | 0.2951 | 0.5743 | 0.8428 | 0.72 | 0.8484 | 0.7158 | 0.6428 | 0.9167 | 0.0883 | 0.7629 | 0.8709 | 13.8312 | 0.9115 | 0.9537 | 16.8957 | 0.8828 | 0.5385 | 0.8823 | 0.5345 | 0.725 | 0.9276 | 0.559 | 0.5507 | 0.8073 | 0.7601 | 0.752 | 0.9167 | 0.7888 | 0.7604 | 0.7184 | 0.72 | 0.0321 | 0.1365 | 0.5978 | 0.6066 | 0.0352 | 0.395 | 0.0885 | 0.1022 | 0.8549 | 0.835 | 11.7829 | 0.8406 | 0.9092 | 11.5867 | 0.7587 | 0.6827 | 2.8084 | 21.4684 | 8.8428 | 0.0883 | 18.9717 | LlamaForCausalLM | bfloat16 | llama3.1;gemma | 8.03 | 6 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2 | 0.4292 | 0.0321 | 0.1047 | 0.1583 | 0.4096 | 0.799 | 0.294 | 0.8335 | 0.6602 | 0.4712 | 0.8707 | 0.0883 | 0.6063 | 0.8593 | 12.1636 | 0.905 | 0.9508 | 15.8558 | 0.8781 | 0.1047 | 0.8682 | 0.5172 | 0.6528 | 0.8838 | 0.4527 | 0.3663 | 0.6167 | 0.7601 | 0.7542 | 0.8707 | 0.8008 | 0.7914 | 0.6449 | 0.294 | 0.0321 | 0.1365 | 0.4529 | 0.3547 | 0.0026 | 0.0004 | 0.0088 | 0 | 0.7798 | 0.8002 | 8.6761 | 0.818 | 0.8933 | 10.2604 | 0.7328 | 0.6827 | 2.8084 | 21.4684 | 8.8428 | 0.0883 | 18.9717 | LlamaForCausalLM | bfloat16 | llama3.1;gemma | 8.03 | 6 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | rinna/gemma-2-baku-2b-it | 0.4477 | 0 | 0.4434 | 0.1815 | 0.4444 | 0.7059 | 0.536 | 0.8302 | 0.4111 | 0.3726 | 0.8741 | 0.1256 | 0.4408 | 0.8527 | 10.6366 | 0.8981 | 0.9441 | 13.6152 | 0.8697 | 0.4434 | 0.7004 | 0.4023 | 0.5639 | 0.8534 | 0.2712 | 0.4084 | 0.4129 | 0.2986 | 0.3779 | 0.8741 | 0.6961 | 0.7189 | 0.5639 | 0.536 | 0 | 0 | 0.4805 | 0.4057 | 0.0348 | 0.0877 | 0.0708 | 0.0355 | 0.6786 | 0.7845 | 7.0376 | 0.8119 | 0.8944 | 8.8013 | 0.741 | 0.7177 | 2.2358 | 37.6981 | 12.5686 | 0.1256 | 30.0896 | Gemma2ForCausalLM | float16 | gemma | 2.614 | 14 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | rinna/gemma-2-baku-2b-it | 0.2449 | 0 | 0.0093 | 0.1214 | 0.0013 | 0.2852 | 0.008 | 0.823 | 0.3259 | 0.2184 | 0.7761 | 0.1256 | 0.3312 | 0.8416 | 9.5574 | 0.8942 | 0.9392 | 11.6762 | 0.8618 | 0.0093 | 0.8021 | 0.3649 | 0.5014 | 0 | 0.0083 | 0.0025 | 0.2991 | 0.1629 | 0.3012 | 0.7761 | 0.2881 | 0.2782 | 0.0533 | 0.008 | 0 | 0 | 0 | 0.3157 | 0 | 0 | 0 | 0 | 0.6068 | 0.7784 | 6.6772 | 0.7992 | 0.8924 | 7.8981 | 0.7369 | 0.7177 | 2.2358 | 37.6981 | 12.5686 | 0.1256 | 30.0896 | Gemma2ForCausalLM | float16 | gemma | 2.614 | 14 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | cyberagent/calm3-22b-chat | 0.5612 | 0 | 0.5404 | 0.247 | 0.5592 | 0.8527 | 0.684 | 0.8464 | 0.7519 | 0.6677 | 0.9118 | 0.1121 | 0.8433 | 0.8705 | 13.6161 | 0.9117 | 0.9553 | 16.6257 | 0.8857 | 0.5404 | 0.8677 | 0.5805 | 0.7875 | 0.933 | 0.5109 | 0.5408 | 0.8583 | 0.7715 | 0.7617 | 0.9118 | 0.8637 | 0.86 | 0.7575 | 0.684 | 0 | 0 | 0.5776 | 0.6489 | 0.001 | 0.2251 | 0.1239 | 0.0316 | 0.8532 | 0.8177 | 9.4163 | 0.8375 | 0.9039 | 10.3951 | 0.7506 | 0.7019 | 2.5206 | 31.9179 | 11.2189 | 0.1121 | 26.8509 | LlamaForCausalLM | bfloat16 | apache-2.0 | 22.543 | 67 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | cyberagent/calm3-22b-chat | 0.4792 | 0 | 0.2776 | 0.1668 | 0.4546 | 0.7844 | 0.578 | 0.8388 | 0.6843 | 0.4854 | 0.8887 | 0.1121 | 0.6197 | 0.8517 | 12.6994 | 0.907 | 0.9535 | 16.1239 | 0.8829 | 0.2776 | 0.8297 | 0.5517 | 0.6986 | 0.8954 | 0.3577 | 0.4818 | 0.7683 | 0.7689 | 0.6341 | 0.8887 | 0.8853 | 0.8589 | 0.6281 | 0.578 | 0 | 0 | 0.4275 | 0.4788 | 0.0019 | 0.0016 | 0.0022 | 0 | 0.8285 | 0.7865 | 7.7148 | 0.8174 | 0.9002 | 9.9171 | 0.748 | 0.7019 | 2.5206 | 31.9179 | 11.2189 | 0.1121 | 26.8509 | LlamaForCausalLM | bfloat16 | apache-2.0 | 22.543 | 67 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | elyza/Llama-3-ELYZA-JP-8B | 0.556 | 0.2972 | 0.5105 | 0.27 | 0.4999 | 0.8048 | 0.712 | 0.8285 | 0.6305 | 0.5457 | 0.9112 | 0.1058 | 0.6179 | 0.8476 | 11.1585 | 0.8843 | 0.9453 | 14.1463 | 0.8677 | 0.5105 | 0.8166 | 0.4856 | 0.6069 | 0.8981 | 0.5168 | 0.4761 | 0.7555 | 0.6654 | 0.6389 | 0.9112 | 0.8215 | 0.7794 | 0.6995 | 0.712 | 0.2972 | 0.7269 | 0.5236 | 0.5023 | 0.0209 | 0.3524 | 0.0885 | 0.0848 | 0.8035 | 0.8125 | 9.6534 | 0.8214 | 0.8976 | 10.308 | 0.7407 | 0.6986 | 2.4239 | 28.7904 | 10.5753 | 0.1058 | 24.7182 | LlamaForCausalLM | bfloat16 | llama3 | 8.03 | 76 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | elyza/Llama-3-ELYZA-JP-8B | 0.4106 | 0.2972 | 0.0401 | 0.1057 | 0.3654 | 0.5815 | 0.446 | 0.8151 | 0.549 | 0.3639 | 0.8475 | 0.1058 | 0.4164 | 0.8396 | 9.0926 | 0.886 | 0.94 | 12.3152 | 0.8587 | 0.0401 | 0.7778 | 0.3678 | 0.4972 | 0.563 | 0.3615 | 0.3211 | 0.5415 | 0.7096 | 0.629 | 0.8475 | 0.4597 | 0.4683 | 0.4036 | 0.446 | 0.2972 | 0.7269 | 0.4098 | 0.3137 | 0 | 0.0021 | 0 | 0.0006 | 0.5257 | 0.7819 | 7.1661 | 0.7972 | 0.89 | 9.2311 | 0.7182 | 0.6986 | 2.4239 | 28.7904 | 10.5753 | 0.1058 | 24.7182 | LlamaForCausalLM | bfloat16 | llama3 | 8.03 | 76 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | Rakuten/RakutenAI-7B-chat | 0.4627 | 0.0863 | 0.3989 | 0.1521 | 0.4763 | 0.8122 | 0.41 | 0.8169 | 0.5524 | 0.418 | 0.8645 | 0.1017 | 0.5386 | 0.8624 | 13.0202 | 0.8952 | 0.9509 | 16.8546 | 0.8778 | 0.3989 | 0.7708 | 0.4713 | 0.6458 | 0.9053 | 0.26 | 0.4377 | 0.7449 | 0.4444 | 0.4554 | 0.8645 | 0.7956 | 0.7642 | 0.7605 | 0.41 | 0.0863 | 0.3072 | 0.515 | 0.4555 | 0.0108 | 0.1118 | 0.0442 | 0.0445 | 0.5493 | 0.7868 | 9.6886 | 0.7865 | 0.8938 | 9.7413 | 0.708 | 0.6961 | 2.2839 | 28.4861 | 10.1816 | 0.1017 | 23.911 | MistralForCausalLM | bfloat16 | apache-2.0 | 7.373 | 59 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | Rakuten/RakutenAI-7B-chat | 0.3167 | 0.0863 | 0.0067 | 0.0985 | 0.0041 | 0.7895 | 0.02 | 0.8058 | 0.6148 | 0.2592 | 0.6967 | 0.1017 | 0.3986 | 0.8555 | 13.0705 | 0.8908 | 0.9425 | 14.4667 | 0.8668 | 0.0067 | 0.7455 | 0.477 | 0.6583 | 0.9133 | 0.0218 | 0.0003 | 0.8509 | 0.5941 | 0.4936 | 0.6967 | 0.7406 | 0.7445 | 0.7097 | 0.02 | 0.0863 | 0.3072 | 0.0078 | 0.3572 | 0.0033 | 0 | 0 | 0 | 0.4891 | 0.7883 | 8.5147 | 0.7951 | 0.8782 | 9.0212 | 0.6703 | 0.6961 | 2.2839 | 28.4861 | 10.1816 | 0.1017 | 23.911 | MistralForCausalLM | bfloat16 | apache-2.0 | 7.373 | 59 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | cyberagent/calm2-7b-chat | 0.3474 | 0 | 0.3317 | 0.1411 | 0.254 | 0.3255 | 0.108 | 0.7897 | 0.5213 | 0.5648 | 0.7719 | 0.0136 | 0.7041 | 0.8445 | 11.3228 | 0.8869 | 0.9368 | 13.4747 | 0.8479 | 0.3317 | 0.4679 | 0.4052 | 0.5028 | 0.2395 | 0.4973 | 0.2426 | 0.6253 | 0.4646 | 0.6085 | 0.7719 | -0.0728 | -0.0108 | 0.2691 | 0.108 | 0 | 0 | 0.2654 | 0.4929 | 0.0025 | 0.0851 | 0.0177 | 0.0324 | 0.5676 | 0.7587 | 6.7623 | 0.7498 | 0.8779 | 8.7567 | 0.6743 | 0.5871 | 0.4037 | 8.8424 | 1.354 | 0.0136 | 7.5443 | LlamaForCausalLM | bfloat16 | apache-2.0 | 7.009 | 76 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | cyberagent/calm2-7b-chat | 0.1367 | 0 | 0 | 0.0496 | 0.0416 | 0.1877 | 0.004 | 0.4887 | 0.0005 | 0.2584 | 0.4593 | 0.0136 | 0.4671 | 0.6331 | 2.2109 | 0.4951 | 0.8192 | 4.0251 | 0.5615 | 0 | 0.4679 | 0 | 0 | 0.0831 | 0.0573 | 0.0387 | 0 | 0.0025 | 0 | 0.4593 | 0.0689 | 0.0684 | 0.0121 | 0.004 | 0 | 0 | 0.0446 | 0.2508 | 0 | 0 | 0 | 0 | 0.2481 | 0.5869 | 1.81 | 0.4237 | 0.7916 | 2.6109 | 0.4745 | 0.5871 | 0.4037 | 8.8424 | 1.354 | 0.0136 | 7.5443 | LlamaForCausalLM | bfloat16 | apache-2.0 | 7.009 | 76 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| 🟢 : pretrained | tokyotech-llm/Llama-3.1-Swallow-8B-v0.2 | 0.5414 | 0.012 | 0.4659 | 0.2799 | 0.5427 | 0.8066 | 0.732 | 0.8445 | 0.6617 | 0.6405 | 0.8886 | 0.0813 | 0.8015 | 0.8747 | 13.8553 | 0.9099 | 0.9546 | 16.517 | 0.883 | 0.4659 | 0.8555 | 0.5259 | 0.7125 | 0.9071 | 0.4925 | 0.5244 | 0.7773 | 0.7595 | 0.5334 | 0.8886 | 0.7765 | 0.7511 | 0.6572 | 0.732 | 0.012 | 0.0562 | 0.561 | 0.6276 | 0.007 | 0.3933 | 0.0531 | 0.0803 | 0.8658 | 0.8368 | 13.1609 | 0.8317 | 0.9093 | 12.1377 | 0.7534 | 0.6801 | 1.9566 | 20.1115 | 8.1373 | 0.0813 | 17.7927 | LlamaForCausalLM | bfloat16 | llama3.1 | 8.03 | 2 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| 🟢 : pretrained | tokyotech-llm/Llama-3.1-Swallow-8B-v0.2 | 0.3813 | 0.012 | 0.024 | 0.1587 | 0.3409 | 0.623 | 0.382 | 0.8264 | 0.4968 | 0.4812 | 0.7682 | 0.0813 | 0.6752 | 0.8586 | 12.1659 | 0.8998 | 0.9498 | 14.0954 | 0.8754 | 0.024 | 0.8034 | 0.4684 | 0.5 | 0.6604 | 0.2578 | 0.2906 | 0.3878 | 0.7045 | 0.423 | 0.7682 | 0.0806 | 0.0855 | 0.4052 | 0.382 | 0.012 | 0.0562 | 0.3911 | 0.5105 | 0.002 | 0.0016 | 0 | 0 | 0.79 | 0.7912 | 8.2683 | 0.8 | 0.8949 | 9.4323 | 0.7302 | 0.6801 | 1.9566 | 20.1115 | 8.1373 | 0.0813 | 17.7927 | LlamaForCausalLM | bfloat16 | llama3.1 | 8.03 | 2 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| 🟢 : pretrained | llm-jp/llm-jp-3-13b | 0.4861 | 0 | 0.4856 | 0.256 | 0.4043 | 0.6456 | 0.598 | 0.8373 | 0.5444 | 0.6721 | 0.8821 | 0.0218 | 0.8577 | 0.8682 | 13.5301 | 0.9064 | 0.95 | 16.6943 | 0.8747 | 0.4856 | 0.8507 | 0.3822 | 0.5028 | 0.6506 | 0.5161 | 0.3979 | 0.6011 | 0.7311 | 0.505 | 0.8821 | 0.2931 | 0.2998 | 0.4356 | 0.598 | 0 | 0.9578 | 0.4108 | 0.6424 | 0.0068 | 0.3231 | 0.0442 | 0.0366 | 0.8693 | 0.828 | 11.1755 | 0.8298 | 0.9036 | 11.0157 | 0.7381 | 0.6126 | 0.6253 | 13.6586 | 2.1977 | 0.0218 | 11.752 | LlamaForCausalLM | bfloat16 | apache-2.0 | 13.708 | 0 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| 🟢 : pretrained | llm-jp/llm-jp-3-13b | 0.1505 | 0 | 0.0021 | 0.1179 | 0.0097 | 0.0003 | 0.006 | 0.6423 | 0.0166 | 0.308 | 0.5313 | 0.0218 | 0.4529 | 0.758 | 7.0049 | 0.7205 | 0.8881 | 10.7045 | 0.7285 | 0.0021 | 0.0008 | 0 | 0.0806 | 0 | 0.1155 | 0.0175 | 0.0008 | 0 | 0.0016 | 0.5313 | 0.2138 | 0.2025 | 0 | 0.006 | 0 | 0.9578 | 0.0019 | 0.3557 | 0 | 0 | 0 | 0 | 0.5897 | 0.6568 | 3.6264 | 0.5514 | 0.8298 | 5.779 | 0.5687 | 0.6126 | 0.6253 | 13.6586 | 2.1977 | 0.0218 | 11.752 | LlamaForCausalLM | bfloat16 | apache-2.0 | 13.708 | 0 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | meta-llama/Llama-3.1-8B-Instruct | 0.5319 | 0.3213 | 0.4598 | 0.2316 | 0.5808 | 0.7657 | 0.736 | 0.7522 | 0.6298 | 0.4205 | 0.8881 | 0.0648 | 0.419 | 0.8496 | 11.4306 | 0.8909 | 0.9171 | 15.0559 | 0.7837 | 0.4598 | 0.8011 | 0.4971 | 0.6722 | 0.8838 | 0.4502 | 0.5171 | 0.7173 | 0.6553 | 0.6069 | 0.8881 | 0.7479 | 0.7498 | 0.6121 | 0.736 | 0.3213 | 0.6205 | 0.6446 | 0.3923 | 0.0187 | 0.3105 | 0.1239 | 0.0315 | 0.6735 | 0.7956 | 11.0723 | 0.7762 | 0.8332 | 9.7048 | 0.5581 | 0.663 | 3.1983 | 15.7749 | 6.481 | 0.0648 | 14.169 | LlamaForCausalLM | bfloat16 | llama3.1 | 8.03 | 3,059 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | meta-llama/Llama-3.1-8B-Instruct | 0.2303 | 0.3213 | 0.0154 | 0.0483 | 0.0038 | 0.3839 | 0.076 | 0.4675 | 0.2672 | 0.2386 | 0.6467 | 0.0648 | 0.2288 | 0.6335 | 7.9862 | 0.5261 | 0.7807 | 12.5089 | 0.4486 | 0.0154 | 0.0008 | 0.4425 | 0.3583 | 0.7131 | 0.2612 | 0.0028 | 0.2219 | 0.0063 | 0.3071 | 0.6467 | 0.6254 | 0.5865 | 0.4377 | 0.076 | 0.3213 | 0.6205 | 0.0047 | 0.2259 | 0 | 0.008 | 0 | 0 | 0.2334 | 0.6074 | 7.3187 | 0.4883 | 0.7688 | 7.4026 | 0.4071 | 0.663 | 3.1983 | 15.7749 | 6.481 | 0.0648 | 14.169 | LlamaForCausalLM | bfloat16 | llama3.1 | 8.03 | 3,059 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | CohereForAI/aya-23-8B | 0.484 | 0 | 0.3829 | 0.1502 | 0.4958 | 0.7439 | 0.578 | 0.8245 | 0.6414 | 0.4883 | 0.8982 | 0.1213 | 0.5051 | 0.8583 | 12.3567 | 0.9043 | 0.9507 | 16.5267 | 0.8763 | 0.3829 | 0.748 | 0.5345 | 0.6611 | 0.8731 | 0.4958 | 0.4671 | 0.6187 | 0.7153 | 0.6773 | 0.8982 | 0.7539 | 0.7297 | 0.6105 | 0.578 | 0 | 0 | 0.5244 | 0.4639 | 0.0198 | 0.1152 | 0.0354 | 0.0154 | 0.5652 | 0.7963 | 8.9616 | 0.79 | 0.8934 | 9.7535 | 0.7274 | 0.7072 | 2.6432 | 30.8244 | 12.1131 | 0.1213 | 26.0328 | CohereForCausalLM | float16 | cc-by-nc-4.0 | 8.028 | 392 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | CohereForAI/aya-23-8B | 0.31 | 0 | 0.0009 | 0.0738 | 0.0287 | 0.5859 | 0.176 | 0.8075 | 0.4615 | 0.3263 | 0.8285 | 0.1213 | 0.4499 | 0.8423 | 10.334 | 0.893 | 0.9461 | 14.353 | 0.8684 | 0.0009 | 0.8041 | 0.3534 | 0.5222 | 0.5371 | 0.1938 | 0.05 | 0.2091 | 0.6742 | 0.5486 | 0.8285 | 0.504 | 0.4839 | 0.4165 | 0.176 | 0 | 0 | 0.0075 | 0.3353 | 0.0054 | 0 | 0 | 0 | 0.3636 | 0.7675 | 7.5791 | 0.7666 | 0.8807 | 8.2645 | 0.7021 | 0.7072 | 2.6432 | 30.8244 | 12.1131 | 0.1213 | 26.0328 | CohereForCausalLM | float16 | cc-by-nc-4.0 | 8.028 | 392 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | CohereForAI/aya-23-35B | 0.5742 | 0.1888 | 0.5045 | 0.2595 | 0.6038 | 0.8407 | 0.692 | 0.8443 | 0.7218 | 0.5991 | 0.9175 | 0.1443 | 0.7088 | 0.8668 | 13.519 | 0.9097 | 0.9564 | 17.9446 | 0.8862 | 0.5045 | 0.8236 | 0.569 | 0.7833 | 0.9446 | 0.5383 | 0.5688 | 0.7091 | 0.7538 | 0.7936 | 0.9175 | 0.8689 | 0.8381 | 0.754 | 0.692 | 0.1888 | 0.3936 | 0.6389 | 0.5501 | 0.022 | 0.3005 | 0.115 | 0.043 | 0.8169 | 0.8337 | 11.9736 | 0.8333 | 0.9054 | 11.0157 | 0.7482 | 0.7212 | 4.1534 | 34.0784 | 14.4071 | 0.1443 | 29.011 | CohereForCausalLM | float16 | cc-by-nc-4.0 | 34.981 | 264 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | CohereForAI/aya-23-35B | 0.4229 | 0.1888 | 0.0323 | 0.1458 | 0.3212 | 0.7576 | 0.346 | 0.8366 | 0.6044 | 0.5036 | 0.7708 | 0.1443 | 0.6728 | 0.8567 | 12.0892 | 0.9046 | 0.9545 | 16.4732 | 0.8841 | 0.0323 | 0.8582 | 0.4971 | 0.5847 | 0.8418 | 0.3595 | 0.1573 | 0.4626 | 0.7128 | 0.7646 | 0.7708 | 0.7269 | 0.7625 | 0.5728 | 0.346 | 0.1888 | 0.3936 | 0.4852 | 0.4784 | 0 | 0.0017 | 0 | 0 | 0.7275 | 0.8127 | 9.6579 | 0.8224 | 0.8987 | 10.0728 | 0.7353 | 0.7212 | 4.1534 | 34.0784 | 14.4071 | 0.1443 | 29.011 | CohereForCausalLM | float16 | cc-by-nc-4.0 | 34.981 | 264 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | CohereForAI/aya-expanse-8b | 0.5163 | 0 | 0.5127 | 0.2133 | 0.5521 | 0.8281 | 0.64 | 0.8427 | 0.5808 | 0.5008 | 0.9005 | 0.1088 | 0.5245 | 0.8658 | 13.3065 | 0.9096 | 0.9526 | 16.1479 | 0.8817 | 0.5127 | 0.864 | 0.5086 | 0.7 | 0.9223 | 0.4943 | 0.5137 | 0.4906 | 0.7393 | 0.4658 | 0.9005 | 0.8382 | 0.8038 | 0.6981 | 0.64 | 0 | 0 | 0.5905 | 0.4835 | 0.0103 | 0.2809 | 0.0708 | 0.032 | 0.6724 | 0.8152 | 9.3487 | 0.8304 | 0.8988 | 9.9183 | 0.7488 | 0.6887 | 2.3612 | 30.7157 | 10.8872 | 0.1088 | 24.9318 | CohereForCausalLM | float16 | cc-by-nc-4.0 | 8.028 | 276 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | CohereForAI/aya-expanse-8b | 0.371 | 0 | 0.0296 | 0.133 | 0.227 | 0.6603 | 0.398 | 0.8352 | 0.6017 | 0.3012 | 0.7864 | 0.1088 | 0.331 | 0.8544 | 12.701 | 0.9078 | 0.953 | 16.887 | 0.8831 | 0.0296 | 0.8464 | 0.4626 | 0.6333 | 0.6962 | 0.3262 | 0.0387 | 0.5066 | 0.7513 | 0.6548 | 0.7864 | 0.8373 | 0.8031 | 0.4384 | 0.398 | 0 | 0 | 0.4153 | 0.2465 | 0.0103 | 0 | 0 | 0.0017 | 0.6531 | 0.7814 | 7.1072 | 0.8112 | 0.8903 | 9.1866 | 0.7388 | 0.6887 | 2.3612 | 30.7157 | 10.8872 | 0.1088 | 24.9318 | CohereForCausalLM | float16 | cc-by-nc-4.0 | 8.028 | 276 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | CohereForAI/aya-expanse-32b | 0.5082 | 0.1426 | 0.535 | 0.2935 | 0.4225 | 0.8468 | 0.206 | 0.8553 | 0.5893 | 0.6495 | 0.9112 | 0.1388 | 0.7489 | 0.8755 | 14.7691 | 0.9167 | 0.9593 | 19.4268 | 0.8912 | 0.535 | 0.774 | 0.5891 | 0.1556 | 0.9625 | 0.5815 | 0.1367 | 0.8311 | 0.7822 | 0.5884 | 0.9112 | 0.8863 | 0.8648 | 0.8038 | 0.206 | 0.1426 | 0.255 | 0.7082 | 0.6179 | 0.0228 | 0.4033 | 0.0855 | 0.0853 | 0.8705 | 0.8462 | 13.016 | 0.8511 | 0.9099 | 11.557 | 0.7621 | 0.7167 | 3.6554 | 35.73 | 13.8915 | 0.1388 | 30.2516 | CohereForCausalLM | float16 | cc-by-nc-4.0 | 32.296 | 168 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | CohereForAI/aya-expanse-32b | 0.4795 | 0.1426 | 0.2165 | 0.1746 | 0.5658 | 0.84 | 0.514 | 0.8516 | 0.3936 | 0.5438 | 0.893 | 0.1388 | 0.6921 | 0.8627 | 13.3137 | 0.9142 | 0.9581 | 17.7546 | 0.8913 | 0.2165 | 0.883 | 0.5747 | 0.0306 | 0.9205 | 0.4042 | 0.5916 | 0.5645 | 0.3371 | 0.4613 | 0.893 | 0.8629 | 0.8428 | 0.7166 | 0.514 | 0.1426 | 0.255 | 0.5399 | 0.535 | 0.0115 | 0 | 0.0177 | 0.0043 | 0.8395 | 0.8184 | 9.7295 | 0.8422 | 0.9034 | 10.1636 | 0.7585 | 0.7167 | 3.6554 | 35.73 | 13.8915 | 0.1388 | 30.2516 | CohereForCausalLM | float16 | cc-by-nc-4.0 | 32.296 | 168 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | google/gemma-2-2b-it | 0.4051 | 0 | 0.3826 | 0.094 | 0.4379 | 0.5978 | 0.492 | 0.7515 | 0.5586 | 0.2522 | 0.8379 | 0.0513 | 0.2125 | 0.8197 | 8.5773 | 0.8372 | 0.9333 | 13.6777 | 0.8375 | 0.3826 | 0.5754 | 0.4253 | 0.5208 | 0.7819 | 0.3277 | 0.3838 | 0.5711 | 0.7014 | 0.5742 | 0.8379 | 0.5681 | 0.5442 | 0.4359 | 0.492 | 0 | 0 | 0.4921 | 0.2165 | 0.0123 | 0.1042 | 0.0973 | 0.0127 | 0.2434 | 0.7388 | 6.3148 | 0.6881 | 0.8677 | 7.5156 | 0.6434 | 0.6573 | 1.0802 | 16.8494 | 5.1204 | 0.0513 | 13.5661 | Gemma2ForCausalLM | bfloat16 | gemma | 2.614 | 678 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | google/gemma-2-2b-it | 0.1798 | 0 | 0 | 0.0487 | 0.0075 | 0.2901 | 0.044 | 0.576 | 0.2194 | 0.1283 | 0.6129 | 0.0513 | 0.0651 | 0.7304 | 5.5835 | 0.6514 | 0.792 | 3.1738 | 0.6637 | 0 | 0.4123 | 0.3276 | 0.0056 | 0.2055 | 0.2203 | 0.0031 | 0.145 | 0.3851 | 0.2336 | 0.6129 | 0.25 | 0.2473 | 0.2524 | 0.044 | 0 | 0 | 0.0118 | 0.0994 | 0 | 0 | 0 | 0 | 0.2434 | 0.6395 | 3.479 | 0.5003 | 0.7665 | 3.0008 | 0.4887 | 0.6573 | 1.0802 | 16.8494 | 5.1204 | 0.0513 | 13.5661 | Gemma2ForCausalLM | bfloat16 | gemma | 2.614 | 678 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | google/gemma-2-9b-it | 0.5206 | 0 | 0.45 | 0.2212 | 0.6149 | 0.801 | 0.75 | 0.8332 | 0.6079 | 0.4357 | 0.8883 | 0.1244 | 0.4964 | 0.8542 | 11.4808 | 0.8905 | 0.9446 | 15.7498 | 0.8611 | 0.45 | 0.8249 | 0.5 | 0.6819 | 0.8695 | 0.4161 | 0.5586 | 0.5592 | 0.6755 | 0.6231 | 0.8883 | 0.8474 | 0.815 | 0.7085 | 0.75 | 0 | 0 | 0.6713 | 0.3946 | 0.0114 | 0.294 | 0.0973 | 0.0792 | 0.624 | 0.8201 | 9.9977 | 0.8297 | 0.9044 | 10.9491 | 0.7513 | 0.7114 | 2.5273 | 36.1583 | 12.4421 | 0.1244 | 29.7442 | Gemma2ForCausalLM | bfloat16 | gemma | 9.242 | 551 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | google/gemma-2-9b-it | 0.3631 | 0 | 0 | 0.053 | 0.3421 | 0.6453 | 0.512 | 0.7368 | 0.6036 | 0.2294 | 0.7473 | 0.1244 | 0.196 | 0.8117 | 9.1697 | 0.8305 | 0.929 | 12.8368 | 0.8198 | 0 | 0.7219 | 0.5115 | 0.5764 | 0.7605 | 0.2548 | 0.4177 | 0.493 | 0.7197 | 0.7175 | 0.7473 | 0.7495 | 0.7103 | 0.4536 | 0.512 | 0 | 0 | 0.2665 | 0.2373 | 0.005 | 0.0021 | 0 | 0 | 0.2578 | 0.7155 | 6.0728 | 0.6682 | 0.8632 | 8.6791 | 0.6287 | 0.7114 | 2.5273 | 36.1583 | 12.4421 | 0.1244 | 29.7442 | Gemma2ForCausalLM | bfloat16 | gemma | 9.242 | 551 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | google/gemma-2-27b-it | 0.5809 | 0 | 0.4993 | 0.2738 | 0.676 | 0.8434 | 0.9 | 0.8539 | 0.7242 | 0.5932 | 0.8934 | 0.1322 | 0.6927 | 0.8709 | 13.2355 | 0.9126 | 0.9551 | 18.5224 | 0.8844 | 0.4993 | 0.8707 | 0.6552 | 0.8014 | 0.9267 | 0.5262 | 0.6278 | 0.6935 | 0.7727 | 0.6982 | 0.8934 | 0.9006 | 0.8776 | 0.7329 | 0.9 | 0 | 0 | 0.7243 | 0.5607 | 0.0127 | 0.3508 | 0.1062 | 0.0692 | 0.8303 | 0.8457 | 12.4513 | 0.8504 | 0.9127 | 12.1106 | 0.7681 | 0.7162 | 3.0568 | 35.8752 | 13.2254 | 0.1322 | 29.9498 | Gemma2ForCausalLM | bfloat16 | gemma | 27.227 | 443 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | google/gemma-2-27b-it | 0.4948 | 0 | 0.4455 | 0.1595 | 0.6023 | 0.721 | 0.798 | 0.8253 | 0.6287 | 0.449 | 0.6812 | 0.1322 | 0.5564 | 0.8321 | 11.1934 | 0.8824 | 0.9501 | 15.8838 | 0.8764 | 0.4455 | 0.7823 | 0.5259 | 0.6722 | 0.8365 | 0.3194 | 0.5479 | 0.521 | 0.7519 | 0.6726 | 0.6812 | 0.8502 | 0.8294 | 0.5442 | 0.798 | 0 | 0 | 0.6567 | 0.4713 | 0.0075 | 0.0012 | 0.0177 | 0 | 0.7712 | 0.7655 | 8.7339 | 0.7965 | 0.899 | 10.1975 | 0.7459 | 0.7162 | 3.0568 | 35.8752 | 13.2254 | 0.1322 | 29.9498 | Gemma2ForCausalLM | bfloat16 | gemma | 27.227 | 443 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | meta-llama/Llama-3.1-70B-Instruct | 0.6638 | 0.6004 | 0.5611 | 0.2644 | 0.7678 | 0.8854 | 0.93 | 0.8467 | 0.7821 | 0.6456 | 0.9215 | 0.097 | 0.7264 | 0.8675 | 13.7097 | 0.9095 | 0.9576 | 18.4691 | 0.8881 | 0.5611 | 0.9031 | 0.658 | 0.8556 | 0.95 | 0.6331 | 0.7258 | 0.7823 | 0.7992 | 0.8155 | 0.9215 | 0.8729 | 0.8407 | 0.8032 | 0.93 | 0.6004 | 0.9719 | 0.8099 | 0.5773 | 0.0808 | 0.2779 | 0.0442 | 0.0576 | 0.8616 | 0.8444 | 16.6273 | 0.8355 | 0.9073 | 12.3916 | 0.7537 | 0.6883 | 3.4286 | 23.6076 | 9.7127 | 0.097 | 20.9281 | LlamaForCausalLM | bfloat16 | llama3.1 | 70.554 | 685 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | meta-llama/Llama-3.1-70B-Instruct | 0.4997 | 0.6004 | 0.2763 | 0.153 | 0.6753 | 0.8254 | 0.09 | 0.8243 | 0.7071 | 0.3729 | 0.8744 | 0.097 | 0.4822 | 0.8627 | 12.6354 | 0.9047 | 0.8888 | 11.0722 | 0.8239 | 0.2763 | 0.893 | 0.6063 | 0.7278 | 0.8579 | 0.2659 | 0.6543 | 0.5793 | 0.7841 | 0.8382 | 0.8744 | 0.8862 | 0.8528 | 0.7254 | 0.09 | 0.6004 | 0.9719 | 0.6963 | 0.3706 | 0.002 | 0.0257 | 0.0265 | 0.0008 | 0.7099 | 0.8326 | 12.7335 | 0.8355 | 0.8921 | 10.8621 | 0.7332 | 0.6883 | 3.4286 | 23.6076 | 9.7127 | 0.097 | 20.9281 | LlamaForCausalLM | bfloat16 | llama3.1 | 70.554 | 685 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | nvidia/Llama-3.1-Nemotron-70B-Instruct-HF | 0.6624 | 0.5582 | 0.5734 | 0.2715 | 0.7673 | 0.8851 | 0.928 | 0.8495 | 0.7889 | 0.6453 | 0.9202 | 0.0984 | 0.727 | 0.8668 | 13.9208 | 0.9099 | 0.9577 | 18.3148 | 0.8882 | 0.5734 | 0.9051 | 0.6609 | 0.8694 | 0.9464 | 0.6205 | 0.7224 | 0.8028 | 0.7891 | 0.8224 | 0.9202 | 0.881 | 0.8533 | 0.8037 | 0.928 | 0.5582 | 0.8896 | 0.8123 | 0.5883 | 0.0984 | 0.3077 | 0.0265 | 0.0668 | 0.8582 | 0.8485 | 16.7533 | 0.8436 | 0.9059 | 12.0753 | 0.7565 | 0.6918 | 3.4088 | 23.8265 | 9.8542 | 0.0984 | 21.032 | LlamaForCausalLM | bfloat16 | llama3.1 | 70.554 | 1,680 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | nvidia/Llama-3.1-Nemotron-70B-Instruct-HF | 0.5134 | 0.5582 | 0.2532 | 0.1623 | 0.6794 | 0.8242 | 0.232 | 0.8424 | 0.7365 | 0.3953 | 0.8654 | 0.0984 | 0.4611 | 0.8644 | 12.7358 | 0.9064 | 0.9481 | 16.8129 | 0.8795 | 0.2532 | 0.893 | 0.6408 | 0.7583 | 0.8642 | 0.3665 | 0.6614 | 0.6537 | 0.7633 | 0.8662 | 0.8654 | 0.8959 | 0.8638 | 0.7154 | 0.232 | 0.5582 | 0.8896 | 0.6975 | 0.3583 | 0.0024 | 0.0281 | 0.0177 | 0 | 0.7631 | 0.8328 | 12.6953 | 0.8383 | 0.9004 | 11.1724 | 0.7455 | 0.6918 | 3.4088 | 23.8265 | 9.8542 | 0.0984 | 21.032 | LlamaForCausalLM | bfloat16 | llama3.1 | 70.554 | 1,680 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | cyberagent/Llama-3.1-70B-Japanese-Instruct-2407 | 0.6705 | 0.5823 | 0.5464 | 0.2862 | 0.7522 | 0.9108 | 0.924 | 0.8518 | 0.7869 | 0.7169 | 0.9188 | 0.0989 | 0.8472 | 0.8747 | 15.2977 | 0.9123 | 0.9586 | 18.0168 | 0.8888 | 0.5464 | 0.9311 | 0.658 | 0.85 | 0.9589 | 0.6432 | 0.7159 | 0.7634 | 0.8106 | 0.8526 | 0.9188 | 0.878 | 0.8431 | 0.8424 | 0.924 | 0.5823 | 0.9699 | 0.7885 | 0.6602 | 0.0513 | 0.416 | 0.0177 | 0.0609 | 0.8852 | 0.8463 | 14.9733 | 0.8474 | 0.9099 | 12.2264 | 0.7588 | 0.6921 | 3.1649 | 23.7422 | 9.8925 | 0.0989 | 20.9876 | LlamaForCausalLM | bfloat16 | llama3.1 | 70.554 | 64 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | cyberagent/Llama-3.1-70B-Japanese-Instruct-2407 | 0.4359 | 0.5823 | 0.1812 | 0.1673 | 0.1655 | 0.8665 | 0.246 | 0.8368 | 0.752 | 0.1899 | 0.7083 | 0.0989 | 0.2384 | 0.8693 | 14.3716 | 0.907 | 0.9579 | 18.4078 | 0.888 | 0.1812 | 0.9181 | 0.6839 | 0.7514 | 0.9106 | 0.196 | 0.0085 | 0.6878 | 0.7879 | 0.8492 | 0.7083 | 0.8787 | 0.862 | 0.7708 | 0.246 | 0.5823 | 0.9699 | 0.3225 | 0.1352 | 0.0024 | 0.0102 | 0 | 0.0009 | 0.823 | 0.7949 | 10.767 | 0.8057 | 0.9029 | 11.2189 | 0.7465 | 0.6921 | 3.1649 | 23.7422 | 9.8925 | 0.0989 | 20.9876 | LlamaForCausalLM | bfloat16 | llama3.1 | 70.554 | 64 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | SakanaAI/EvoLLM-JP-A-v1-7B | 0.3075 | 0.0783 | 0.0602 | 0.092 | 0.1096 | 0.4092 | 0.18 | 0.7626 | 0.5566 | 0.2941 | 0.7324 | 0.1075 | 0.2852 | 0.7996 | 7.8617 | 0.8364 | 0.9257 | 11.172 | 0.8301 | 0.0602 | 0.5496 | 0.4741 | 0.3583 | 0.3655 | 0.3017 | 0.1214 | 0.5329 | 0.6301 | 0.7875 | 0.7324 | 0.814 | 0.7803 | 0.3126 | 0.18 | 0.0783 | 0.2851 | 0.0977 | 0.2955 | 0.0064 | 0 | 0 | 0 | 0.4537 | 0.7262 | 5.4783 | 0.7133 | 0.8689 | 7.0128 | 0.6707 | 0.7015 | 2.2417 | 31.8526 | 10.7599 | 0.1075 | 26.5604 | MistralForCausalLM | bfloat16 | apache-2.0 | 7.242 | 11 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | SakanaAI/EvoLLM-JP-A-v1-7B | 0.4981 | 0.0783 | 0.4291 | 0.2025 | 0.4799 | 0.7665 | 0.662 | 0.8102 | 0.6447 | 0.411 | 0.8874 | 0.1075 | 0.4051 | 0.8354 | 9.5638 | 0.8798 | 0.9423 | 14.1638 | 0.8646 | 0.4291 | 0.8469 | 0.5718 | 0.6458 | 0.8686 | 0.4383 | 0.4262 | 0.6828 | 0.6806 | 0.6424 | 0.8874 | 0.8239 | 0.7917 | 0.5839 | 0.662 | 0.0783 | 0.2851 | 0.5337 | 0.3898 | 0.0079 | 0.3115 | 0.0354 | 0.0561 | 0.6015 | 0.7786 | 8.2927 | 0.7788 | 0.88 | 8.3322 | 0.7178 | 0.7015 | 2.2417 | 31.8526 | 10.7599 | 0.1075 | 26.5604 | MistralForCausalLM | bfloat16 | apache-2.0 | 7.242 | 11 | main | 4 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | meta-llama/Llama-3.2-3B-Instruct | 0.2042 | 0.0201 | 0 | 0.0345 | 0.1693 | 0.1582 | 0.008 | 0.6778 | 0.3061 | 0.1221 | 0.6936 | 0.0562 | 0.0882 | 0.794 | 7.0688 | 0.8148 | 0.906 | 9.086 | 0.7803 | 0 | 0.01 | 0.3851 | 0.5 | 0.2118 | 0.1375 | 0.0508 | 0.2342 | 0.1307 | 0.2807 | 0.6936 | 0.0555 | 0.0543 | 0.2526 | 0.008 | 0.0201 | 0.0843 | 0.2879 | 0.1406 | 0 | 0.0009 | 0 | 0 | 0.1716 | 0.6874 | 3.9743 | 0.6419 | 0.8229 | 5.2307 | 0.4743 | 0.6484 | 1.9187 | 16.1297 | 5.6101 | 0.0562 | 14.0826 | LlamaForCausalLM | bfloat16 | llama3.2 | 3.213 | 638 | main | 0 | False | v1.4.1 | v0.6.3.post1 |
| ⭕ : instruction-tuned | meta-llama/Llama-3.2-3B-Instruct | 0.4111 | 0.0201 | 0.403 | 0.1037 | 0.4761 | 0.5994 | 0.61 | 0.7681 | 0.3565 | 0.2786 | 0.8506 | 0.0562 | 0.2529 | 0.8285 | 9.1433 | 0.86 | 0.9366 | 13.7136 | 0.8465 | 0.403 | 0.6528 | 0.3477 | 0.4958 | 0.7551 | 0.3175 | 0.3973 | 0.3414 | 0.4059 | 0.1916 | 0.8506 | 0.1652 | 0.1046 | 0.3903 | 0.61 | 0.0201 | 0.0843 | 0.5548 | 0.2655 | 0.0153 | 0.1642 | 0.0265 | 0.022 | 0.2904 | 0.7271 | 7.1521 | 0.6776 | 0.8772 | 8.3252 | 0.6881 | 0.6484 | 1.9187 | 16.1297 | 5.6101 | 0.0562 | 14.0826 | LlamaForCausalLM | bfloat16 | llama3.2 | 3.213 | 638 | main | 4 | False | v1.4.1 | v0.6.3.post1 |

Downloads last month: 2,457