| dataset_name_MTEB | dataset_name_HF | text_leaks_train_wrt_test | text_leaks_train_wrt_test_% | text_leaks_valid_wrt_test | text_leaks_valid_wrt_test_% | text_duplication_train | text_duplication_val | text_duplication_test | text_test_biased | text_and_label_leaks_train_wrt_test | text_and_label_leaks_train_wrt_test_% | text_and_label_leaks_valid_wrt_test | text_and_label_leaks_valid_wrt_test_% | text_and_label_duplication_train | text_and_label_duplication_val | text_and_label_duplication_test | text_and_label_test_biased | difference_annotation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AmazonCounterfactualClassification | mteb/amazon_counterfactual | 0 | 0.0 | 0 | 0.0 | 0 | 0 | 0 | 0.0% | 0 | 0.0 | 0 | 0.0 | 0 | 0 | 0 | 0.0% | 0 |
| AmazonPolarityClassification | mteb/amazon_polarity | 0 | 0.0 | NR | NR | 0 | NR | 0 | 0.0% | 0 | 0.0 | NR | NR | 0 | NR | 0 | 0.0% | 0 |
| AmazonReviewsClassification | mteb/amazon_reviews_multi_fr | 2 | 0.001 | 0 | 0.0 | 109 | 0 | 0 | 0.04% | 2 | 0.001 | 0 | 0.0 | 106 | 0 | 0 | 0.04% | 0 |
| Banking77Classification | mteb/banking77 | 0 | 0.0 | NR | NR | 0 | NR | 0 | 0.0% | 0 | 0.0 | NR | NR | 0 | NR | 0 | 0.0% | 0 |
| EmotionClassification | mteb/emotion | 11 | 0.069 | 3 | 0.15 | 31 | 2 | 0 | 0.7% | 0 | 0.0 | 0 | 0.0 | 1 | 0 | 0 | 0.0% | 14 |
| ImdbClassification | mteb/imdb | 123 | 0.492 | NR | NR | 96 | NR | 199 | 1.288% | 123 | 0.492 | NR | NR | 96 | NR | 199 | 1.288% | 0 |
| MassiveIntentClassification | mteb/amazon_massive_intent | 21 | 0.182 | 5 | 0.246 | 46 | 2 | 4 | 1.009% | 19 | 0.165 | 5 | 0.246 | 42 | 2 | 4 | 0.941% | 2 |
| MassiveScenarioClassification | mteb/amazon_massive_scenario | 21 | 0.182 | 5 | 0.246 | 46 | 2 | 4 | 1.009% | 19 | 0.165 | 5 | 0.246 | 42 | 2 | 4 | 0.941% | 2 |
| MTOPDomainClassification | mteb/mtop_domain | 15 | 0.096 | 4 | 0.179 | 33 | 0 | 2 | 0.479% | 15 | 0.096 | 4 | 0.179 | 33 | 0 | 2 | 0.479% | 0 |
| MTOPIntentClassification | mteb/mtop_intent | 15 | 0.096 | 4 | 0.179 | 33 | 0 | 2 | 0.479% | 15 | 0.096 | 4 | 0.179 | 33 | 0 | 2 | 0.479% | 0 |
| ToxicConversationsClassification | mteb/toxic_conversations_50k | 107 | 0.214 | NR | NR | 103 | NR | 104 | 0.422% | 107 | 0.214 | NR | NR | 100 | NR | 103 | 0.42% | 0 |
| TweetSentimentExtractionClassification | mteb/tweet_sentiment_extraction | 0 | 0.0 | NR | NR | 0 | NR | 0 | 0.0% | 0 | 0.0 | NR | NR | 0 | NR | 0 | 0.0% | 0 |
| ArxivClusteringP2P | mteb/arxiv-clustering-p2p | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| ArxivClusteringS2S | mteb/arxiv-clustering-s2s | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| BiorxivClusteringP2P | mteb/biorxiv-clustering-p2p | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| BiorxivClusteringS2S | mteb/biorxiv-clustering-s2s | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| MedrxivClusteringP2P | mteb/medrxiv-clustering-p2p | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| MedrxivClusteringS2S | mteb/medrxiv-clustering-s2s | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| RedditClustering | mteb/reddit-clustering | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| RedditClusteringP2P | mteb/reddit-clustering-p2p | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| StackExchangeClustering | mteb/stackexchange-clustering | NR | NR | 0 | 0.0 | NR | 0 | 0 | 0.0% | NR | NR | 0 | 0.0 | NR | 0 | 0 | 0.0% | 0 |
| StackExchangeClusteringP2P | mteb/stackexchange-clustering-p2p | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| TwentyNewsgroupsClustering | mteb/twentynewsgroups-clustering | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| SprintDuplicateQuestions | mteb/sprintduplicatequestions-pairclassification | NR | NR | 1 | 0.001 | NR | 0 | 0 | 0.001% | NR | NR | 1 | 0.001 | NR | 0 | 0 | 0.001% | 0 |
| TwitterSemEval2015 | mteb/twittersemeval2015-pairclassification | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| TwitterURLCorpus | mteb/twitterurlcorpus-pairclassification | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| AskUbuntuDupQuestions | mteb/askubuntudupquestions-reranking | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| MindSmallReranking | mteb/mind_small | 8 | 0.005 | NR | NR | 863 | NR | 342 | 0.493% | 0 | 0.0 | NR | NR | 92 | NR | 63 | 0.089% | 8 |
| SciDocsRR | mteb/scidocs-reranking | NR | NR | 0 | 0.0 | NR | 0 | 0 | 0.0% | NR | NR | 0 | 0.0 | NR | 0 | 0 | 0.0% | 0 |
| StackOverflowDupQuestions | mteb/stackoverflowdupquestions-reranking | 6 | 0.03 | NR | NR | 20 | NR | 0 | 0.201% | 1 | 0.005 | NR | NR | 4 | NR | 0 | 0.033% | 5 |
| ArguAna | mteb/arguana | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| ClimateFEVER | mteb/climate-fever | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| CQADupstackAndroidRetrieval | mteb/cqadupstack-android | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| CQADupstackEnglishRetrieval | mteb/cqadupstack-english | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| CQADupstackGamingRetrieval | mteb/cqadupstack-gaming | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| CQADupstackGisRetrieval | mteb/cqadupstack-gis | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| CQADupstackMathematicaRetrieval | mteb/cqadupstack-mathematica | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| CQADupstackPhysicsRetrieval | mteb/cqadupstack-physics | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| CQADupstackProgrammersRetrieval | mteb/cqadupstack-programmers | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| CQADupstackStatsRetrieval | mteb/cqadupstack-stats | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| CQADupstackTexRetrieval | mteb/cqadupstack-tex | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| CQADupstackUnixRetrieval | mteb/cqadupstack-unix | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| CQADupstackWebmastersRetrieval | mteb/cqadupstack-webmasters | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| CQADupstackWordpressRetrieval | mteb/cqadupstack-wordpress | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| DBPedia | mteb/dbpedia | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| FEVER | mteb/fever | 1 | 0.001 | 0 | 0.0 | 8958 | 176 | 93 | 1.184% | 1 | 0.001 | 0 | 0.0 | 8958 | 176 | 93 | 1.184% | 0 |
| FIQA2018 | mteb/fiqa | 0 | 0.0 | 0 | 0.0 | 1 | 0 | 0 | 0.0% | 0 | 0.0 | 0 | 0.0 | 1 | 0 | 0 | 0.0% | 0 |
| HotpotQA | mteb/hotpotqa | 0 | 0.0 | 0 | 0.0 | 10 | 0 | 0 | 0.0% | 0 | 0.0 | 0 | 0.0 | 10 | 0 | 0 | 0.0% | 0 |
| MSMARCO | mteb/msmarco | 0 | 0.0 | 0 | 0.0 | 0 | 0 | 1 | 0.011% | 0 | 0.0 | 0 | 0.0 | 0 | 0 | 1 | 0.011% | 0 |
| NFCorpus | mteb/nfcorpus | 49 | 0.044 | 4 | 0.035 | 599 | 54 | 46 | 0.803% | 49 | 0.044 | 4 | 0.035 | 599 | 54 | 46 | 0.803% | 0 |
| NQ | mteb/nq | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| SCIDOCS | mteb/scidocs | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| SciFact | mteb/scifact | 2 | 0.218 | NR | NR | 3 | NR | 0 | 0.59% | 2 | 0.218 | NR | NR | 3 | NR | 0 | 0.59% | 0 |
| Touche2020 | mteb/touche2020 | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| TRECCOVID | mteb/trec-covid | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| BIOSSES | mteb/biosses-sts | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| SICK-R | mteb/sickr-sts | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| STS12 | mteb/sts12-sts | 0 | 0.0 | NR | NR | 12 | NR | 206 | 6.628% | 0 | 0.0 | NR | NR | 11 | NR | 196 | 6.306% | 0 |
| STS13 | mteb/sts13-sts | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| STS14 | mteb/sts14-sts | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| STS15 | mteb/sts15-sts | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| STS16 | mteb/sts16-sts | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| STS17 | mteb/sts17-sts | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |
| STSBenchmarkMultilingualSTS | mteb/stsb_multi_mt | 7 | 0.122 | 2 | 0.133 | 43 | 2 | 1 | 0.725% | 1 | 0.017 | 2 | 0.133 | 39 | 2 | 1 | 0.29% | 6 |
| STS22 | mteb/sts22-crosslingual-sts | 0 | 0.0 | NR | NR | 20 | NR | 0 | 0.0% | 0 | 0.0 | NR | NR | 13 | NR | 0 | 0.0% | 0 |
| SummEval | mteb/summeval | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK |

# LLE MTEB

This dataset lists the presence or absence of leaks and duplicate data in the datasets constituting the MTEB leaderboard (EN & FR).

For more information about the methodology and about what the column names correspond to, please consult the accompanying blog post.
To keep things simple, we invite the reader to look at the percentages in the text_and_label_test_biased column, which give the proportion of biased data in the test split of the dataset in question.
Rows containing "OK" correspond to datasets that consist of a single test split: in the absence of train or validation splits, there can be no leaks.
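
As a minimal sketch, the biased tasks can be listed directly from this dataset with 🤗 Datasets. The repository id below is a placeholder (the actual Hub id of this dataset should be substituted), and the single "train" split is an assumption:

```python
# Minimal sketch: list the MTEB tasks whose test split contains biased data,
# using the text_and_label_test_biased column described above.
# "<org>/LLE_MTEB" is a placeholder -- substitute the actual Hub id of this dataset.
from datasets import load_dataset

rows = load_dataset("<org>/LLE_MTEB", split="train")  # assuming a single "train" split

for row in rows:
    biased = row["text_and_label_test_biased"]
    if biased in ("OK", "0.0%"):  # "OK" = test-only dataset, "0.0%" = no bias found
        continue
    print(f'{row["dataset_name_MTEB"]}: {biased} of the test split is biased')
```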

## MTEB EN

For the English part, we evaluated the quality of all the datasets listed in the run_mteb_english file.
We observe that 24% of the MTEB EN datasets contain leaks (affecting up to 6.3% of the test split).
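
To illustrate what such a figure means, a per-dataset text-leak rate can be sketched as the share of test items whose text already appears verbatim in the train split. This is only an approximation of the methodology detailed in the blog post, and the column and split names used below are assumptions to adapt per dataset:

```python
# Minimal sketch of a per-dataset text-leak rate: the share of test items whose
# text appears verbatim in the train split. Column and split names are assumptions.
from datasets import load_dataset

def text_leak_rate(hf_name: str, text_column: str = "text") -> float:
    ds = load_dataset(hf_name)
    train_texts = set(ds["train"][text_column])
    test_texts = ds["test"][text_column]
    leaked = sum(text in train_texts for text in test_texts)
    return 100.0 * leaked / len(test_texts)

# Example: print(f"{text_leak_rate('mteb/emotion'):.2f}% of the test split leaks from train")
```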

## MTEB FR

For the French part, we evaluated the quality of all the datasets listed in the run_mteb_french file.
Note: we were unable to download the datasets for the XPQARetrieval (jinaai/xpqa) and MintakaRetrieval (jinaai/mintakaqa) tasks due to encoding problems, so we used the original Amazon datasets available on GitHub instead. There may well be differences between what is on MTEB and what is on GitHub, so in what follows we report results without these datasets (24 datasets instead of 26), although the reader can find in this dataset the results we obtained with the GitHub versions.
We observe that 46% of the MTEB FR datasets contain leaks (an indicative figure until the 7 missing datasets can be evaluated).

## Global

Note that the percentages reported here are per-dataset evaluations; in reality the bias may be larger.
Indeed, if you concatenate datasets (for example, all the train splits available for the STS task in a given language), an item in the train split of dataset A may be absent from the test split of A yet present in the test split of dataset B, thus creating a leak. The same logic applies to duplicated data.
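
The sketch below illustrates this cross-dataset scenario: pooling several train splits and counting how many items of another dataset's test split appear in the pool. The dataset ids and the column name in the commented example are illustrative assumptions only:

```python
# Minimal sketch of the cross-dataset leak described above: items pooled from the
# train splits of several datasets may reappear in the test split of another one.
from datasets import load_dataset

def cross_dataset_leaks(train_ids: list[str], test_id: str, column: str) -> int:
    pooled_train = set()
    for name in train_ids:
        pooled_train.update(load_dataset(name, split="train")[column])
    test_texts = load_dataset(test_id, split="test")[column]
    return sum(text in pooled_train for text in test_texts)

# Hypothetical usage (names are placeholders, not real recommendations):
# cross_dataset_leaks(["dataset_A", "dataset_B"], "dataset_C", "sentence1")
```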

We therefore invite users to take care when training their models (and even to avoid using the train splits of any dataset listed here as having leaks).
We have also reached out to the MTEB maintainers, who are currently looking into cleaning up their leaderboards so that users can keep trusting the tool when evaluating or choosing a model for their use case.
