# Training Data for Text Embedding Models
This repository contains raw datasets, all of which have also been formatted for easy training in the Embedding Model Datasets collection. We recommend looking there first.

The training files in this repository can be used to train text embedding models, e.g. with sentence-transformers.
## Data Format
All files are in a `jsonl.gz` format: each line contains a JSON object that represents one training example.

The JSON objects can come in different formats:

- **Pairs**: `["text1", "text2"]` - a positive pair that should be close in vector space.
- **Triplets**: `["anchor", "positive", "negative"]` - a triplet: the `positive` text should be close to the `anchor`, while the `negative` text should be distant from the `anchor`.
- **Sets**: `{"set": ["text1", "text2", ...]}` - a set of texts describing the same thing, e.g. different paraphrases of the same question or different captions for the same image. Any combination of the elements is considered a positive pair.
- **Query-Pairs**: `{"query": "text", "pos": ["text1", "text2", ...]}` - a query together with a set of positive texts. Can be formed into a pair `["query", "positive"]` by randomly selecting a text from `pos`.
- **Query-Triplets**: `{"query": "text", "pos": ["text1", "text2", ...], "neg": ["text1", "text2", ...]}` - a query together with a set of positive texts and a set of negative texts. Can be formed into a triplet `["query", "positive", "negative"]` by randomly selecting a text from `pos` and `neg`.
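To make the formats concrete, here is a minimal Python sketch of a loader that normalizes every line of such a file into a pair or triplet. The function name `iter_examples` and the random-sampling strategy are illustrative choices, not something defined by this repository.

```python
import gzip
import json
import random

def iter_examples(path):
    """Yield (anchor, positive) or (anchor, positive, negative) tuples
    from one jsonl.gz training file in any of the formats above."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            obj = json.loads(line)
            if isinstance(obj, list):
                # Pairs ["text1", "text2"] or triplets ["anchor", "positive", "negative"]
                yield tuple(obj)
            elif "set" in obj:
                # Sets: any two distinct texts from the set form a positive pair
                yield tuple(random.sample(obj["set"], 2))
            elif "neg" in obj:
                # Query-triplets: sample one positive and one negative per line
                yield obj["query"], random.choice(obj["pos"]), random.choice(obj["neg"])
            else:
                # Query-pairs: sample one positive per line
                yield obj["query"], random.choice(obj["pos"])
```

Re-sampling from `pos` and `neg` on every pass over the data, rather than fixing one choice up front, is a cheap way to get more variety out of the multi-positive formats.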
## Available Datasets
Note: I'm currently in the process of uploading the files. Please check again next week for the full list of datasets.
We measure the performance of each training dataset by training the nreimers/MiniLM-L6-H384-uncased model on it with MultipleNegativesRankingLoss at a batch size of 256 for 2000 training steps. The performance is then averaged over 14 sentence embedding benchmark datasets from diverse domains (Reddit, Twitter, news, publications, e-mails, ...).
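For reference, a training run along these lines can be reproduced roughly as follows with sentence-transformers' classic `model.fit` API. This is a sketch under the setup described above; the two inline `InputExample`s are placeholder data, and in practice you would build examples from one of the `jsonl.gz` files (e.g. with a loader like the one sketched above).

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Base model used for the benchmark runs described above
model = SentenceTransformer("nreimers/MiniLM-L6-H384-uncased")

# Placeholder training data; in practice, build InputExamples from a jsonl.gz file
train_examples = [
    InputExample(texts=["What is the capital of France?", "Paris is the capital of France."]),
    InputExample(texts=["How do plants make food?", "Plants produce food via photosynthesis."]),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=256)

# In-batch negatives loss used for the reported scores
train_loss = losses.MultipleNegativesRankingLoss(model)

# 2000 steps at batch size 256, matching the benchmark setup
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    steps_per_epoch=2000,
)
```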
Dataset | Description | Size (#Lines) | Performance | Reference |
---|---|---|---|---|
gooaq_pairs.jsonl.gz | (Question, Answer)-Pairs from Google auto suggest | 3,012,496 | 59.06 | GooAQ |
yahoo_answers_title_answer.jsonl.gz | (Title, Answer) pairs from Yahoo Answers | 1,198,260 | 58.65 | Yahoo Answers |
msmarco-triplets.jsonl.gz | (Question, Answer, Negative)-Triplets from MS MARCO Passages dataset | 499,184 | 58.76 | MS MARCO Passages |
stackexchange_duplicate_questions_title_title.jsonl.gz | (Title, Title) pairs of duplicate questions from StackExchange | 304,525 | 58.47 | Stack Exchange Data API |
eli5_question_answer.jsonl.gz | (Question, Answer)-Pairs from ELI5 dataset | 325,475 | 58.24 | ELI5 |
yahoo_answers_title_question.jsonl.gz | (Title, Question_Body) pairs from Yahoo Answers | 659,896 | 58.05 | Yahoo Answers |
squad_pairs.jsonl.gz | (Question, Answer_Passage) Pairs from SQuAD dataset | 87,599 | 58.02 | SQuAD |
yahoo_answers_question_answer.jsonl.gz | (Question_Body, Answer) pairs from Yahoo Answers | 681,164 | 57.74 | Yahoo Answers |
wikihow.jsonl.gz | (Summary, Text) from WikiHow | 128,542 | 57.67 | WikiHow |
amazon_review_2018.jsonl.gz | (Title, review) pairs from Amazon | 87,877,725 | 57.65 | Amazon review data (2018) |
NQ-train_pairs.jsonl.gz | Training pairs (query, answer_passage) from the NQ dataset | 100,231 | 57.48 | Natural Questions |
amazon-qa.jsonl.gz | (Question, Answer) pairs from Amazon | 1,095,290 | 57.48 | AmazonQA |
S2ORC_title_abstract.jsonl.gz | (Title, Abstract) pairs of scientific papers | 41,769,185 | 57.39 | S2ORC |
quora_duplicates.jsonl.gz | Duplicate question pairs from Quora | 103,663 | 57.36 | QQP |
WikiAnswers.jsonl.gz | Sets of duplicate questions | 27,383,151 | 57.34 | WikiAnswers Corpus |
searchQA_top5_snippets.jsonl.gz | Question + Top5 text snippets from the SearchQA dataset | 117,220 | 57.34 | search_qa |
stackexchange_duplicate_questions_title-body_title-body.jsonl.gz | (Title+Body, Title+Body) pairs of duplicate questions from StackExchange | 250,460 | 57.30 | Stack Exchange Data API |
S2ORC_citations_titles.jsonl.gz | Citation network (paper titles) | 51,030,086 | 57.28 | S2ORC |
stackexchange_duplicate_questions_body_body.jsonl.gz | (Body, Body) pairs of duplicate questions from StackExchange | 250,519 | 57.26 | Stack Exchange Data API |
agnews.jsonl.gz | (Title, Description) pairs of news articles from the AG News dataset | 1,157,745 | 57.25 | AG news corpus |
quora_duplicates_triplets.jsonl.gz | Duplicate question pairs from Quora with additional hard negatives (mined & denoised by cross-encoder) | 101,762 | 56.97 | QQP |
AllNLI.jsonl.gz | Combination of SNLI + MultiNLI; triplets of (Anchor, Entailment_Text, Contradiction_Text) | 277,230 | 56.57 | SNLI and MNLI |
npr.jsonl.gz | (Title, Body) pairs from the npr.org website | 594,384 | 56.44 | Pushshift |
specter_train_triples.jsonl.gz | Triplets (Title, related_title, hard_negative) for Scientific Publications from Specter | 684,100 | 56.32 | SPECTER |
SimpleWiki.jsonl.gz | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 102,225 | 56.15 | SimpleWiki |
PAQ_pairs.jsonl.gz | Training pairs (query, answer_passage) from the PAQ dataset | 64,371,441 | 56.11 | PAQ |
altlex.jsonl.gz | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 112,696 | 55.95 | altlex |
ccnews_title_text.jsonl.gz | (Title, article) pairs from the CC News dataset | 614,664 | 55.84 | CC-News |
codesearchnet.jsonl.gz | (comment, code) pairs from the CodeSearchNet corpus, covering open-source libraries hosted on GitHub in several programming languages | 1,151,414 | 55.80 | CodeSearchNet |
S2ORC_citations_abstracts.jsonl.gz | Citation network (paper abstracts) | 39,567,485 | 55.74 | S2ORC |
sentence-compression.jsonl.gz | (long_text, short_text) pairs for sentence compression | 180,000 | 55.63 | Sentence-Compression |
TriviaQA_pairs.jsonl.gz | Pairs (query, answer) from TriviaQA dataset | 73,346 | 55.56 | TriviaQA |
cnn_dailymail_splitted.jsonl.gz | (article, highlight sentence) with individual highlight sentences for each news article | 311,971 | 55.36 | CNN Dailymail Dataset |
cnn_dailymail.jsonl.gz | (highlight sentences, article) with all highlight sentences as one text for each news article | 311,971 | 55.27 | CNN Dailymail Dataset |
flickr30k_captions.jsonl.gz | Different captions for the same image from the Flickr30k dataset | 31,783 | 54.68 | Flickr30k |
xsum.jsonl.gz | (Summary, News Article) pairs from XSUM dataset | 226,711 | 53.86 | xsum |
coco_captions.jsonl.gz | Different captions for the same image | 82,783 | 53.77 | COCO |
Disclaimer: We only distribute these datasets in a specific format; we do not vouch for their quality or fairness, nor claim that you have a license to use them. It remains your responsibility to determine whether you have permission to use each dataset under its license and to cite the dataset's rightful owner. Please check the individual dataset webpages for the license agreements.
If you're a dataset owner and wish to update any part of it, or do not want your dataset to be included in this dataset collection, feel free to contact me.