---
license: mit
language:
- en
paperswithcode_id: flickr30k
pretty_name: flickr30k-captions
---
Dataset Card for "flickr30k-captions"
Table of Contents
- Dataset Description
- Dataset Structure
- Usage Example
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://shannon.cs.illinois.edu/DenotationGraph/
- Repository: More Information Needed
- Paper: https://transacl.org/ojs/index.php/tacl/article/view/229/33
- Point of Contact: Peter Young, Alice Lai, Micah Hodosh, Julia Hockenmaier
Dataset Summary
We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions.
Disclaimer: The team releasing Flickr30k did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.
Supported Tasks
- Sentence Transformers training; useful for semantic search and sentence similarity.
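For example, sentence similarity with a pretrained model can be computed as in the following sketch (it assumes the sentence-transformers library is installed; the all-MiniLM-L6-v2 checkpoint and the two captions are illustrative choices, not part of this dataset):

from sentence_transformers import SentenceTransformer, util

# Illustrative pretrained checkpoint; any Sentence Transformers model works here.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode([
    "A man is playing a guitar on stage.",
    "A musician performs in front of a crowd.",
])
# Cosine similarity close to 1 indicates semantically similar captions.
print(util.cos_sim(embeddings[0], embeddings[1]))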
Languages
- English.
Dataset Structure
Each example in the dataset contains a quintet of similar sentences (five captions describing the same image) and is formatted as a dictionary with the key "set" whose value is the list of sentences:
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
...
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
This dataset is useful for training Sentence Transformers models. Refer to the Hugging Face blog post on how to train models using similar pairs of sentences; a minimal training sketch is also included at the end of the usage example below.
Usage Example
Install the 🤗 Datasets library with pip install datasets
and load the dataset from the Hub with:
from datasets import load_dataset
dataset = load_dataset("embedding-data/flickr30k-captions")
The dataset is loaded as a DatasetDict and has the format:
DatasetDict({
    train: Dataset({
        features: ['set'],
        num_rows: 31783
    })
})
Review an example i with:
dataset["train"][i]["set"]
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
Contributions
Thanks to Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier for adding this dataset.