# Filtered WIT, an Image-Text Dataset.
A reliable dataset for training image-text models.
You can find WIT, the Wikipedia Image Text Dataset, [here](https://github.com/google-research-datasets/wit).
The data was taken from [dalle-mini/wit](https://huggingface.co/datasets/dalle-mini/wit).
## Author
- [Aarush Katta](https://github.com/ARKseal)
## Data Structure
The data is stored as tars, with 10,000 samples per tar.
The parquets contain the metadata of each tar and were created using [this script](https://huggingface.co/datasets/laion/filtered-wit/blob/main/wit_create_meta.py).
Each sample in a tar consists of a `.jpg`, a `.txt`, and a `.json` file:
the image is stored in the `.jpg`, the caption in the `.txt`, and the metadata in the `.json`.
The preferred method to read the data is [WebDataset](https://github.com/webdataset/webdataset).
Here's an example:
```python
import webdataset as wds

# Each sample yields the raw bytes of the .txt, .jpg, and .json entries
dataset = wds.WebDataset('data/00000.tar').to_tuple('txt', 'jpg', 'json')

for text, image, meta in dataset:
    print(
        text[:50],
        image[:50],
        meta[:50],
    )
```
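If WebDataset is not installed, the same tars can also be inspected with Python's built-in `tarfile` module. This is a minimal sketch (the `iter_samples` helper is not part of the dataset's tooling, just an illustration of the tar layout described above):

```python
import tarfile
from collections import defaultdict

def iter_samples(tar_path):
    """Group the .jpg/.txt/.json entries of a tar by their shared sample key."""
    samples = defaultdict(dict)
    with tarfile.open(tar_path) as tar:
        for member in tar:
            if not member.isfile():
                continue
            # '00000.jpg' -> key '00000', ext 'jpg'
            key, _, ext = member.name.rpartition('.')
            samples[key][ext] = tar.extractfile(member).read()
    # Yields (key, {'jpg': bytes, 'txt': bytes, 'json': bytes}) pairs
    yield from samples.items()
```

For example, `for key, parts in iter_samples('data/00000.tar'): print(key, parts['txt'][:50])` prints the first 50 bytes of each caption.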
## Filtering
Each sample has 8 possible captions, which were compared to the image using [CLIP ViT-B/32](https://arxiv.org/abs/2103.00020).
The text was encoded using the [multilingual CLIP text encoder](https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1).
Each possible caption was compared to the encoded image using cosine similarity,
and kept if the similarity was greater than `0.26`.
The new caption is the concatenation of the kept captions; samples with no remaining caption were dropped.
The script used is [filter_wit.py](https://huggingface.co/datasets/laion/filtered-wit/blob/main/filter_wit.py).
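The core of that filtering step can be sketched with precomputed embeddings. This is an illustrative sketch, not the actual script: `filter_captions` is a hypothetical helper, and the kept captions are joined with a space here, whereas the exact concatenation is defined in `filter_wit.py`:

```python
import numpy as np

SIM_THRESHOLD = 0.26  # cosine-similarity cutoff used for this dataset

def filter_captions(image_emb, caption_embs, captions):
    """Keep captions whose cosine similarity to the image exceeds the threshold.

    image_emb: (d,) image embedding; caption_embs: (n, d) caption embeddings.
    Returns the kept captions concatenated, or None if none pass
    (such samples are dropped).
    """
    # Normalize so the dot product equals cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb)
    caption_embs = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    sims = caption_embs @ image_emb  # one similarity score per caption
    kept = [c for c, s in zip(captions, sims) if s > SIM_THRESHOLD]
    return " ".join(kept) if kept else None
```

In the real pipeline the caption embeddings come from the multilingual CLIP text encoder and the image embeddings from CLIP ViT-B/32; here they are taken as given.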