Using language as a basis for splitting datasets

#3
by julyxia - opened

Can you divide the dataset by language, similar to https://huggingface.co/datasets/facebook/voxpopuli? In particular, we would prefer to download only the minority languages.

Amphion org

Thank you for your attention to Emilia. The dataset is already divided by language: click "Files and versions" and you will see a subfolder for each language, including EN, ZH, DE, FR, JP, and KO.
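
For reference, a minimal sketch to confirm the per-language layout from Python using huggingface_hub (list_repo_files only lists file paths; it does not download anything):

>>> from huggingface_hub import list_repo_files
>>> files = list_repo_files("amphion/Emilia-Dataset", repo_type="dataset")
>>> # Top-level prefixes correspond to the language subfolders (DE, EN, ...)
>>> sorted({f.split("/")[0] for f in files if "/" in f})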

Thanks for your reply. I would like to run load_dataset("amphion/Emilia-Dataset", languages=['de']) and download only the de data instead of the full dataset by specifying the language ID in the languages parameter.

Amphion org

Thanks for your suggestion. You can use the data_files argument of load_dataset, as described in the Hugging Face docs, to load the data for a specific language.

E.g.

>>> from datasets import load_dataset
>>> path = "DE/*.tar"
>>> dataset = load_dataset("amphion/Emilia-Dataset", data_files={"de": path}, split="de", streaming=True)

We are planning to add this to our README.md. Please let us know if this works :)
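
To sanity-check the streamed subset, a small follow-up sketch that pulls a few samples and prints their fields (the exact fields depend on what the tar shards contain, so the sketch inspects them rather than assuming names):

>>> from itertools import islice
>>> for sample in islice(dataset, 3):  # stream only the first 3 samples
...     print(sample.keys())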

Amphion org

This is my test; it looks like it is working since we do have 90 DE tar files.

[screenshot of the test]
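
For anyone who wants to reproduce that check without opening the web UI, a quick sketch that just counts the DE tar shards listed in the repo:

>>> from huggingface_hub import list_repo_files
>>> files = list_repo_files("amphion/Emilia-Dataset", repo_type="dataset")
>>> sum(1 for f in files if f.startswith("DE/") and f.endswith(".tar"))  # should match the 90 DE shards mentioned above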

Thank you very much, and best wishes to you!

yuantuo666 changed discussion status to closed
