---
license: cc-by-sa-3.0
license_name: cc-by-sa
configs:
  - config_name: en
    data_files: en.json
    default: true
  - config_name: ca
    data_files: ca.json
  - config_name: de
    data_files: de.json
  - config_name: es
    data_files: es.json
  - config_name: el
    data_files: el.json
  - config_name: fa
    data_files: fa.json
  - config_name: fi
    data_files: fi.json
  - config_name: fr
    data_files: fr.json
  - config_name: it
    data_files: it.json
  - config_name: pl
    data_files: pl.json
  - config_name: pt
    data_files: pt.json
  - config_name: ru
    data_files: ru.json
  - config_name: sv
    data_files: sv.json
  - config_name: ua
    data_files: ua.json
  - config_name: zh
    data_files: zh.json
---

by @mrfakename

~10k items from each language.
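Each language is available as a separate config, so a single language can be loaded on its own. A minimal sketch using 🤗 Datasets, where `REPO_ID` is a placeholder for this dataset's path on the Hub:

```python
from datasets import load_dataset

# "en" is the default config; pass any other language code from the
# config list above ("fr", "zh", ...) to load that language instead.
# REPO_ID is a placeholder; substitute this dataset's actual repo path.
ds = load_dataset("REPO_ID", "en")
```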

This dataset is intended only for training StyleTTS 2-related open-source models.

Processed using: https://huggingface.co/styletts2-community/data-preprocessing-scripts (StyleTTS 2 Community members only)

## License + Credits

The source data comes from Wikipedia and is licensed under CC-BY-SA 3.0; this dataset is therefore also licensed under CC-BY-SA 3.0.

## Processing

We used the following process to preprocess the dataset (an illustrative sketch follows the list):

  1. Download the data from Wikipedia by language, selecting only the first Parquet file and naming it with the language code
  2. Process it with the Data Preprocessing Scripts (StyleTTS 2 Community members only), modifying the code as needed for the language
  3. Script: Clean the text
  4. Script: Remove ultra-short phrases
  5. Script: Phonemize
  6. Script: Save JSON
  7. Upload the dataset
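
The actual preprocessing scripts are private to StyleTTS 2 Community members, so the following is only an illustrative sketch of steps 3–6 using the open-source `phonemizer` library (which requires the espeak-ng backend). The cleaning rule and the `MIN_WORDS` cutoff are assumptions, not the exact values used:

```python
import json
import re

from phonemizer import phonemize  # pip install phonemizer (requires espeak-ng)

MIN_WORDS = 3  # assumed cutoff; the real "ultra-short" threshold is not public

def clean_text(text: str) -> str:
    # Illustrative cleanup only: collapse whitespace. The real scripts
    # likely strip wiki markup and other artifacts as well.
    return re.sub(r"\s+", " ", text).strip()

def preprocess(texts, language="en-us", out_path="en.json"):
    records = []
    for raw in texts:
        text = clean_text(raw)            # step 3: clean the text
        if len(text.split()) < MIN_WORDS:
            continue                      # step 4: drop ultra-short phrases
        phonemes = phonemize(text, language=language,
                             backend="espeak", strip=True)  # step 5: phonemize
        records.append({"text": text, "phonemes": phonemes})
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False)  # step 6: save JSON

preprocess(["An example sentence drawn from a Wikipedia article."])
```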

## Note

East Asian languages are experimental and in beta. We do not distinguish between Traditional and Simplified Chinese; the dataset consists mainly of Simplified Chinese. We recommend converting characters to Simplified Chinese during inference using a library such as hanziconv or chinese-converter, as in the example below.
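
For example, with hanziconv:

```python
from hanziconv import HanziConv  # pip install hanziconv

# Convert Traditional Chinese input to Simplified before inference,
# to match the predominantly Simplified training data.
text = "漢語"  # Traditional input
simplified = HanziConv.toSimplified(text)
print(simplified)  # 汉语
```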