---
language:
- en
- uk
- ru
- de
- zh
- am
- ar
- hi
- es
license: openrail++
size_categories:
- 1K<n<10K
task_categories:
- text-generation
dataset_info:
  features:
  - name: toxic_sentence
    dtype: string
  splits:
  - name: en
    num_bytes: 24945
    num_examples: 400
  - name: ru
    num_bytes: 48249
    num_examples: 400
  - name: uk
    num_bytes: 40226
    num_examples: 400
  - name: de
    num_bytes: 44940
    num_examples: 400
  - name: es
    num_bytes: 30159
    num_examples: 400
  - name: am
    num_bytes: 72606
    num_examples: 400
  - name: zh
    num_bytes: 36219
    num_examples: 400
  - name: ar
    num_bytes: 44668
    num_examples: 400
  - name: hi
    num_bytes: 57291
    num_examples: 400
  download_size: 257617
  dataset_size: 399303
configs:
- config_name: default
  data_files:
  - split: en
    path: data/en-*
  - split: ru
    path: data/ru-*
  - split: uk
    path: data/uk-*
  - split: de
    path: data/de-*
  - split: es
    path: data/es-*
  - split: am
    path: data/am-*
  - split: zh
    path: data/zh-*
  - split: ar
    path: data/ar-*
  - split: hi
    path: data/hi-*
---
**MultiParaDetox**

This is the multilingual parallel dataset for text detoxification prepared for the [CLEF TextDetox 2024](https://pan.webis.de/clef24/pan24-web/text-detoxification.html) shared task.

For each of the 9 languages, we collected 1k pairs of toxic<->detoxified instances, split into two parts: dev (400 pairs) and test (600 pairs).

**!!!April 23rd update: We are releasing the parallel dev set! The test part for the final phase of the competition is available [here](https://huggingface.co/datasets/textdetox/multilingual_paradetox_test)!!!**
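
The dev set can be loaded with the 🤗 `datasets` library. Below is a minimal sketch, assuming the dataset is hosted under the `textdetox/multilingual_paradetox` repository id (adjust it to your copy if needed); each language is exposed as its own split with a single `toxic_sentence` column:

```python
from datasets import load_dataset

# Assumed repository id, inferred from this card; change it if your copy lives elsewhere.
REPO_ID = "textdetox/multilingual_paradetox"

# Each language is a separate split with a single `toxic_sentence` string column.
en_dev = load_dataset(REPO_ID, split="en")
print(en_dev)                       # 400 rows
print(en_dev[0]["toxic_sentence"])  # first English toxic sentence

# Or load all nine language splits at once as a DatasetDict.
all_langs = load_dataset(REPO_ID)
for lang, split in all_langs.items():
    print(f"{lang}: {len(split)} sentences")
```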

The sources of the original toxic sentences:

* English: [Jigsaw](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge), [Unitary AI Toxicity Dataset](https://github.com/unitaryai/detoxify)
* Russian: [Russian Language Toxic Comments](https://www.kaggle.com/datasets/blackmoon/russian-language-toxic-comments), [Toxic Russian Comments](https://www.kaggle.com/datasets/alexandersemiletov/toxic-russian-comments)
* Ukrainian: [Ukrainian Twitter texts](https://github.com/saganoren/ukr-twi-corpus)
* Spanish: [Detecting and Monitoring Hate Speech in Twitter](https://www.mdpi.com/1424-8220/19/21/4654), [Detoxis](https://rdcu.be/dwhxH), [RoBERTuito: a pre-trained language model for social media text in Spanish](https://aclanthology.org/2022.lrec-1.785/)
* German: [GermEval 2018, 2021](https://aclanthology.org/2021.germeval-1.1/)
* Amharic: [Amharic Hate Speech](https://github.com/uhh-lt/AmharicHateSpeech)
* Arabic: [OSACT4](https://edinburghnlp.inf.ed.ac.uk/workshops/OSACT4/)
* Hindi: [Hostility Detection Dataset in Hindi](https://competitions.codalab.org/competitions/26654#learn_the_details-dataset), [Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages](https://dl.acm.org/doi/pdf/10.1145/3368567.3368584?download=true)