---
language:
- en
- uk
- ru
- de
- zh
- am
- ar
- hi
- es
license: openrail++
size_categories:
- 1K<n<10K
task_categories:
- text-generation
dataset_info:
features:
- name: toxic_sentence
dtype: string
- name: neutral_sentence
dtype: string
splits:
- name: en
num_bytes: 47435
num_examples: 400
- name: ru
num_bytes: 89453
num_examples: 400
- name: uk
num_bytes: 78106
num_examples: 400
- name: de
num_bytes: 86818
num_examples: 400
- name: es
num_bytes: 56868
num_examples: 400
- name: am
num_bytes: 133489
num_examples: 400
- name: zh
num_bytes: 79089
num_examples: 400
- name: ar
num_bytes: 85237
num_examples: 400
- name: hi
num_bytes: 107518
num_examples: 400
download_size: 489288
dataset_size: 764013
configs:
- config_name: default
data_files:
- split: en
path: data/en-*
- split: ru
path: data/ru-*
- split: uk
path: data/uk-*
- split: de
path: data/de-*
- split: es
path: data/es-*
- split: am
path: data/am-*
- split: zh
path: data/zh-*
- split: ar
path: data/ar-*
- split: hi
path: data/hi-*
---
**MultiParaDetox**
This is a multilingual parallel dataset for text detoxification prepared for the [CLEF TextDetox 2024](https://pan.webis.de/clef24/pan24-web/text-detoxification.html) shared task.
For each of the 9 languages, we collected 1k pairs of toxic<->detoxified instances split into two parts: dev (400 pairs) and test (600 pairs).
**!!! April 23rd update: We are releasing the parallel dev set! The test part for the final phase of the competition is available [here](https://huggingface.co/datasets/textdetox/multilingual_paradetox_test) !!!**
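A minimal sketch of loading the dev splits with the `datasets` library. The repository ID `textdetox/multilingual_paradetox` is an assumption here; substitute the ID of this dataset card if it differs:

```python
from datasets import load_dataset

# Load all language splits (repository ID assumed to be
# "textdetox/multilingual_paradetox").
dataset = load_dataset("textdetox/multilingual_paradetox")

# Each language is exposed as its own split with the parallel columns
# "toxic_sentence" and "neutral_sentence".
for lang in ["en", "ru", "uk", "de", "es", "am", "zh", "ar", "hi"]:
    example = dataset[lang][0]
    print(lang, example["toxic_sentence"], "->", example["neutral_sentence"])
```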
The original toxic sentences were collected from the following sources:
* English: [Jigsaw](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge), [Unitary AI Toxicity Dataset](https://github.com/unitaryai/detoxify)
* Russian: [Russian Language Toxic Comments](https://www.kaggle.com/datasets/blackmoon/russian-language-toxic-comments), [Toxic Russian Comments](https://www.kaggle.com/datasets/alexandersemiletov/toxic-russian-comments)
* Ukrainian: [Ukrainian Twitter texts](https://github.com/saganoren/ukr-twi-corpus)
* Spanish: [Detecting and Monitoring Hate Speech in Twitter](https://www.mdpi.com/1424-8220/19/21/4654), [Detoxis](https://rdcu.be/dwhxH), [RoBERTuito: a pre-trained language model for social media text in Spanish](https://aclanthology.org/2022.lrec-1.785/)
* German: [GermEval 2018, 2021](https://aclanthology.org/2021.germeval-1.1/)
* Amharic: [Amharic Hate Speech](https://github.com/uhh-lt/AmharicHateSpeech)
* Arabic: [OSACT4](https://edinburghnlp.inf.ed.ac.uk/workshops/OSACT4/)
* Hindi: [Hostility Detection Dataset in Hindi](https://competitions.codalab.org/competitions/26654#learn_the_details-dataset), [Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages](https://dl.acm.org/doi/pdf/10.1145/3368567.3368584?download=true)