---
license: openrail++
task_categories:
  - text-generation
language:
  - ru
size_categories:
  - 1K<n<10K
---

ParaDetox: Text Detoxification with Parallel Data (Russian)

This repository contains the Russian ParaDetox dataset -- the first parallel corpus for the detoxification task in Russian -- as well as models for the detoxification of Russian texts.

📰 Updates

[2024] The Multilingual TextDetox shared task at CLEF 2024 covers 9 languages!

[2022] The first work on ParaDetox for English was presented at ACL 2022!

ParaDetox Collection Pipeline

The ParaDetox dataset was collected via the Yandex.Toloka crowdsourcing platform in three steps:

  • Task 1: Generation of Paraphrases: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
  • Task 2: Content Preservation Check: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
  • Task 3: Toxicity Check: Finally, we check if the workers succeeded in removing toxicity.

All these steps ensure high data quality and automate the collection process. For more details, please refer to the original paper.
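The three-step filtering above can be sketched as follows. This is a toy illustration, assuming simple majority voting among annotators; the actual aggregation used on Toloka may differ.

```python
# Toy sketch of the ParaDetox filtering pipeline (assumed majority voting).

def majority(votes):
    """True if most annotators answered 'yes'."""
    return sum(votes) > len(votes) / 2

def collect_pair(toxic_text, paraphrase, content_votes, toxicity_votes):
    """Keep the (toxic, detoxified) pair only if it passes both checks.

    content_votes:  Task 2 answers -- does the paraphrase keep the meaning?
    toxicity_votes: Task 3 answers -- is the paraphrase non-toxic?
    """
    if majority(content_votes) and majority(toxicity_votes):
        return (toxic_text, paraphrase)
    return None  # pair is discarded

# A pair survives only when both checks pass:
kept = collect_pair("toxic sentence", "neutral paraphrase",
                    content_votes=[True, True, False],
                    toxicity_votes=[True, True, True])

# A paraphrase that annotators still judge toxic is dropped:
dropped = collect_pair("toxic sentence", "still rude paraphrase",
                       content_votes=[True, True, True],
                       toxicity_votes=[False, False, True])
```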

Detoxification model

The then state-of-the-art model for the detoxification task in Russian -- a ruT5 (base) model fine-tuned on the Russian ParaDetox dataset -- is released in a HuggingFace🤗 repository here.

You can also check out our demo and Telegram bot.

Citation

@article{dementievarusse,
  title={RUSSE-2022: Findings of the First Russian Detoxification Shared Task Based on Parallel Corpora},
  author={Dementieva, Daryna and Logacheva, Varvara and Nikishina, Irina and Fenogenova, Alena and Dale, David and Krotova, Irina and Semenov, Nikita and Shavrina, Tatiana and Panchenko, Alexander}
}

and

@inproceedings{dementieva-etal-2024-multiparadetox,
    title = "{M}ulti{P}ara{D}etox: Extending Text Detoxification with Parallel Data to New Languages",
    author = "Dementieva, Daryna  and
      Babakov, Nikolay  and
      Panchenko, Alexander",
    editor = "Duh, Kevin  and
      Gomez, Helena  and
      Bethard, Steven",
    booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.naacl-short.12",
    pages = "124--140",
    abstract = "Text detoxification is a textual style transfer (TST) task where a text is paraphrased from a toxic surface form, e.g. featuring rude words, to the neutral register. Recently, text detoxification methods found their applications in various task such as detoxification of Large Language Models (LLMs) (Leong et al., 2023; He et al., 2024; Tang et al., 2023) and toxic speech combating in social networks (Deng et al., 2023; Mun et al., 2023; Agarwal et al., 2023). All these applications are extremely important to ensure safe communication in modern digital worlds. However, the previous approaches for parallel text detoxification corpora collection{---}ParaDetox (Logacheva et al., 2022) and APPADIA (Atwell et al., 2022){---}were explored only in monolingual setup. In this work, we aim to extend ParaDetox pipeline to multiple languages presenting MultiParaDetox to automate parallel detoxification corpus collection for potentially any language. Then, we experiment with different text detoxification models{---}from unsupervised baselines to LLMs and fine-tuned models on the presented parallel corpora{---}showing the great benefit of parallel corpus presence to obtain state-of-the-art text detoxification models for any language.",
}

Contacts

If you find an issue, please report it via GitHub Issues.

For any questions, please contact: Daryna Dementieva ([email protected])