---
task_categories:
- text-generation
language:
- ru
size_categories:
- 1K<n<10K
---

# ParaDetox: Text Detoxification with Parallel Data (Russian)

This repository contains information about the Russian ParaDetox dataset -- the first parallel corpus for the detoxification task -- as well as models for the detoxification of Russian texts.
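As a parallel corpus, each record pairs a toxic source sentence with a crowdsourced neutral paraphrase. The sketch below illustrates that structure; the field names (`toxic`, `neutral`) and the English placeholder sentences are assumptions for illustration only — check the dataset viewer for the actual column names and the real (Russian) data:

```python
# Illustrative sketch of parallel detoxification records.
# Field names and example sentences are NOT taken from this dataset card.
from dataclasses import dataclass


@dataclass
class DetoxPair:
    toxic: str    # original toxic sentence (model input)
    neutral: str  # neutral paraphrase (training target)


pairs = [
    DetoxPair(toxic="this is damn nonsense", neutral="this is nonsense"),
    DetoxPair(toxic="shut up already", neutral="please stop talking"),
]

# A parallel corpus enables supervised seq2seq detoxification:
# the toxic side is the source, the neutral side is the reference.
inputs = [p.toxic for p in pairs]
targets = [p.neutral for p in pairs]
assert len(inputs) == len(targets)
```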

📰 **Updates**

**[2024]** The [Multilingual TextDetox](https://huggingface.co/textdetox) shared task at CLEF 2024 covers 9 languages!

**[2022]** The first work on [ParaDetox](https://huggingface.co/datasets/s-nlp/paradetox) for English was presented at ACL 2022!

## ParaDetox Collection Pipeline

The ParaDetox dataset was collected via the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform in three steps, all designed to ensure high data quality and to automate the collection process. For the details of each step, please refer to the original paper.

## Detoxification model

The at-that-time SOTA for the detoxification task in Russian -- a ruT5 (base) model trained on the Russian ParaDetox dataset -- is released in the HuggingFace🤗 repository [here](https://huggingface.co/s-nlp/ruT5-base-detox).

You can also check out our [demo](https://detoxifier.nlp.zhores.net/junction/) and Telegram [bot](https://t.me/rudetoxifierbot).
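The released checkpoint can be used as an ordinary 🤗 `transformers` seq2seq model. A minimal, hedged sketch (the model id matches the link above; the generation setting `max_new_tokens=128` is an illustrative assumption, and loading is wrapped in a function so nothing is downloaded at import time):

```python
def detoxify(text: str, model_name: str = "s-nlp/ruT5-base-detox") -> str:
    """Rewrite a toxic Russian sentence into a neutral paraphrase.

    Sketch only: the first call downloads the model weights from the
    HuggingFace Hub; max_new_tokens is an illustrative choice.
    """
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```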

```
@inproceedings{dementieva-etal-2024-multiparadetox,
    title = "{M}ulti{P}ara{D}etox: Extending Text Detoxification with Parallel Data to New Languages",
    author = "Dementieva, Daryna  and
      Babakov, Nikolay  and
      Panchenko, Alexander",
    editor = "Duh, Kevin  and
      Gomez, Helena  and
      Bethard, Steven",
    booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.naacl-short.12",
    pages = "124--140",
    abstract = "Text detoxification is a textual style transfer (TST) task where a text is paraphrased from a toxic surface form, e.g. featuring rude words, to the neutral register. Recently, text detoxification methods found their applications in various task such as detoxification of Large Language Models (LLMs) (Leong et al., 2023; He et al., 2024; Tang et al., 2023) and toxic speech combating in social networks (Deng et al., 2023; Mun et al., 2023; Agarwal et al., 2023). All these applications are extremely important to ensure safe communication in modern digital worlds. However, the previous approaches for parallel text detoxification corpora collection{---}ParaDetox (Logacheva et al., 2022) and APPADIA (Atwell et al., 2022){---}were explored only in monolingual setup. In this work, we aim to extend ParaDetox pipeline to multiple languages presenting MultiParaDetox to automate parallel detoxification corpus collection for potentially any language. Then, we experiment with different text detoxification models{---}from unsupervised baselines to LLMs and fine-tuned models on the presented parallel corpora{---}showing the great benefit of parallel corpus presence to obtain state-of-the-art text detoxification models for any language.",
}
```

## Contacts

If you find an issue, do not hesitate to report it via [GitHub Issues](https://github.com/s-nlp/russe_detox_2022).