Tasks: Text Generation
Sub-tasks: language-modeling
Languages: Italian
Size: 100M<n<1B
Update README.md
README.md CHANGED
@@ -10,16 +10,7 @@ license:
 multilinguality:
 - monolingual
 size_categories:
-  tiny:
-  - 1M<n<10M
-  small:
-  - 10M<n<100M
-  medium:
-  - 10M<n<100M
-  large:
-  - 10M<n<100M
-  full:
-  - 100M<n<1B
+- 100M<n<1B
 source_datasets:
 - extended
 task_categories:
@@ -58,7 +49,8 @@ pretty_name: mC4_it
 ## Dataset Description
 
 - **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4)
-- **Paper:** [
+- **Paper:** [ACL Anthology](https://aclanthology.org/2024.lrec-main.823/)
+- **Preprint:** [Arxiv](https://arxiv.org/abs/2203.03759)
 
 ### Dataset Summary
 
@@ -172,13 +164,24 @@ AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you
 If you use this dataset in your work, please cite us and the original mC4 authors as:
 
 ```
-@
-    title={
-    author=
-
-
-
-
+@inproceedings{sarti-nissim-2024-it5-text,
+    title = "{IT}5: Text-to-text Pretraining for {I}talian Language Understanding and Generation",
+    author = "Sarti, Gabriele  and
+      Nissim, Malvina",
+    editor = "Calzolari, Nicoletta  and
+      Kan, Min-Yen  and
+      Hoste, Veronique  and
+      Lenci, Alessandro  and
+      Sakti, Sakriani  and
+      Xue, Nianwen",
+    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
+    month = may,
+    year = "2024",
+    address = "Torino, Italy",
+    publisher = "ELRA and ICCL",
+    url = "https://aclanthology.org/2024.lrec-main.823",
+    pages = "9422--9433",
+    abstract = "We introduce IT5, the first family of encoder-decoder transformer models pretrained specifically on Italian. We document and perform a thorough cleaning procedure for a large Italian corpus and use it to pretrain four IT5 model sizes. We then introduce the ItaGen benchmark, which includes a broad range of natural language understanding and generation tasks for Italian, and use it to evaluate the performance of IT5 models and multilingual baselines. We find monolingual IT5 models to provide the best scale-to-performance ratio across tested models, consistently outperforming their multilingual counterparts and setting a new state-of-the-art for Italian language generation.",
 }
 
 @inproceedings{xue-etal-2021-mt5,
@@ -200,8 +203,4 @@ If you use this dataset in your work, please cite us and the original mC4 author
     doi = "10.18653/v1/2021.naacl-main.41",
     pages = "483--498",
 }
-```
-
-### Contributions
-
-Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
+```
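For context on the `size_categories` change in the first hunk: the card previously listed one size bracket per configuration (tiny through full) and now keeps a single overall bracket (100M<n<1B). Below is a minimal sketch of loading one of those configurations with the Hugging Face `datasets` library; the repository id `gsarti/clean_mc4_it` and the exact configuration names are assumptions, since this commit only shows the pretty name mC4_it.

```python
from datasets import load_dataset

# Assumed repository id -- the diff above only shows the pretty name "mC4_it".
REPO_ID = "gsarti/clean_mc4_it"

# Configuration names assumed from the old per-size metadata (tiny/small/medium/large/full).
# Streaming avoids materialising the whole split on disk before iterating.
dataset = load_dataset(REPO_ID, "small", split="train", streaming=True)

# Assuming the usual mC4 record layout ("text", "timestamp", "url"),
# print the start of the first cleaned Italian document.
first_doc = next(iter(dataset))
print(first_doc["text"][:200])
```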