Dataset: europarl-bilingual

Modalities: Text
Formats: parquet
Libraries: Datasets, Dask
License: unknown

Commit 546b667 (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.5.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.5.0

.gitattributes ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

README.md ADDED
---
annotations_creators:
- found
language_creators:
- found
languages:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hu
- it
- lt
- lv
- nl
- pl
- pt
- ro
- sk
- sl
- sv
licenses:
- unknown
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- other
task_ids:
- machine-translation
---

# Dataset Card for europarl-bilingual

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Statmt](http://www.statmt.org/europarl/)
- **Repository:** [OPUS Europarl](https://opus.nlpl.eu/Europarl.php)
- **Paper:** [Parallel Data, Tools and Interfaces in OPUS](https://www.aclweb.org/anthology/L12-1246/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

A parallel corpus extracted from the European Parliament web site by Philipp Koehn (University of Edinburgh). The main intended use is to aid statistical machine translation research.

To load a language pair that is not among the preconfigured ones, simply specify the two language codes when loading the dataset.
The valid pairs are listed on the homepage linked in the Dataset Description: https://opus.nlpl.eu/Europarl.php
For example:

`dataset = load_dataset("europarl_bilingual", lang1="fi", lang2="fr")`

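A minimal end-to-end sketch, assuming the `datasets` library is installed (the printed values are illustrative):

```
from datasets import load_dataset

# Load the Finnish-French pair; only a "train" split is provided.
dataset = load_dataset("europarl_bilingual", lang1="fi", lang2="fr", split="train")

# Each example is one aligned sentence pair under the "translation" key.
example = dataset[0]
print(example["translation"]["fi"])
print(example["translation"]["fr"])
```
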
### Supported Tasks and Leaderboards

Tasks: Machine Translation, Cross-Lingual Word Embeddings (CLWE) Alignment

### Languages

- 21 languages, 211 bitexts
- total number of files: 207,775
- total number of tokens: 759.05M
- total number of sentence fragments: 30.32M

Every pair of the following languages is available (see the enumeration sketch after the list):
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hu
- it
- lt
- lv
- nl
- pl
- pt
- ro
- sk
- sl
- sv

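Since every unordered pair of these 21 codes forms a configuration, the preset pairs can be enumerated programmatically; a small sketch (the language list is copied from above):

```
from itertools import combinations

LANGUAGES = ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "hu",
             "it", "lt", "lv", "nl", "pl", "pt", "ro", "sk", "sl", "sv"]

# Every unordered pair: ("bg", "cs"), ("bg", "da"), ..., ("sl", "sv")
pairs = list(combinations(LANGUAGES, 2))
print(len(pairs))  # 210
```
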
## Dataset Structure

### Data Instances

Here is an example from the en-fr pair:
```
{
  'translation': {
    'en': 'Resumption of the session',
    'fr': 'Reprise de la session'
  }
}
```

### Data Fields

- `translation`: a dictionary containing two strings, each keyed by the language code of the corresponding sentence (see the schema check below).

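The schema can be verified at runtime through the dataset's `features` attribute; a small sketch (the commented output is illustrative):

```
from datasets import load_dataset

dataset = load_dataset("europarl_bilingual", lang1="en", lang2="fr", split="train")
print(dataset.features)
# {'translation': Translation(languages=('en', 'fr'), id=None)}
```
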
### Data Splits

- `train`: the only split provided. The authors did not partition the examples into `train`, `dev`, and `test` sets, so users who need held-out data must create their own split, as sketched below.

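One way to carve out held-out data, assuming the `datasets` library's `train_test_split` method fits your workflow:

```
from datasets import load_dataset

dataset = load_dataset("europarl_bilingual", lang1="fi", lang2="fr", split="train")

# Hold out 10% of the sentence pairs; fix the seed for reproducibility.
splits = dataset.train_test_split(test_size=0.1, seed=42)
train_set, test_set = splits["train"], splits["test"]
```
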
## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

The dataset carries the same license as its original sources. Please check the licensing information for the source data given at http://opus.nlpl.eu/Europarl-v8.php.

### Citation Information

```
@InProceedings{TIEDEMANN12.463,
  author = {Jörg Tiedemann},
  title = {Parallel Data, Tools and Interfaces in OPUS},
  booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)},
  year = {2012},
  month = {may},
  date = {23-25},
  address = {Istanbul, Turkey},
  editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
  isbn = {978-2-9517408-7-7},
  language = {english}
}
```

### Contributions

Thanks to [@lucadiliello](https://github.com/lucadiliello) for adding this dataset.

dataset_infos.json ADDED
The diff for this file is too large to render.

dummy/bg-cs/8.0.0/dummy_data.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:34e46cc5b4298daa8371dd2232b36fe9c5e40a561f0fd5e1617131b0980017c6
size 9173

dummy/bg-da/8.0.0/dummy_data.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:0c1ebde2c139c1fe0b490f6dd7945008c8c991c27c42b1fd27fdfc9cb58e5ec1
size 9190

dummy/bg-de/8.0.0/dummy_data.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:4b331d9c977d8c6bc5f20e276f24f13712b22845400e98d6be1f66099d27268d
size 8452

dummy/bg-el/8.0.0/dummy_data.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:0d31175e7521edd6ad2e782c85569f4ae9714ce8f095818242d5a18d4adf2f31
size 9228

dummy/bg-en/8.0.0/dummy_data.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ba372eb3cd52ba1523075aa7bffc85ead043099cfc45c334ac76a7923b4d2db3
size 9315

europarl_bilingual.py ADDED
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import, division, print_function

import os
import xml.etree.ElementTree as ET

import datasets


# Find for instance the citation on arxiv or on the dataset repo/website
_CITATION = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf"

# You can copy an official description
_DESCRIPTION = """\
A parallel corpus extracted from the European Parliament web site by Philipp Koehn (University of Edinburgh). The main intended use is to aid statistical machine translation research.
"""

# Add a link to an official homepage for the dataset here
_HOMEPAGE = "https://opus.nlpl.eu/Europarl.php"

# Add the license for the dataset here if you can find it
_LICENSE = """\
The dataset carries the same license as its original sources.
Please check the licensing information for the source data given at
http://opus.nlpl.eu/Europarl-v8.php
"""

# The HuggingFace datasets library does not host the datasets but only points to the original files.
# This can be an arbitrary nested dict/list of URLs (see below in the `_split_generators` method).
LANGUAGES = [
    "bg",
    "cs",
    "da",
    "de",
    "el",
    "en",
    "es",
    "et",
    "fi",
    "fr",
    "hu",
    "it",
    "lt",
    "lv",
    "nl",
    "pl",
    "pt",
    "ro",
    "sk",
    "sl",
    "sv",
]

# Every unordered pair of languages; equivalent to itertools.combinations(LANGUAGES, 2).
ALL_PAIRS = []
for i in range(len(LANGUAGES)):
    for j in range(i + 1, len(LANGUAGES)):
        ALL_PAIRS.append((LANGUAGES[i], LANGUAGES[j]))

_VERSION = "8.0.0"
_BASE_URL_DATASET = "https://opus.nlpl.eu/download.php?f=Europarl/v8/raw/{}.zip"
_BASE_URL_RELATIONS = "https://opus.nlpl.eu/download.php?f=Europarl/v8/xml/{}-{}.xml.gz"


class EuroparlBilingualConfig(datasets.BuilderConfig):
    """Slightly custom config that requires source and target languages."""

    def __init__(self, *args, lang1=None, lang2=None, **kwargs):
        super().__init__(
            *args,
            name=f"{lang1}-{lang2}",
            **kwargs,
        )
        self.lang1 = lang1
        self.lang2 = lang2

    def _lang_pair(self):
        return (self.lang1, self.lang2)

    def _is_valid(self):
        return self._lang_pair() in ALL_PAIRS


class EuroparlBilingual(datasets.GeneratorBasedBuilder):
    """Europarl contains aligned sentences in many pairs of European languages."""

    VERSION = datasets.Version(_VERSION)

    BUILDER_CONFIG_CLASS = EuroparlBilingualConfig
    # Only the first few pairs are pre-registered; any other valid pair can be
    # requested at load time via the `lang1` and `lang2` keyword arguments.
    BUILDER_CONFIGS = [
        EuroparlBilingualConfig(lang1=lang1, lang2=lang2, version=datasets.Version(_VERSION))
        for lang1, lang2 in ALL_PAIRS[:5]
    ]

    def _info(self):
        """Specifies the datasets.DatasetInfo object, which contains the information and typings for the dataset."""
        features = datasets.Features(
            {
                "translation": datasets.Translation(languages=(self.config.lang1, self.config.lang2)),
            }
        )

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""

        if not self.config._is_valid():
            raise ValueError(f"{self.config._lang_pair()} is not a supported language pair. Choose among: {ALL_PAIRS}")

        # download data files
        path_datafile_1 = dl_manager.download_and_extract(_BASE_URL_DATASET.format(self.config.lang1))
        path_datafile_2 = dl_manager.download_and_extract(_BASE_URL_DATASET.format(self.config.lang2))

        # download relations file
        path_relation_file = dl_manager.download_and_extract(
            _BASE_URL_RELATIONS.format(self.config.lang1, self.config.lang2)
        )

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "path_datafiles": (path_datafile_1, path_datafile_2),
                    "path_relation_file": path_relation_file,
                },
            )
        ]

    @staticmethod
    def _parse_xml_datafile(filepath):
        """Parse a language file and return a Dict[sentence_id, text],
        e.g. {"1": "Resumption of the session", ...}.
        """
        document = ET.parse(filepath).getroot()
        return {tag.attrib["id"]: tag.text for tag in document.iter("s")}

    def _generate_examples(self, path_datafiles, path_relation_file):
        """Yields examples.
        The useful attributes are given in parentheses.

        Language file XML:
        - document
          - CHAPTER ('ID')
            - P ('id')
              - s ('id')

        Relation file XML:
        - cesAlign
          - linkGrp ('fromDoc', 'toDoc')
            - link ('xtargets': '1;1')
        """

        # incremental id for the yielded examples
        _id = 0
        relations_root = ET.parse(path_relation_file).getroot()

        for linkGroup in relations_root:
            # retrieve the files and strip the .gz extension, because the 'datasets'
            # library has already decompressed them
            from_doc_dict = EuroparlBilingual._parse_xml_datafile(
                os.path.splitext(os.path.join(path_datafiles[0], "Europarl", "raw", linkGroup.attrib["fromDoc"]))[0]
            )

            to_doc_dict = EuroparlBilingual._parse_xml_datafile(
                os.path.splitext(os.path.join(path_datafiles[1], "Europarl", "raw", linkGroup.attrib["toDoc"]))[0]
            )

            for link in linkGroup:
                from_sentence_ids, to_sentence_ids = link.attrib["xtargets"].split(";")
                from_sentence_ids = [i for i in from_sentence_ids.split(" ") if i]
                to_sentence_ids = [i for i in to_sentence_ids.split(" ") if i]

                if not len(from_sentence_ids) or not len(to_sentence_ids):
                    continue

                # in rare cases there is no entry for some sentence ids
                sentence_lang1 = " ".join(from_doc_dict[i] for i in from_sentence_ids if i in from_doc_dict)
                sentence_lang2 = " ".join(to_doc_dict[i] for i in to_sentence_ids if i in to_doc_dict)

                yield _id, {"translation": {self.config.lang1: sentence_lang1, self.config.lang2: sentence_lang2}}
                _id += 1
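
A minimal sketch of exercising this builder directly from the local script file rather than from the Hub (the local path is an assumption; any valid language pair works):

```
from datasets import load_dataset

# lang1/lang2 are forwarded to EuroparlBilingualConfig as config kwargs;
# an invalid pair raises the ValueError from _split_generators.
dataset = load_dataset("./europarl_bilingual.py", lang1="bg", lang2="cs", split="train")
print(dataset[0])
```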