davidstap committed on
Commit
05a2212
1 Parent(s): 6a710b9

add script and README

Files changed (2)
  1. README.md +50 -0
  2. sranantongo.py +103 -0
README.md ADDED
@@ -0,0 +1,50 @@
+ ---
+ language:
+ - srn
+ - nl
+ multilinguality:
+ - translation
+ pretty_name: sranantongo
+ task_categories:
+ - translation
+ ---
+ ## Dataset Description
+ Dataset for Sranantongo, an English-based creole language spoken as a lingua franca in Suriname. The dataset includes monolingual data as well as Sranantongo-Dutch parallel data.
+
+ The following configurations are available:
+ * `srn`
+ Monolingual data scraped from SIL. There is a `train` split (6570 sentences) available.
+ * `srn-nl_jw`
+ Parallel srn-nl data originating from Jehovah's Witnesses. There are `train` (299085 sentences), `validation` (256 sentences), and `test` (256 sentences) splits available.
+ * `srn-nl_other`
+ Parallel srn-nl data originating from Z-Library, Naks Sranan Facebook, and the Dutch DOJ. There are `train` (3610 sentences), `validation` (256 sentences), and `test` (256 sentences) splits available.
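+
+ For example, a single split of one configuration can be loaded directly through the standard `split` argument of `load_dataset` (a minimal sketch; only the repository and configuration names above are taken from this dataset, the variable name is illustrative):
+ ```python
+ from datasets import load_dataset
+
+ # Load only the validation split of the parallel srn-nl_jw configuration
+ jw_val = load_dataset("davidstap/sranantongo", "srn-nl_jw", split="validation", trust_remote_code=True)
+ print(len(jw_val))  # 256 sentence pairs
+ ```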
+
+ For more details, see the accompanying paper: https://arxiv.org/abs/2212.06383
+
+ ## Using the dataset
+ Example of loading the monolingual data:
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("davidstap/sranantongo", "srn", trust_remote_code=True)
+ ```
+
+ Example of loading the parallel JW data:
+ ```python
+ dataset = load_dataset("davidstap/sranantongo", "srn-nl_jw", trust_remote_code=True)
+ ```
+
+ Example of loading the parallel other data:
+ ```python
+ dataset = load_dataset("davidstap/sranantongo", "srn-nl_other", trust_remote_code=True)
+ ```
+
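+ Each configuration exposes a `srn` text column, and the parallel configurations additionally expose a `nl` column (as defined in `sranantongo.py`). A minimal sketch of inspecting a loaded sentence pair:
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("davidstap/sranantongo", "srn-nl_jw", trust_remote_code=True)
+
+ # Each example is a dict with a Sranantongo ("srn") and a Dutch ("nl") sentence
+ example = dataset["train"][0]
+ print(example["srn"])
+ print(example["nl"])
+ ```
+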
+ ### Citation Information
+
+ ```
+ @article{zwennicker2022towards,
+   title={Towards a general purpose machine translation system for Sranantongo},
+   author={Zwennicker, Just and Stap, David},
+   journal={arXiv preprint arXiv:2212.06383},
+   year={2022}
+ }
+ ```
sranantongo.py ADDED
@@ -0,0 +1,103 @@
+ import datasets
+
+
+ _DESCRIPTION = """\
+ This dataset consists of monolingual (Sranantongo) and parallel (Sranantongo - Dutch) data.
+ """
+
+ _CITATION = """\
+ @article{zwennicker2022towards,
+   title={Towards a general purpose machine translation system for Sranantongo},
+   author={Zwennicker, Just and Stap, David},
+   journal={arXiv preprint arXiv:2212.06383},
+   year={2022}
+ }
+ """
+
+ _DATA_URL = "data/sranantongo.tar"
+
+ # Maps each configuration name to the CSV file inside the archive that holds each split.
+ _LANGUAGE2FILES = {
+     "srn": {"train": "srn_mono_SIL.csv", "validation": None, "test": None},
+     "srn-nl_jw": {split: f"srn-nl_JW_{split}.csv" for split in ["train", "validation", "test"]},
+     "srn-nl_other": {split: f"srn-nl_other_{split}.csv" for split in ["train", "validation", "test"]},
+ }
+
+
+ class SranantongoConfig(datasets.BuilderConfig):
+     """BuilderConfig for the Sranantongo dataset."""
+
+     def __init__(self, name: str, **kwargs):
+         # Parallel configurations are prefixed with "srn-nl"; the remaining
+         # configuration ("srn") is monolingual Sranantongo.
+         if "srn-nl" in name:
+             description = "Parallel sentences in `Sranantongo` and `Dutch`."
+         else:
+             description = "Monolingual sentences in `Sranantongo`."
+         super(SranantongoConfig, self).__init__(name=name, description=description, **kwargs)
+
+
+ class Sranantongo(datasets.GeneratorBasedBuilder):
+     """Sranantongo data from https://arxiv.org/abs/2212.06383"""
+
+     BUILDER_CONFIGS = [
+         SranantongoConfig(name=name, version=datasets.Version("1.0.0", ""))
+         for name in _LANGUAGE2FILES.keys()
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "srn": datasets.Value("string"),
+                     **(
+                         {"nl": datasets.Value("string")}
+                         if "srn-nl" in self.config.name else {}
+                     )
+                 }
+             ),
+             homepage="https://arxiv.org/abs/2212.06383",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         archive = dl_manager.download(_DATA_URL)
+
+         # Always generate the train split. A fresh iter_archive iterator is created
+         # per split, since a single iterator would be exhausted after the first split.
+         generators = [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"files": dl_manager.iter_archive(archive), "split": "train"},
+             )
+         ]
+
+         # If the dataset configuration is for parallel data, add validation and test splits
+         if "srn-nl" in self.config.name:
+             for split in [datasets.Split.VALIDATION, datasets.Split.TEST]:
+                 generators.append(
+                     datasets.SplitGenerator(
+                         name=split,
+                         gen_kwargs={"files": dl_manager.iter_archive(archive), "split": str(split)},
+                     )
+                 )
+
+         return generators
+
+     def _generate_examples(self, split, files):
+         """Returns examples as raw text."""
+
+         if "srn-nl" in self.config.name:
+             return self._generate_examples_parallel(split=split, files=files)
+         else:
+             return self._generate_examples_mono(split=split, files=files)
+
+     def _generate_examples_mono(self, split, files):
+         for path, file in files:
+             if path == _LANGUAGE2FILES[self.config.name][split]:
+                 data = file.read().decode("utf-8").split("\n")
+
+                 for idx, sentence in enumerate(data):
+                     if not sentence:
+                         continue  # skip empty lines, e.g. a trailing newline
+                     yield idx, {"srn": sentence}
+
+     def _generate_examples_parallel(self, split, files):
+         for path, file in files:
+             if path == _LANGUAGE2FILES[self.config.name][split]:
+                 data = file.read().decode("utf-8").split("\n")
+
+                 for idx, sentence in enumerate(data):
+                     if not sentence:
+                         continue  # skip empty lines, e.g. a trailing newline
+                     # Rows are pipe-delimited: Dutch sentence | Sranantongo sentence
+                     nl, srn = sentence.split("|")
+                     yield idx, {"nl": nl, "srn": srn}