Commit 9cdab01 (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,168 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - st
+ licenses:
+ - other-Creative Commons Attribution 2-5 South Africa License
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - structure-prediction
+ task_ids:
+ - named-entity-recognition
+ ---
+
+ # Dataset Card for Sesotho NER Corpus
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Sesotho NER Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/334)
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:** [Martin Puttkammer](mailto:[email protected])
+
+ ### Dataset Summary
+
+ The Sesotho NER Corpus is a Sesotho dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain, crawled from gov.za websites, and was created to support named entity recognition (NER) for the Sesotho language. The dataset follows the CoNLL shared-task annotation standard.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The language supported is Sesotho.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A data point is one sentence. In the raw corpus, sentences are separated by empty lines and each line holds a tab-separated token and NER tag. A loaded example looks like this:
+ ```
+ {'id': '0',
+  'ner_tags': [0, 0, 0, 0, 0],
+  'tokens': ['Morero', 'wa', 'weposaete', 'ya', 'Ditshebeletso']
+ }
+ ```
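+
+ The corpus can be loaded with the `datasets` library; a minimal sketch, assuming the dataset is available to your installed version of `datasets` under the identifier `sesotho_ner_corpus`:
+ ```
+ from datasets import load_dataset
+
+ # Only a single "train" split is provided.
+ ds = load_dataset("sesotho_ner_corpus")
+ print(ds["train"][0])
+ # e.g. {'id': '0', 'ner_tags': [0, 0, 0, 0, 0], 'tokens': ['Morero', 'wa', 'weposaete', 'ya', 'Ditshebeletso']}
+ ```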
+
+ ### Data Fields
+
+ - `id`: id of the sample
+ - `tokens`: the tokens of the example text
+ - `ner_tags`: the NER tags of each token
+
+ The NER tags correspond to this list:
+ ```
+ "OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"
+ ```
+ The NER tags have the same format as in the CoNLL shared task: a B denotes the first token of a phrase and an I any non-initial token. There are four types of phrases: person names (PERS), organizations (ORG), locations (LOC) and miscellaneous names (MISC). The `OUT` tag is used for tokens that are not part of any named entity.
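+
+ Since `ner_tags` is stored as integer class indices, the index-to-label mapping can be read back from the `ClassLabel` feature. A minimal sketch, again assuming the identifier `sesotho_ner_corpus`:
+ ```
+ from datasets import load_dataset
+
+ # Recover the label names attached to the ner_tags feature and decode one example.
+ ds = load_dataset("sesotho_ner_corpus")
+ labels = ds["train"].features["ner_tags"].feature.names
+ print([labels[i] for i in ds["train"][0]["ner_tags"]])
+ # e.g. ['OUT', 'OUT', 'OUT', 'OUT', 'OUT']
+ ```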
+
+ ### Data Splits
+
+ The data was not split; it is distributed as a single train split of 9,472 sentences.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The data was created to help introduce NLP resources for a new language, Sesotho.
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The data is based on documents from the South African government domain and was crawled from gov.za websites.
+
+ #### Who are the source language producers?
+
+ The data was produced by the writers of South African government websites (gov.za).
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ The data was annotated during the NCHLT text resource development project.
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The annotated datasets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).
+
+ See: [more information](http://www.nwu.ac.za/ctext)
+
+ ### Licensing Information
+
+ The data is released under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode).
+
+ ### Citation Information
+
+ ```
+ @inproceedings{sesotho_ner_corpus,
+   author    = {M. Setaka and Roald Eiselen},
+   title     = {NCHLT Sesotho Named Entity Annotated Corpus},
+   booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
+   year      = {2016},
+   url       = {https://repo.sadilar.org/handle/20.500.12185/334},
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"sesotho_ner_corpus": {"description": "Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags.\n", "citation": "@inproceedings{sesotho_ner_corpus,\n author = {M. Setaka and \n Roald Eiselen},\n title = {NCHLT Sesotho Named Entity Annotated Corpus},\n booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portoro\u017e, Slovenia.},\n year = {2016},\n url = {https://repo.sadilar.org/handle/20.500.12185/334},\n}\n", "homepage": "https://repo.sadilar.org/handle/20.500.12185/334", "license": "Creative Commons Attribution 2.5 South Africa License", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "sesotho_ner_corpus", "config_name": "sesotho_ner_corpus", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4502576, "num_examples": 9472, "dataset_name": "sesotho_ner_corpus"}}, "download_checksums": {"https://repo.sadilar.org/bitstream/handle/20.500.12185/334/nchlt_sesotho_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y": {"num_bytes": 30421109, "checksum": "986305f6acacc288e2eea35b5e0fc1d53102738c7b8f39cf35948d68d2f91ce5"}}, "download_size": 30421109, "post_processing_size": null, "dataset_size": 4502576, "size_in_bytes": 34923685}}
dummy/sesotho_ner_corpus/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7cebe24bea16f65fb99eb8c17a5e26c6b86aa4cee0d32450f7f36f5e6a55086a
+ size 1664
sesotho_ner_corpus.py ADDED
@@ -0,0 +1,141 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Named entity annotated data from the NCHLT Text Resource Development: Phase II Project for Sesotho."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import logging
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{sesotho_ner_corpus,
+   author    = {M. Setaka and Roald Eiselen},
+   title     = {NCHLT Sesotho Named Entity Annotated Corpus},
+   booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
+   year      = {2016},
+   url       = {https://repo.sadilar.org/handle/20.500.12185/334},
+ }
+ """
+
+ _DESCRIPTION = """\
+ Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags.
+ """
+
+ _HOMEPAGE = "https://repo.sadilar.org/handle/20.500.12185/334"
+
+ _LICENSE = "Creative Commons Attribution 2.5 South Africa License"
+
+ # The HuggingFace datasets library doesn't host the datasets but only points to the original files.
+ # This can be an arbitrary nested dict/list of URLs (see below in the `_split_generators` method).
+ _URL = "https://repo.sadilar.org/bitstream/handle/20.500.12185/334/nchlt_sesotho_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y"
+
+ _EXTRACTED_FILE = "NCHLT Sesotho Named Entity Annotated Corpus/Dataset.NCHLT-II.st.NER.Full.txt"
+
+
+ class SesothoNerCorpusConfig(datasets.BuilderConfig):
+     """BuilderConfig for SesothoNerCorpus."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for SesothoNerCorpus.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(SesothoNerCorpusConfig, self).__init__(**kwargs)
+
+
+ class SesothoNerCorpus(datasets.GeneratorBasedBuilder):
+     """SesothoNerCorpus NER dataset."""
+
+     BUILDER_CONFIGS = [
+         SesothoNerCorpusConfig(
+             name="sesotho_ner_corpus",
+             version=datasets.Version("1.0.0"),
+             description="SesothoNerCorpus dataset",
+         ),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "tokens": datasets.Sequence(datasets.Value("string")),
+                     "ner_tags": datasets.Sequence(
+                         datasets.features.ClassLabel(
+                             names=[
+                                 "OUT",
+                                 "B-PERS",
+                                 "I-PERS",
+                                 "B-ORG",
+                                 "I-ORG",
+                                 "B-LOC",
+                                 "I-LOC",
+                                 "B-MISC",
+                                 "I-MISC",
+                             ]
+                         )
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         # Download and extract the corpus archive; everything goes into a single train split.
+         data_dir = dl_manager.download_and_extract(_URL)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": os.path.join(data_dir, _EXTRACTED_FILE)},
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         # The raw file has one "token<TAB>tag" pair per line; blank lines separate sentences.
+         logging.info("⏳ Generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             guid = 0
+             tokens = []
+             ner_tags = []
+             for line in f:
+                 if line == "" or line == "\n":
+                     if tokens:
+                         yield guid, {
+                             "id": str(guid),
+                             "tokens": tokens,
+                             "ner_tags": ner_tags,
+                         }
+                         guid += 1
+                         tokens = []
+                         ner_tags = []
+                 else:
+                     splits = line.split("\t")
+                     tokens.append(splits[0])
+                     ner_tags.append(splits[1].rstrip())
+             # Emit whatever is still buffered once the file ends.
+             yield guid, {
+                 "id": str(guid),
+                 "tokens": tokens,
+                 "ner_tags": ner_tags,
+             }
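
For local development, the loading script above can be exercised by pointing `load_dataset` directly at the script file; a minimal sketch (the local path is illustrative):

```
from datasets import load_dataset

# Run the loading script locally; the path below is illustrative.
ds = load_dataset("./sesotho_ner_corpus.py")
print(ds["train"].features["ner_tags"].feature.names)
# ['OUT', 'B-PERS', 'I-PERS', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']
```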