lhoestq (HF staff) committed
Commit c639df6
1 Parent(s): 9dcf65e

Dataset infos in yaml (#4926)


* wip

* fix Features yaml

* splits to yaml

* add _to_yaml_list

* style

* example: conll2000

* example: crime_and_punish

* add pyyaml dependency

* remove unused imports

* remove validation tests

* style

* allow dataset_infos to be struct or list in YAML

* fix test

* style

* update "datasets-cli test" + remove "version"

* remove config definitions in conll2000 and crime_and_punish

* remove versions for conll2000 and crime_and_punish

* move conll2000 and cap dummy data

* fix test

* add tests

* comments and tests

* more test

* don't mention the dataset_infos.json file in docs

* nit in docs

* docs

* dataset_infos -> dataset_info

* again

* use id2label in class_label

* update conll2000

* fix utf-8 yaml dump

* --save_infos -> --save_info

* Apply suggestions from code review

Co-authored-by: Polina Kazakova <[email protected]>

* style

* fix reloading a single dataset_info

* push info to README.md in push_to_hub

* update test

Co-authored-by: Polina Kazakova <[email protected]>

Commit from https://github.com/huggingface/datasets/commit/67e65c90e9490810b89ee140da11fdd13c356c9c
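The diffs below show the change in practice: the machine-readable metadata that previously lived in `dataset_infos.json` now sits under a `dataset_info` key in the README's YAML front matter, and per the commit notes it is regenerated with `datasets-cli test <script> --save_info` (renamed from `--save_infos`). A minimal sketch of reading that section back with pyyaml, the dependency this commit adds; `read_dataset_info` is a hypothetical helper written for illustration, not an API of the `datasets` library:

```python
# Minimal sketch: parse the `dataset_info` block from a README's YAML
# front matter, as introduced by this commit. `read_dataset_info` is a
# hypothetical helper, not a `datasets` API.
import yaml  # pyyaml, added as a dependency in this commit


def read_dataset_info(readme_path: str):
    with open(readme_path, encoding="utf-8") as f:
        text = f.read()
    # Front matter is delimited by a leading and a closing "---" line.
    _, front_matter, _ = text.split("---", 2)
    metadata = yaml.safe_load(front_matter)
    # Per the commit notes, this can be a single struct or a list
    # (one entry per config).
    return metadata.get("dataset_info")


info = read_dataset_info("README.md")
print(info["features"][0])  # e.g. {'name': 'id', 'dtype': 'string'}
```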

README.md CHANGED
@@ -3,6 +3,96 @@ language:
 - en
 paperswithcode_id: conll-2000-1
 pretty_name: CoNLL-2000
+dataset_info:
+  features:
+  - name: id
+    dtype: string
+  - name: tokens
+    sequence: string
+  - name: pos_tags
+    sequence:
+      class_label:
+        names:
+          0: ''''''
+          1: '#'
+          2: $
+          3: (
+          4: )
+          5: ','
+          6: .
+          7: ':'
+          8: '``'
+          9: CC
+          10: CD
+          11: DT
+          12: EX
+          13: FW
+          14: IN
+          15: JJ
+          16: JJR
+          17: JJS
+          18: MD
+          19: NN
+          20: NNP
+          21: NNPS
+          22: NNS
+          23: PDT
+          24: POS
+          25: PRP
+          26: PRP$
+          27: RB
+          28: RBR
+          29: RBS
+          30: RP
+          31: SYM
+          32: TO
+          33: UH
+          34: VB
+          35: VBD
+          36: VBG
+          37: VBN
+          38: VBP
+          39: VBZ
+          40: WDT
+          41: WP
+          42: WP$
+          43: WRB
+  - name: chunk_tags
+    sequence:
+      class_label:
+        names:
+          0: O
+          1: B-ADJP
+          2: I-ADJP
+          3: B-ADVP
+          4: I-ADVP
+          5: B-CONJP
+          6: I-CONJP
+          7: B-INTJ
+          8: I-INTJ
+          9: B-LST
+          10: I-LST
+          11: B-NP
+          12: I-NP
+          13: B-PP
+          14: I-PP
+          15: B-PRT
+          16: I-PRT
+          17: B-SBAR
+          18: I-SBAR
+          19: B-UCP
+          20: I-UCP
+          21: B-VP
+          22: I-VP
+  splits:
+  - name: test
+    num_bytes: 1201151
+    num_examples: 2013
+  - name: train
+    num_bytes: 5356965
+    num_examples: 8937
+  download_size: 3481560
+  dataset_size: 6558116
 ---
 
 # Dataset Card for "conll2000"
@@ -173,4 +263,4 @@ The data fields are the same among all splits.
 
 ### Contributions
 
-Thanks to [@vblagoje](https://github.com/vblagoje), [@jplu](https://github.com/jplu) for adding this dataset.
+Thanks to [@vblagoje](https://github.com/vblagoje), [@jplu](https://github.com/jplu) for adding this dataset.
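The `names` mappings above use integer-to-label form, matching the "use id2label in class_label" commit note. A sketch of the `ClassLabel` feature such a mapping describes; the dict-to-list conversion below is illustrative, not the library's internal loader:

```python
# Sketch: a YAML `names` mapping like {0: O, 1: B-ADJP, ...} corresponds
# to a ClassLabel feature. Reconstructing one from an id2label dict.
from datasets import ClassLabel, Features, Sequence, Value

id2label = {0: "O", 1: "B-ADJP", 2: "I-ADJP"}  # truncated for brevity
chunk_tags = Sequence(ClassLabel(names=[id2label[i] for i in sorted(id2label)]))

features = Features(
    {
        "id": Value("string"),
        "tokens": Sequence(Value("string")),
        "chunk_tags": chunk_tags,
    }
)
print(features["chunk_tags"].feature.int2str(1))  # -> "B-ADJP"
```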
conll2000.py CHANGED
@@ -53,25 +53,9 @@ _TRAINING_FILE = "train.txt"
 _TEST_FILE = "test.txt"
 
 
-class Conll2000Config(datasets.BuilderConfig):
-    """BuilderConfig for Conll2000"""
-
-    def __init__(self, **kwargs):
-        """BuilderConfig forConll2000.
-
-        Args:
-            **kwargs: keyword arguments forwarded to super.
-        """
-        super(Conll2000Config, self).__init__(**kwargs)
-
-
 class Conll2000(datasets.GeneratorBasedBuilder):
     """Conll2000 dataset."""
 
-    BUILDER_CONFIGS = [
-        Conll2000Config(name="conll2000", version=datasets.Version("1.0.0"), description="Conll2000 dataset"),
-    ]
-
     def _info(self):
         return datasets.DatasetInfo(
             description=_DESCRIPTION,
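With `Conll2000Config` and `BUILDER_CONFIGS` removed, the builder falls back to the library's single default config, which is why the dummy data below moves from a `conll2000/1.0.0` directory to `0.0.0`. A sketch for inspecting the result; the exact default name and version are assumptions about the library's behavior at the time, not spelled out in this commit:

```python
# Sketch: without BUILDER_CONFIGS, the builder gets one default config.
# The printed values below are assumptions (name "default", version 0.0.0,
# consistent with the dummy-data move in this commit).
from datasets import load_dataset_builder

builder = load_dataset_builder("conll2000")
print(builder.config.name)     # assumed: "default"
print(builder.config.version)  # assumed: 0.0.0 when no version is declared
print(builder.info.splits)     # split sizes, now also recorded in the YAML
```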
dataset_infos.json DELETED
@@ -1 +0,0 @@
-{"conll2000": {"description": " Text chunking consists of dividing a text in syntactically correlated parts of words. For example, the sentence\n He reckons the current account deficit will narrow to only # 1.8 billion in September . can be divided as follows:\n[NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP to ] [NP only # 1.8 billion ]\n[PP in ] [NP September ] .\n\nText chunking is an intermediate step towards full parsing. It was the shared task for CoNLL-2000. Training and test\ndata for this task is available. This data consists of the same partitions of the Wall Street Journal corpus (WSJ)\nas the widely used data for noun phrase chunking: sections 15-18 as training data (211727 tokens) and section 20 as\ntest data (47377 tokens). The annotation of the data has been derived from the WSJ corpus by a program written by\nSabine Buchholz from Tilburg University, The Netherlands.\n", "citation": "@inproceedings{tksbuchholz2000conll,\n author = \"Tjong Kim Sang, Erik F. and Sabine Buchholz\",\n title = \"Introduction to the CoNLL-2000 Shared Task: Chunking\",\n editor = \"Claire Cardie and Walter Daelemans and Claire\n Nedellec and Tjong Kim Sang, Erik\",\n booktitle = \"Proceedings of CoNLL-2000 and LLL-2000\",\n publisher = \"Lisbon, Portugal\",\n pages = \"127--132\",\n year = \"2000\"\n}\n", "homepage": "https://www.clips.uantwerpen.be/conll2000/chunking/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"num_classes": 44, "names": ["''", "#", "$", "(", ")", ",", ".", ":", "``", "CC", "CD", "DT", "EX", "FW", "IN", "JJ", "JJR", "JJS", "MD", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "chunk_tags": {"feature": {"num_classes": 23, "names": ["O", "B-ADJP", "I-ADJP", "B-ADVP", "I-ADVP", "B-CONJP", "I-CONJP", "B-INTJ", "I-INTJ", "B-LST", "I-LST", "B-NP", "I-NP", "B-PP", "I-PP", "B-PRT", "I-PRT", "B-SBAR", "I-SBAR", "B-UCP", "I-UCP", "B-VP", "I-VP"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "conll2000", "config_name": "conll2000", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5356965, "num_examples": 8937, "dataset_name": "conll2000"}, "test": {"name": "test", "num_bytes": 1201151, "num_examples": 2013, "dataset_name": "conll2000"}}, "download_checksums": {"https://github.com/teropa/nlp/raw/master/resources/corpora/conll2000/train.txt": {"num_bytes": 2842164, "checksum": "82033cd7a72b209923a98007793e8f9de3abc1c8b79d646c50648eb949b87cea"}, "https://github.com/teropa/nlp/raw/master/resources/corpora/conll2000/test.txt": {"num_bytes": 639396, "checksum": "73b7b1e565fa75a1e22fe52ecdf41b6624d6f59dacb591d44252bf4d692b1628"}}, "download_size": 3481560, "post_processing_size": null, "dataset_size": 6558116, "size_in_bytes": 10039676}}
dummy/{conll2000/1.0.0 → 0.0.0}/dummy_data.zip RENAMED
File without changes
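Finally, per the "push info to README.md in push_to_hub" note, pushing a dataset now also writes its computed info into the target repo's README front matter. A minimal sketch; `username/conll2000-copy` is a placeholder repo id, and the call assumes you are logged in to the Hub:

```python
# Sketch: per the commit, push_to_hub now also records the computed
# dataset_info in the pushed repo's README.md front matter.
from datasets import load_dataset

ds = load_dataset("conll2000", split="train")
ds.push_to_hub("username/conll2000-copy")  # placeholder repo id
# The generated README.md on the Hub then carries a `dataset_info`
# section like the one added to this repo's README above.
```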