ljvmiranda921 committed on
Commit
371f072
1 Parent(s): 442fac5

Convert dataset to Parquet (#3)


- Convert dataset to Parquet (4f42950b88068daffdfeb6cdaddefcdf1766e410)
- Delete data file (eec29dee86fa696deab9d501d43670114d92d371)
- Delete data file (7bc81e1325c40461e46868f1996094c3562b293f)
- Delete loading script auxiliary file (ceed5c6fd35283875b53ac57159550e7b1a9dd44)
- Delete loading script (414fb68177d0be7e9cbf949ae10f71b734ae64c1)
- Delete data file (8d31d9744b24fd1199b78eb963d8c5c89f9feabb)
- Delete data file (d96e585635d36d47896f5787327520a18c7759b4)
- Delete data file (ab5b4d4f2884fa55d8b9db412cb0f2bebe6a3cf4)
- Delete data file (1878ee08cf261063cac413f699e7c1618e2b4123)

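For context, the conversion recorded in this commit replaces the loading script with pre-built Parquet shards under `data/`. Below is a minimal sketch of how such a conversion could be reproduced with the `datasets` library; it assumes the old loading script (`tlunified-ner.py`, deleted below) still resolves the IOB splits, and that the repository id matches the `_URL` in that script.

```python
from datasets import load_dataset

# Load the splits via the old loading script (pre-Parquet state of the repository).
dataset = load_dataset("ljvmiranda921/tlunified-ner")

# Write each split as a single Parquet shard, mirroring the files added in this commit.
for split_name, split in dataset.items():
    split.to_parquet(f"data/{split_name}-00000-of-00001.parquet")
```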
.gitignore DELETED
@@ -1,4 +0,0 @@
- assets
- corpus/spacy
- __pycache__/
- project.lock
README.md CHANGED
@@ -1,34 +1,72 @@
  ---
  license: gpl-3.0
  task_categories:
  - token-classification
  task_ids:
  - named-entity-recognition
- language:
- - tl
- size_categories:
- - 1K<n<10K
  pretty_name: TLUnified-NER
  tags:
  - low-resource
  - named-entity-recognition
- annotations_creators:
- - expert-generated
- multilinguality:
- - monolingual
  train-eval-index:
- - config: conllpp
-   task: token-classification
-   task_id: entity_extraction
-   splits:
-     train_split: train
-     eval_split: test
-   col_mapping:
-     tokens: tokens
-     ner_tags: tags
-   metrics:
-   - type: seqeval
-     name: seqeval
  ---

  <!-- SPACY PROJECT: AUTO-GENERATED DOCS START (do not remove) -->
 
  ---
+ annotations_creators:
+ - expert-generated
+ language:
+ - tl
  license: gpl-3.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
  task_categories:
  - token-classification
  task_ids:
  - named-entity-recognition
  pretty_name: TLUnified-NER
  tags:
  - low-resource
  - named-entity-recognition
+ dataset_info:
+   features:
+   - name: id
+     dtype: string
+   - name: tokens
+     sequence: string
+   - name: ner_tags
+     sequence:
+       class_label:
+         names:
+           '0': O
+           '1': B-PER
+           '2': I-PER
+           '3': B-ORG
+           '4': I-ORG
+           '5': B-LOC
+           '6': I-LOC
+   splits:
+   - name: train
+     num_bytes: 3380392
+     num_examples: 6252
+   - name: validation
+     num_bytes: 427069
+     num_examples: 782
+   - name: test
+     num_bytes: 426247
+     num_examples: 782
+   download_size: 971039
+   dataset_size: 4233708
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+   - split: validation
+     path: data/validation-*
+   - split: test
+     path: data/test-*
  train-eval-index:
+ - config: conllpp
+   task: token-classification
+   task_id: entity_extraction
+   splits:
+     train_split: train
+     eval_split: test
+   col_mapping:
+     tokens: tokens
+     ner_tags: tags
+   metrics:
+   - type: seqeval
+     name: seqeval
  ---

  <!-- SPACY PROJECT: AUTO-GENERATED DOCS START (do not remove) -->
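With the `dataset_info` and `configs` metadata above, the splits load straight from the Parquet files and no loading script is needed. A small sketch of reading a split and decoding the integer `ner_tags` back into label names (the repository id is assumed from the deleted loading script's `_URL` further below):

```python
from datasets import load_dataset

# Loads train/validation/test directly from the Parquet files declared in `configs`.
dataset = load_dataset("ljvmiranda921/tlunified-ner")

example = dataset["train"][0]
label_names = dataset["train"].features["ner_tags"].feature.names  # ['O', 'B-PER', ...]

# Pair each token with its decoded NER label.
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(token, label_names[tag_id])
```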
corpus/iob/dev.iob → data/test-00000-of-00001.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:60b4f5cda0630db8bbb78ba3f0665a6a24561c7b55ad837a9fd358146362e968
- size 217710

  version https://git-lfs.github.com/spec/v1
+ oid sha256:50602b8a71d436d297398adeeb4209b2306df63f54fccfbcfac1cd502c654252
+ size 101856
corpus/iob/test.iob → data/train-00000-of-00001.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3bee9209379e16958ef5b8fadd5a6aaff6f02310c46fa3fcb28974e56901c998
- size 216121

  version https://git-lfs.github.com/spec/v1
+ oid sha256:f64e43b6019ae35b4055371b89c12b180510152893155975427d6946d6678a61
+ size 767881
corpus/iob/train.iob → data/validation-00000-of-00001.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c499c3b8f1766f56f7f6b3380e88de44eb7e694941cbbfc3c9583f54132076db
- size 1715843

  version https://git-lfs.github.com/spec/v1
+ oid sha256:ea9bb253e5e6b9827b7e47dfb31edb27d5eafec223cecc3decd13ae2017576c6
+ size 101302
project.yml DELETED
@@ -1,87 +0,0 @@
- title: "TLUnified-NER Corpus"
- description: |
-
-   - **Homepage:** [Github](https://github.com/ljvmiranda921/calamanCy)
-   - **Repository:** [Github](https://github.com/ljvmiranda921/calamanCy)
-   - **Point of Contact:** [email protected]
-
-   ### Dataset Summary
-
-   This dataset contains the annotated TLUnified corpora from Cruz and Cheng
-   (2021). It is a curated sample of around 7,000 documents for the
-   named entity recognition (NER) task. Most of the corpus consists of news
-   reports in Tagalog, resembling the domain of the original CoNLL 2003. There
-   are three entity types: Person (PER), Organization (ORG), and Location (LOC).
-
-   | Dataset     | Examples | PER  | ORG  | LOC  |
-   |-------------|----------|------|------|------|
-   | Train       | 6252     | 6418 | 3121 | 3296 |
-   | Development | 782      | 793  | 392  | 409  |
-   | Test        | 782      | 818  | 423  | 438  |
-
-   ### Data Fields
-
-   The data fields are the same across all splits:
-   - `id`: a `string` feature.
-   - `tokens`: a `list` of `string` features.
-   - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4), `B-LOC` (5), `I-LOC` (6).
-
-   ### Annotation process
-
-   The author, together with two more annotators, labeled curated portions of
-   TLUnified over the course of four months. All annotators are native speakers of
-   Tagalog. For each annotation round, the annotators resolved disagreements,
-   updated the annotation guidelines, and corrected past annotations. They
-   followed the process prescribed by [Reiter
-   (2017)](https://nilsreiter.de/blog/2017/howto-annotation).
-
-   They also measured the inter-annotator agreement (IAA) by computing pairwise
-   comparisons and averaging the results:
-   - Cohen's Kappa (all tokens): 0.81
-   - Cohen's Kappa (annotated tokens only): 0.65
-   - F1-score: 0.91
-
-   ### About this repository
-
-   This repository is a [spaCy project](https://spacy.io/usage/projects) for
-   converting the annotated spaCy files into IOB. The process works as follows: we
-   download the raw corpus from Google Cloud Storage (GCS), convert the spaCy
-   files into a readable IOB format, and parse them with our loading script
-   (i.e., `tlunified-ner.py`). We also ship the IOB files so that they are
-   easier to access.
-
- directories: ["assets", "corpus/spacy", "corpus/iob"]
-
- vars:
-   version: 1.0
-
- assets:
-   - dest: assets/corpus.tar.gz
-     description: "Annotated TLUnified corpora in spaCy format with train, dev, and test splits."
-     url: "https://storage.googleapis.com/ljvmiranda/calamanCy/tl_tlunified_gold/v${vars.version}/corpus.tar.gz"
-
- workflows:
-   all:
-     - "setup-data"
-     - "upload-to-hf"
-
- commands:
-   - name: "setup-data"
-     help: "Prepare the Tagalog corpora used for training various spaCy components"
-     script:
-       - mkdir -p corpus/spacy
-       - tar -xzvf assets/corpus.tar.gz -C corpus/spacy
-       - python -m spacy_to_iob corpus/spacy/ corpus/iob/
-     outputs:
-       - corpus/iob/train.iob
-       - corpus/iob/dev.iob
-       - corpus/iob/test.iob
-
-   - name: "upload-to-hf"
-     help: "Upload dataset to HuggingFace Hub"
-     script:
-       - git push
-     deps:
-       - corpus/iob/train.iob
-       - corpus/iob/dev.iob
-       - corpus/iob/test.iob
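The inter-annotator agreement figures listed in the deleted `project.yml` above come from pairwise comparisons that are then averaged. The following is a minimal sketch of that computation using scikit-learn, with made-up token-level labels for three hypothetical annotators (not the actual annotation data):

```python
from itertools import combinations

from sklearn.metrics import cohen_kappa_score

# Hypothetical token-level annotations from three annotators over the same tokens.
annotations = {
    "annotator_1": ["O", "B-PER", "I-PER", "O", "B-LOC"],
    "annotator_2": ["O", "B-PER", "I-PER", "O", "O"],
    "annotator_3": ["O", "B-PER", "O", "O", "B-LOC"],
}

# Average Cohen's Kappa over all annotator pairs (the "pairwise comparisons").
kappas = [cohen_kappa_score(a, b) for a, b in combinations(annotations.values(), 2)]
print(sum(kappas) / len(kappas))
```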
requirements.txt DELETED
@@ -1,5 +0,0 @@
- spacy
- typer
- datasets
- huggingface_hub
- wasabi
spacy_to_iob.py DELETED
@@ -1,50 +0,0 @@
- from pathlib import Path
-
- import spacy
- import typer
- from spacy.tokens import DocBin
- from wasabi import msg
-
- DELIMITER = "-DOCSTART- -X- O O"
-
-
- def spacy_to_iob(
-     # fmt: off
-     spacy_indir: Path = typer.Argument(..., help="Path to the directory containing the spaCy files."),
-     iob_outdir: Path = typer.Argument(..., help="Path to the directory to save the IOB files."),
-     lang: str = typer.Option("tl", "-l", "--lang", help="Language code for the spaCy vocab."),
-     verbose: bool = typer.Option(False, "-v", "--verbose", help="Print additional information."),
-     delimiter: str = typer.Option(DELIMITER, "-d", "--delimiter", help="Delimiter between examples.")
-     # fmt: on
- ):
-     """Convert spaCy files into IOB-formatted files."""
-     nlp = spacy.blank(lang)
-     for spacy_file in spacy_indir.glob("*.spacy"):
-         msg.text(f"Converting {str(spacy_file)}", show=verbose)
-         doc_bin = DocBin().from_disk(spacy_file)
-         docs = doc_bin.get_docs(nlp.vocab)
-
-         lines = []  # container for the IOB lines later on
-         for doc in docs:
-             lines.append(delimiter)
-             lines.append("\n\n")
-             for token in doc:
-                 label = (
-                     f"{token.ent_iob_}-{token.ent_type_}"
-                     if token.ent_iob_ != "O"
-                     else "O"
-                 )
-                 line = f"{token.text}\t{label}"
-                 lines.append(line)
-                 lines.append("\n")
-             lines.append("\n")
-
-         iob_file = iob_outdir / f"{spacy_file.stem}.iob"
-         with open(iob_file, "w", encoding="utf-8") as f:
-             f.writelines(lines)
-
-         msg.good(f"Saved to {iob_file}")
-
-
- if __name__ == "__main__":
-     typer.run(spacy_to_iob)
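To see what the deleted script above emits, here is a small sketch that runs the same token-to-IOB logic on a hand-built `Doc`; the Tagalog sentence and entity spans are invented for illustration only.

```python
import spacy
from spacy.tokens import Doc

nlp = spacy.blank("tl")

# A hand-built example document with one PER and one LOC entity.
doc = Doc(
    nlp.vocab,
    words=["Si", "Juan", "ay", "pumunta", "sa", "Maynila", "."],
    ents=["O", "B-PER", "O", "O", "O", "B-LOC", "O"],
)

# Same label logic as spacy_to_iob: "<IOB>-<TYPE>" for entity tokens, "O" otherwise.
for token in doc:
    label = f"{token.ent_iob_}-{token.ent_type_}" if token.ent_iob_ != "O" else "O"
    print(f"{token.text}\t{label}")
# Si       O
# Juan     B-PER
# ...
# Maynila  B-LOC
```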
tlunified-ner.py DELETED
@@ -1,95 +0,0 @@
- from typing import List
-
- import datasets
-
- logger = datasets.logging.get_logger(__name__)
-
- _DESCRIPTION = """
- This dataset contains the annotated TLUnified corpora from Cruz and Cheng
- (2021). It is a curated sample of around 7,000 documents for the
- named entity recognition (NER) task. Most of the corpus consists of news
- reports in Tagalog, resembling the domain of the original CoNLL 2003. There
- are three entity types: Person (PER), Organization (ORG), and Location (LOC).
- """
- _LICENSE = """GNU GPL v3.0"""
- _URL = "https://huggingface.co/ljvmiranda921/tlunified-ner"
- _CLASSES = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
- _VERSION = "1.0.0"
-
-
- class TLUnifiedNERConfig(datasets.BuilderConfig):
-     def __init__(self, **kwargs):
-         super(TLUnifiedNERConfig, self).__init__(**kwargs)
-
-
- class TLUnifiedNER(datasets.GeneratorBasedBuilder):
-     """Contains an annotated version of the TLUnified dataset from Cruz and Cheng (2021)."""
-
-     VERSION = datasets.Version(_VERSION)
-
-     def _info(self) -> "datasets.DatasetInfo":
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "tokens": datasets.Sequence(datasets.Value("string")),
-                     "ner_tags": datasets.Sequence(
-                         datasets.features.ClassLabel(names=_CLASSES)
-                     ),
-                 }
-             ),
-             homepage=_URL,
-             supervised_keys=None,
-         )
-
-     def _split_generators(
-         self, dl_manager: "datasets.builder.DownloadManager"
-     ) -> List["datasets.SplitGenerator"]:
-         """Return a list of SplitGenerators that organizes the splits."""
-         # The repository ships {train,dev,test}.iob files. The _generate_examples function
-         # below defines how these files are parsed.
-         data_files = {
-             "train": dl_manager.download_and_extract("corpus/iob/train.iob"),
-             "dev": dl_manager.download_and_extract("corpus/iob/dev.iob"),
-             "test": dl_manager.download_and_extract("corpus/iob/test.iob"),
-         }
-
-         return [
-             # fmt: off
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_files["train"]}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": data_files["dev"]}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": data_files["test"]}),
-             # fmt: on
-         ]
-
-     def _generate_examples(self, filepath: str):
-         """Defines how examples are parsed from the IOB file."""
-         logger.info("⏳ Generating examples from = %s", filepath)
-         with open(filepath, encoding="utf-8") as f:
-             guid = 0
-             tokens = []
-             ner_tags = []
-             for line in f:
-                 if line.startswith("-DOCSTART-") or line == "" or line == "\n":
-                     if tokens:
-                         yield guid, {
-                             "id": str(guid),
-                             "tokens": tokens,
-                             "ner_tags": ner_tags,
-                         }
-                         guid += 1
-                         tokens = []
-                         ner_tags = []
-                 else:
-                     # TLUnified-NER IOB files are tab-separated.
-                     token, ner_tag = line.split("\t")
-                     tokens.append(token)
-                     ner_tags.append(ner_tag.rstrip())
-             # Last example
-             if tokens:
-                 yield guid, {
-                     "id": str(guid),
-                     "tokens": tokens,
-                     "ner_tags": ner_tags,
-                 }
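As a quick sanity check of the parsing loop in `_generate_examples` above, here is a self-contained sketch that feeds a tiny, made-up IOB string through the same logic (the tokens are hypothetical, not drawn from the corpus):

```python
import io

# Two hypothetical documents in the IOB layout the deleted script consumed.
iob_text = (
    "-DOCSTART- -X- O O\n"
    "\n"
    "Si\tO\n"
    "Juan\tB-PER\n"
    "\n"
    "-DOCSTART- -X- O O\n"
    "\n"
    "Maynila\tB-LOC\n"
)

examples = []
tokens, ner_tags, guid = [], [], 0
for line in io.StringIO(iob_text):
    if line.startswith("-DOCSTART-") or line == "" or line == "\n":
        if tokens:
            examples.append({"id": str(guid), "tokens": tokens, "ner_tags": ner_tags})
            guid += 1
            tokens, ner_tags = [], []
    else:
        token, ner_tag = line.split("\t")
        tokens.append(token)
        ner_tags.append(ner_tag.rstrip())
# Flush the last example, as the loading script does after the loop.
if tokens:
    examples.append({"id": str(guid), "tokens": tokens, "ner_tags": ner_tags})

print(examples)
# [{'id': '0', 'tokens': ['Si', 'Juan'], 'ner_tags': ['O', 'B-PER']},
#  {'id': '1', 'tokens': ['Maynila'], 'ner_tags': ['B-LOC']}]
```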