lhoestq (HF staff) committed
Commit 0754063
1 Parent(s): 72bffe5

Fix crd3 (#4705)

* fix crd3

* fix dummy data

Commit from https://github.com/huggingface/datasets/commit/c15b391942764152f6060b59921b09cacc5f22a6

Files changed (4)
  1. README.md +4 -15
  2. crd3.py +11 -9
  3. dataset_infos.json +1 -1
  4. dummy/0.0.0/dummy_data.zip +2 -2
README.md CHANGED

@@ -55,9 +55,6 @@ paperswithcode_id: crd3
  - **Repository:** [CRD3 repository](https://github.com/RevanthRameshkumar/CRD3)
  - **Paper:** [Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset](https://www.aclweb.org/anthology/2020.acl-main.459/)
  - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 279.93 MB
- - **Size of the generated dataset:** 4020.33 MB
- - **Total amount of disk used:** 4300.25 MB
 
  ### Dataset Summary
 
@@ -69,6 +66,7 @@ collaboration and spoken interaction. For each dialogue, there are a large numbe
  and semantic ties to the previous dialogues.
 
  ### Supported Tasks and Leaderboards
+
  `summarization`: The dataset can be used to train a model for abstractive summarization. A [fast abstractive summarization-RL](https://github.com/ChenRocks/fast_abs_rl) model was presented as a baseline, which achieves ROUGE-L-F1 of 25.18.
 
  ### Languages
@@ -79,13 +77,8 @@ The text in the dataset is in English, as spoken by actors on The Critical Role
 
  ### Data Instances
 
- #### default
-
- - **Size of downloaded dataset files:** 279.93 MB
- - **Size of the generated dataset:** 4020.33 MB
- - **Total amount of disk used:** 4300.25 MB
-
  An example of 'train' looks as follows.
+
  ```
  {
  "alignment_score": 3.679936647415161,
@@ -105,7 +98,6 @@ An example of 'train' looks as follows.
 
  The data fields are the same among all splits.
 
- #### default
  - `chunk`: a `string` feature.
  - `chunk_id`: a `int32` feature.
  - `turn_start`: a `int32` feature.
@@ -120,7 +112,7 @@ The data fields are the same among all splits.
 
  | name | train |validation| test |
  |-------|------:|---------:|------:|
- |default|26,232| 3,470|4,541|
+ |default|38,969| 6,327|7,500|
 
  ## Dataset Creation
 
@@ -180,8 +172,7 @@ This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 Inter
 
  ### Citation Information
 
- ```
-
+ ```bibtex
  @inproceedings{
  title = {Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset},
  author = {Rameshkumar, Revanth and Bailey, Peter},
@@ -189,10 +180,8 @@ year = {2020},
  publisher = {Association for Computational Linguistics},
  conference = {ACL}
  }
-
  ```
 
-
  ### Contributions
 
  Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
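The fields listed in the card above can be sketched as plain Python structures. This is purely illustrative: the field names and dtypes come from the README's "Data fields" section, but the `Turn`/`Chunk` dataclasses and the sample values are assumptions, not part of the dataset script.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Turn:
    # Per the updated schema, each turn carries lists of speaker
    # names and utterances, plus a turn number.
    names: List[str]
    utterances: List[str]
    number: int


@dataclass
class Chunk:
    # One aligned (summary chunk, dialogue span) example.
    chunk: str            # abstractive summary chunk (string)
    chunk_id: int         # int32 in the dataset
    turn_start: int       # int32
    turn_end: int         # int32
    alignment_score: float  # float32
    turns: List[Turn] = field(default_factory=list)


# Hypothetical example instance (values invented for illustration).
example = Chunk(
    chunk="summary text",
    chunk_id=0,
    turn_start=0,
    turn_end=2,
    alignment_score=3.68,
    turns=[Turn(names=["MATT"], utterances=["Hello"], number=1)],
)
```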
crd3.py CHANGED

@@ -45,11 +45,11 @@ collaboration and spoken interaction. For each dialogue, there are a large numbe
  and semantic ties to the previous dialogues.
  """
 
- _URL = "https://github.com/RevanthRameshkumar/CRD3/archive/master.zip"
+ _URL = "https://huggingface.co/datasets/crd3/resolve/72bffe55b4d5bf19b530d3e417447b3384ba3673/data/aligned%20data.zip"
 
 
  def get_train_test_dev_files(files, test_split, train_split, dev_split):
-     test_files = dev_files = train_files = []
+     test_files, dev_files, train_files = [], [], []
      for file in files:
          filename = os.path.split(file)[1].split("_")[0]
          if filename in test_split:
@@ -88,10 +88,12 @@ class CRD3(datasets.GeneratorBasedBuilder):
      )
 
      def _split_generators(self, dl_manager):
-         path = dl_manager.download_and_extract(_URL)
-         test_file = os.path.join(path, "CRD3-master", "data", "aligned data", "test_files")
-         train_file = os.path.join(path, "CRD3-master", "data", "aligned data", "train_files")
-         dev_file = os.path.join(path, "CRD3-master", "data", "aligned data", "val_files")
+         root = dl_manager.download_and_extract(_URL)
+         path = os.path.join(root, "aligned data")
+
+         test_file = os.path.join(path, "test_files")
+         train_file = os.path.join(path, "train_files")
+         dev_file = os.path.join(path, "val_files")
          with open(test_file, encoding="utf-8") as f:
              test_splits = [file.replace("\n", "") for file in f.readlines()]
 
@@ -99,9 +101,9 @@ class CRD3(datasets.GeneratorBasedBuilder):
          train_splits = [file.replace("\n", "") for file in f.readlines()]
          with open(dev_file, encoding="utf-8") as f:
              dev_splits = [file.replace("\n", "") for file in f.readlines()]
-         c2 = "CRD3-master/data/aligned data/c=2"
-         c3 = "CRD3-master/data/aligned data/c=3"
-         c4 = "CRD3-master/data/aligned data/c=4"
+         c2 = "c=2"
+         c3 = "c=3"
+         c4 = "c=4"
          files = [os.path.join(path, c2, file) for file in sorted(os.listdir(os.path.join(path, c2)))]
          files.extend([os.path.join(path, c3, file) for file in sorted(os.listdir(os.path.join(path, c3)))])
          files.extend([os.path.join(path, c4, file) for file in sorted(os.listdir(os.path.join(path, c4)))])
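The one-character-looking change in `get_train_test_dev_files` is the core bug fix: chained assignment (`a = b = c = []`) binds all three names to the *same* list object, so every file appended to any split ended up in all of them. A minimal sketch of the difference (the `"C1E001"` filename is just a placeholder):

```python
# Buggy form: chained assignment creates ONE list object with
# three names, so appending to test_files also "fills" dev_files.
test_files = dev_files = train_files = []
test_files.append("C1E001")
shared_bug = dev_files == ["C1E001"]  # dev_files is the same object

# Fixed form (as in the patch): three separate list literals,
# one independent list per split.
test_files, dev_files, train_files = [], [], []
test_files.append("C1E001")
independent = dev_files == []  # dev_files is now its own empty list
```

This aliasing is why the old `dataset_infos.json` reported identical `num_examples` (52796) for train, test, and validation, and the new one reports distinct counts.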
dataset_infos.json CHANGED

@@ -1 +1 @@
- {"default": {"description": "\nStorytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset.\nCritical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game.\nThe dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding\nabstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player\ncollaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail,\nand semantic ties to the previous dialogues.\n", "citation": "\n@inproceedings{\ntitle = {Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset},\nauthor = {Rameshkumar, Revanth and Bailey, Peter},\nyear = {2020},\npublisher = {Association for Computational Linguistics},\nconference = {ACL}\n}\n ", "homepage": "https://github.com/RevanthRameshkumar/CRD3", "license": "", "features": {"chunk": {"dtype": "string", "id": null, "_type": "Value"}, "chunk_id": {"dtype": "int32", "id": null, "_type": "Value"}, "turn_start": {"dtype": "int32", "id": null, "_type": "Value"}, "turn_end": {"dtype": "int32", "id": null, "_type": "Value"}, "alignment_score": {"dtype": "float32", "id": null, "_type": "Value"}, "turns": {"feature": {"names": {"dtype": "string", "id": null, "_type": "Value"}, "utterances": {"dtype": "string", "id": null, "_type": "Value"}, "number": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "crd3", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 318560673, "num_examples": 52796, "dataset_name": "crd3"}, "test": {"name": "test", "num_bytes": 318560673, "num_examples": 52796, "dataset_name": "crd3"}, "validation": {"name": "validation", "num_bytes": 318560673, "num_examples": 52796, "dataset_name": "crd3"}}, "download_checksums": {"https://github.com/RevanthRameshkumar/CRD3/archive/master.zip": {"num_bytes": 294222220, "checksum": "c77a937394f265735ba54b32a7a051f77a97d264c74b0535dee77ef9791815b5"}}, "download_size": 294222220, "post_processing_size": null, "dataset_size": 955682019, "size_in_bytes": 1249904239}}
+ {"default": {"description": "\nStorytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset.\nCritical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game.\nThe dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding\nabstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player\ncollaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail,\nand semantic ties to the previous dialogues.\n", "citation": "\n@inproceedings{\ntitle = {Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset},\nauthor = {Rameshkumar, Revanth and Bailey, Peter},\nyear = {2020},\npublisher = {Association for Computational Linguistics},\nconference = {ACL}\n}\n ", "homepage": "https://github.com/RevanthRameshkumar/CRD3", "license": "", "features": {"chunk": {"dtype": "string", "id": null, "_type": "Value"}, "chunk_id": {"dtype": "int32", "id": null, "_type": "Value"}, "turn_start": {"dtype": "int32", "id": null, "_type": "Value"}, "turn_end": {"dtype": "int32", "id": null, "_type": "Value"}, "alignment_score": {"dtype": "float32", "id": null, "_type": "Value"}, "turns": [{"names": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "utterances": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "number": {"dtype": "int32", "id": null, "_type": "Value"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "crd3", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 236605152, "num_examples": 38969, "dataset_name": "crd3"}, "test": {"name": "test", "num_bytes": 40269203, "num_examples": 7500, "dataset_name": "crd3"}, "validation": {"name": "validation", "num_bytes": 41543528, "num_examples": 6327, "dataset_name": "crd3"}}, "download_checksums": {"https://huggingface.co/datasets/crd3/resolve/72bffe55b4d5bf19b530d3e417447b3384ba3673/data/aligned%20data.zip": {"num_bytes": 117519820, "checksum": "c66bd9f7848bcd514a35c154edd2fc874f1a3076876d8bd7208bf3caf4b7fb0b"}}, "download_size": 117519820, "post_processing_size": null, "dataset_size": 318417883, "size_in_bytes": 435937703}}
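Besides the new split sizes and download URL, the `dataset_infos.json` change reworks the `turns` feature: each turn's `names` and `utterances` are now `Sequence`-of-string features (lists per turn) instead of single strings. A minimal stdlib check over a fragment copied from the new info file:

```python
import json

# Fragment of the updated "turns" spec from dataset_infos.json
# (copied from the + side of the diff above).
new_turns_spec = json.loads(
    '[{"names": {"feature": {"dtype": "string", "id": null, "_type": "Value"},'
    ' "length": -1, "id": null, "_type": "Sequence"},'
    ' "utterances": {"feature": {"dtype": "string", "id": null, "_type": "Value"},'
    ' "length": -1, "id": null, "_type": "Sequence"},'
    ' "number": {"dtype": "int32", "id": null, "_type": "Value"}}]'
)

# "names" and "utterances" are now Sequence features; "number" stays int32.
names_type = new_turns_spec[0]["names"]["_type"]
number_dtype = new_turns_spec[0]["number"]["dtype"]
```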
dummy/0.0.0/dummy_data.zip CHANGED

@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:015b912d23db6bd0c4910dc4d4abd455b780e35f55199dd2359c7a7cf24a5157
- size 21265
+ oid sha256:9b58147635d59ebacc64e62d0d8855902cb14b447e28acb7794b97c48ea35cef
+ size 22306