NavidVafaei committed
Commit
bd8967f
1 Parent(s): 56878bb

Upload 3 files

Files changed (3)
  1. README.md +206 -0
  2. dataset_infos.json +1 -0
  3. rottento.py +109 -0
README.md ADDED
@@ -0,0 +1,206 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ language:
+ - en
+ license:
+ - cc-by-nc-nd-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - summarization
+ task_ids: []
+ paperswithcode_id: rottento
+ pretty_name: rottento Corpus
+ tags:
+ - conversations-summarization
+ dataset_info:
+   features:
+   - name: movie
+     dtype: string
+   - name: id
+     dtype: string
+   - name: reviews
+     sequence: string
+   - name: summary
+     dtype: string
+   config_name: rottento
+   splits:
+   - name: train
+     num_bytes: 9479141
+     num_examples: 14732
+   - name: test
+     num_bytes: 534492
+     num_examples: 819
+   - name: validation
+     num_bytes: 516431
+     num_examples: 818
+   download_size: 2944100
+   dataset_size: 10530064
+ train-eval-index:
+ - config: rottento
+   task: summarization
+   task_id: summarization
+   splits:
+     eval_split: test
+   col_mapping:
+     reviews: text
+     summary: target
+ ---
+
+ # Dataset Card for rottento Corpus
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://arxiv.org/abs/1911.12237v2
+ - **Repository:** [Needs More Information]
+ - **Paper:** https://arxiv.org/abs/1911.12237v2
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [Needs More Information]
+
+ ### Dataset Summary
+
+ The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. The linguists were asked to create conversations similar to those they write on a daily basis, reflecting the topic distribution of their real-life messenger conversations. The style and register are diverse: conversations may be informal, semi-formal, or formal, and they may contain slang, emoticons, and typos. The conversations were then annotated with summaries, under the assumption that a summary should be a concise, third-person brief of what the people talked about in the conversation.
+ The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
+
+ ### Supported Tasks and Leaderboards
+
+ [Needs More Information]
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The dataset is made of 16369 conversations distributed uniformly into four groups based on the number of utterances per conversation: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
+
+ The first instance in the training set:
+ {'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
+
+ ### Data Fields
+
+ - dialogue: the text of the dialogue.
+ - summary: a human-written summary of the dialogue.
+ - id: the unique id of an example.
+
+ ### Data Splits
+
+ - train: 14732
+ - val: 818
+ - test: 819
+
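+ The split sizes above can be checked by loading the dataset with the `datasets` library. This is a minimal sketch, not part of the card template; it assumes the hub id `NavidVafaei/rottentomato01` (taken from this repository's download URL) resolves to this dataset:
+
+ ```python
+ from datasets import load_dataset
+
+ # Hub id is an assumption based on the download URL in dataset_infos.json.
+ dataset = load_dataset("NavidVafaei/rottentomato01")
+
+ for name, split in dataset.items():
+     print(name, split.num_rows)  # expected: train 14732, test 819, validation 818
+
+ # Inspect the fields of the first training example.
+ print(dataset["train"][0])
+ ```
+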
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ From the paper:
+ > In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
+ > As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ From the paper:
+ > We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.
+
+ #### Who are the source language producers?
+
+ Linguists.
+
+ ### Annotations
+
+ #### Annotation process
+
+ From the paper:
+ > Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
+
+ #### Who are the annotators?
+
+ Language experts.
+
+ ### Personal and Sensitive Information
+
+ None; see [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) above.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ Non-commercial licence: CC BY-NC-ND 4.0
+
+ ### Citation Information
+
+ ```
+ @inproceedings{gliwa-etal-2019-samsum,
+     title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
+     author = "Gliwa, Bogdan  and
+       Mochol, Iwona  and
+       Biesek, Maciej  and
+       Wawer, Aleksander",
+     booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
+     month = nov,
+     year = "2019",
+     address = "Hong Kong, China",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/D19-5409",
+     doi = "10.18653/v1/D19-5409",
+     pages = "70--79"
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@cccntu](https://github.com/cccntu) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"rottento": {"description": "\nRottento Corpus contains films reviwes with golden annotated\nsummaries.\nThere are four features:\n - movie: name of movie.\n - reviws: list of reviews.\n - summary: written summary of the reviews.\n - id: id of a example.\n", "citation": "\n@article{-,\n title={Summarization},\n author={-},\n journal={-},\n year={2023}\n}\n", "homepage": "-", "license": "CC BY-NC-ND 4.0", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "reviews": {"dtype": "array", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}}, builder_name": "rottento", "config_name": "rottento", "version": "0.0.0", "splits": {"train": {"name": "train", "num_bytes": 9479141, "num_examples": 14732, "dataset_name": "rottento"}, "test": {"name": "test", "num_bytes": 534492, "num_examples": 819, "dataset_name": "rottento"}, "validation": {"name": "validation", "num_bytes": 516431, "num_examples": 818, "dataset_name": "rottento"}}, "download_checksums": {"https://huggingface.co/datasets/NavidVafaei/rottentomato01/tree/main/data/corpus.7z": {"num_bytes": 2944100, "checksum": "a97674c66726f66b98a08ca5e8868fb8af9d4843f2b05c4f839bc5cfe91e8899"}}, "download_size": 2944100, "post_processing_size": null, "dataset_size": 10530064, "size_in_bytes": 13474164}}
rottento.py ADDED
@@ -0,0 +1,109 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Rottento dataset."""
+
+
+ import json
+
+ import py7zr
+
+ import datasets
+
+
+ _CITATION = """
+ @article{rottento2023,
+   title={rottento Corpus: Dataset for Abstractive Summarization},
+   author={-},
+   journal={-},
+   year={2023}
+ }
+ """
+
+ _DESCRIPTION = """
+ rottento Corpus contains film reviews with annotated
+ summaries.
+ """
+
+ _HOMEPAGE = "-"
+
+ _LICENSE = "CC BY-NC-ND 4.0"
+
+ _URL = "https://huggingface.co/datasets/NavidVafaei/rottentomato01/resolve/main/data/corpus.7z"
+
+
+ class Rottento(datasets.GeneratorBasedBuilder):
+     """rottento Corpus dataset."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="rottento"),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "id": datasets.Value("string"),
+                 "movie": datasets.Value("string"),
+                 # Each example carries a list of review strings.
+                 "reviews": datasets.Sequence(datasets.Value("string")),
+                 "summary": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # Download the single .7z archive once; each split generator receives
+         # the archive path together with the name of its JSON member file.
+         path = dl_manager.download(_URL)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": (path, "train.json"),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": (path, "test.json"),
+                     "split": "test",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": (path, "val.json"),
+                     "split": "val",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+         path, fname = filepath
+         # Extract the archive members in memory and keep only the JSON file
+         # that belongs to the requested split.
+         with open(path, "rb") as f:
+             with py7zr.SevenZipFile(f, "r") as z:
+                 for name, bio in z.readall().items():
+                     if name == fname:
+                         data = json.load(bio)
+                         for example in data:
+                             yield example["id"], example
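+
+
+ # A minimal local smoke test, shown as a hedged sketch (the script path and
+ # split name below are assumptions, not part of the loader itself):
+ #
+ #   from datasets import load_dataset
+ #   ds = load_dataset("rottento.py", split="train")
+ #   print(ds[0])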