---
language:
- en
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- conditional-text-generation
task_ids:
- summarization
---
# GovReport dataset for summarization
Dataset for summarization of long documents.\
Adapted from this [repo](https://github.com/luyang-huang96/LongDocSum) and this [paper](https://arxiv.org/pdf/2104.02112.pdf)\
This dataset is compatible with the [`run_summarization.py`](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) script from Transformers if you add this line to the `summarization_name_mapping` variable:
```python
"ccdv/govreport-summarization": ("report", "summary")
```
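For illustration, here is a minimal sketch of how such a dataset-to-column mapping is typically consumed (the dict entry mirrors the line above; the helper function is hypothetical, not part of `run_summarization.py`):

```python
# Illustrative sketch: a dataset-name -> (text column, summary column) mapping,
# mirroring the summarization_name_mapping entry shown above.
summarization_name_mapping = {
    "ccdv/govreport-summarization": ("report", "summary"),
}

def get_column_names(dataset_name):
    """Look up the (text_column, summary_column) pair for a dataset name.

    Returns (None, None) if the dataset is not registered, so callers can
    fall back to user-supplied column names. (Hypothetical helper.)
    """
    return summarization_name_mapping.get(dataset_name, (None, None))

text_col, summary_col = get_column_names("ccdv/govreport-summarization")
print(text_col, summary_col)  # report summary
```

With the mapping entry in place, the script can resolve which columns hold the source document and the target summary without extra command-line arguments.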
### Data Fields
- `id`: report id
- `report`: a string containing the body of the report
- `summary`: a string containing the summary of the report
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_. \
Token counts are computed with a RoBERTa tokenizer.
| Dataset Split | Number of Instances | Avg. tokens (report / summary) |
| ------------- | ------------------- | ------------------------------ |
| Train         | 17,517              | < 9,000 / < 500                |
| Validation    | 973                 | < 9,000 / < 500                |
| Test          | 973                 | < 9,000 / < 500                |
### Cite original article
```
@misc{huang2021efficient,
title={Efficient Attentions for Long Document Summarization},
author={Luyang Huang and Shuyang Cao and Nikolaus Parulian and Heng Ji and Lu Wang},
year={2021},
eprint={2104.02112},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```