---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- mit
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: DocPrompting-CoNaLa
tags:
- code-generation
- doc retrieval
- retrieval augmented generation
---

## Dataset Description
- **Repository:** https://github.com/shuyanzhou/docprompting
- **Paper:** [DocPrompting: Generating Code by Retrieving the Docs](https://arxiv.org/pdf/2207.05987.pdf)

### Dataset Summary
This is a re-split of the [CoNaLa](https://conala-corpus.github.io/) dataset.
For each code snippet in the dev and test sets, at least one function is held out from the training set.
This split aims at testing a code generation model's capacity to generate *unseen* functions.
We further make sure that examples from the same StackOverflow post (the same `question_id` before `-`) are in the same split; the sketch below checks this property.

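To make the post-level separation concrete, here is a minimal sketch that verifies no StackOverflow post contributes examples to more than one split. It only assumes the `question_id` format described under Data Fields below (a string of the form `x-y`):

```python
from datasets import load_dataset

dataset = load_dataset("neulab/docprompting-conala")

# The prefix of `question_id` before '-' is the StackOverflow post ID.
post_ids = {
    split: {ex["question_id"].split("-")[0] for ex in dataset[split]}
    for split in ("train", "validation", "test")
}

# Each post's examples should live in exactly one split.
assert not post_ids["train"] & post_ids["test"]
assert not post_ids["train"] & post_ids["validation"]
assert not post_ids["validation"] & post_ids["test"]
```
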
### Supported Tasks and Leaderboards
This dataset is used to evaluate code generation models.

### Languages
English natural language intents, paired with Python code.

## Dataset Structure
```python
dataset = load_dataset("neulab/docprompting-conala")
DatasetDict({
    train: Dataset({
        features: ['nl', 'cmd', 'question_id', 'cmd_name', 'oracle_man', 'canonical_cmd'],
        num_rows: 2135
    })
    test: Dataset({
        features: ['nl', 'cmd', 'question_id', 'cmd_name', 'oracle_man', 'canonical_cmd'],
        num_rows: 543
    })
    validation: Dataset({
        features: ['nl', 'cmd', 'question_id', 'cmd_name', 'oracle_man', 'canonical_cmd'],
        num_rows: 201
    })
})

code_docs = load_dataset("neulab/docprompting-conala", "docs")
DatasetDict({
    train: Dataset({
        features: ['doc_id', 'doc_content'],
        num_rows: 34003
    })
})
```

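The `docs` configuration acts as a lookup table from `doc_id` to documentation text. A minimal sketch of joining an example with its oracle docs, assuming `oracle_man` holds a list of `doc_id` strings as the field description below suggests:

```python
from datasets import load_dataset

dataset = load_dataset("neulab/docprompting-conala")
code_docs = load_dataset("neulab/docprompting-conala", "docs")

# Map each doc_id to its documentation text.
doc_lookup = {d["doc_id"]: d["doc_content"] for d in code_docs["train"]}

example = dataset["test"][0]
print(example["nl"])   # natural language intent
print(example["cmd"])  # reference code snippet

# Documentation of the functions used in the reference snippet.
oracle_docs = [doc_lookup[i] for i in example["oracle_man"]]
```
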
### Data Fields
train/dev/test:
- nl: The natural language intent
- cmd: The reference code snippet
- question_id: `x-y` where `x` is the StackOverflow post ID
- oracle_man: The `doc_id`s of the functions used in the reference code snippet. The corresponding contents are in the `docs` configuration
- canonical_cmd: The canonical version of the reference code snippet

docs:
- doc_id: the id of a doc
- doc_content: the content of the doc

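For retrieval-augmented generation, the natural use of these fields is to prepend the retrieved (or oracle) docs to the intent before prompting a generator. A hedged sketch, reusing `doc_lookup` and `dataset` from above; the prompt format and the `max_docs` cutoff are illustrative, not the setup from the paper:

```python
def build_prompt(example, doc_lookup, max_docs=3):
    """Illustrative doc-augmented prompt: retrieved docs, then the intent."""
    docs = [doc_lookup[i] for i in example["oracle_man"][:max_docs]]
    context = "\n\n".join(docs)
    return f"{context}\n\n# Intent: {example['nl']}\n# Code:"

prompt = build_prompt(dataset["train"][0], doc_lookup)
```
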
## Dataset Creation
The dataset was crawled from Stack Overflow, automatically filtered, then curated by annotators. For more details, please refer to the original [paper](https://arxiv.org/pdf/1805.08949.pdf).

### Citation Information

```
@article{zhou2022doccoder,
    title={DocCoder: Generating Code by Retrieving and Reading Docs},
    author={Zhou, Shuyan and Alon, Uri and Xu, Frank F and Jiang, Zhengbao and Neubig, Graham},
    journal={arXiv preprint arXiv:2207.05987},
    year={2022}
}
```