---
dataset_info:
  features:
  - name: commit
    dtype: string
  - name: old_file
    dtype: string
  - name: new_file
    dtype: string
  - name: old_contents
    dtype: string
  - name: new_contents
    dtype: string
  - name: subject
    dtype: string
  - name: message
    dtype: string
  - name: lang
    dtype: string
  - name: license
    dtype: string
  - name: repos
    dtype: string
  - name: ndiff
    dtype: string
  - name: instruction
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 113752028
    num_examples: 22602
  download_size: 48124127
  dataset_size: 113752028
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-generation
tags:
- code
license: mit
pretty_name: CanItEdit
language:
- code
---
# EditPackFT
EditPackFT is a dataset built for training LLMs on the task of instructional code editing. The main columns are:
1. `old_contents`: the code before the edit
2. `instruction`: the instruction for transforming the old code into the new code
3. `new_contents`: the code after the edit
4. `content`: a pre-formatted training window that can be used to train an LLM, with prompts in the format `<before><instruction><after>` (see the sketch after this list)
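This card does not spell out the exact delimiter text used inside the pre-formatted `content` column; the sketch below shows one plausible way such a window could be assembled from the raw columns, with the section headers being illustrative assumptions rather than the dataset's actual format.
```python
# Hypothetical sketch: build a <before><instruction><after> window from the raw
# columns. The header strings are assumptions for illustration; in practice the
# `content` column already ships a pre-formatted window.
def format_window(example: dict) -> str:
    return (
        "## Code Before:\n"
        f"{example['old_contents']}\n"
        "## Instruction:\n"
        f"{example['instruction']}\n"
        "## Code After:\n"
        f"{example['new_contents']}\n"
    )
```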
This dataset has been filtered from CommitPackFT. For more details, see [our paper](https://arxiv.org/abs/2312.12450) and our [GitHub repository](https://github.com/nuprl/CanItEdit/tree/main/editpackft).
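For reference, a minimal loading sketch with the Hugging Face `datasets` library; the hub id `nuprl/EditPackFT` is an assumption based on the repository name and may differ from the actual id.
```python
from datasets import load_dataset

# Hub id assumed from the repository name; adjust if the dataset is published
# under a different id.
ds = load_dataset("nuprl/EditPackFT", split="train")

example = ds[0]
print(example["instruction"])         # natural-language edit instruction
print(example["old_contents"][:200])  # code before the edit (truncated)
print(example["content"][:200])       # pre-formatted training window (truncated)
```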
## Citation
If you use our work, please cite our paper as follows:
```
@inproceedings{cassano2023edit,
title={{Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions}},
author={Federico Cassano and Luisa Li and Akul Sethi and Noah Shinn and Abby Brennan-Jones and Anton Lozhkov and Carolyn Jane Anderson and Arjun Guha},
booktitle={The First International Workshop on Large Language Model for Code},
year={2024},
url={https://arxiv.org/abs/2312.12450}
}
```