---
language:
- code
- en
multilinguality:
- multiprogramming languages
task_categories:
- text-generation
license: mit
dataset_info:
  features:
  - name: hexsha
    dtype: string
  - name: repo
    dtype: string
  - name: path
    dtype: string
  - name: license
    sequence: string
  - name: language
    dtype: string
  - name: identifier
    dtype: string
  - name: code
    dtype: string
  - name: code_tokens
    sequence: string
  - name: original_comment
    dtype: string
  - name: comment
    dtype: string
  - name: comment_tokens
    sequence: string
  - name: start_point
    sequence: int32
  - name: end_point
    sequence: int32
  - name: prev_context
    struct:
    - name: code
      dtype: string
    - name: start_point
      sequence: int32
    - name: end_point
      sequence: int32
  - name: next_context
    struct:
    - name: code
      dtype: string
    - name: start_point
      sequence: int32
    - name: end_point
      sequence: int32
pretty_name: The Vault Inline
viewer: false
---
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Dataset Statistics
- Usage
- Additional Information
## Dataset Description
- Repository: [FSoft-AI4Code/TheVault](https://github.com/FSoft-AI4Code/TheVault)
- Paper: [The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation](https://arxiv.org/abs/2305.06156)
- Contact: [email protected]
- Website: https://www.fpt-aicenter.com/ai-residency/
## Dataset Summary
The Vault is a comprehensive, large-scale, multilingual parallel dataset of high-quality code-text pairs derived from [The Stack](https://huggingface.co/datasets/bigcode/the-stack), the largest permissively-licensed source code dataset.
The Vault contains code snippets from 10 popular programming languages: Java, JavaScript, Python, Ruby, Rust, Golang, C#, C++, C, and PHP. It provides multiple code-snippet levels, rich metadata, and 11 docstring styles for enhanced usability and versatility.
## Supported Tasks
The Vault can be used for pretraining LLMs or for downstream code-text interaction tasks. A number of tasks related to code understanding and generation can be constructed from The Vault, such as code summarization, text-to-code generation, and code search; a sketch of building summarization pairs follows.
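For illustration, here is a minimal sketch of turning inline samples into code-summarization pairs. It relies only on the `code` and `comment` fields documented under Data Fields below; the field mapping and any model-specific formatting are the user's choice, not something prescribed by the dataset.

```python
from datasets import load_dataset

# Minimal sketch: derive (code, summary) pairs for code summarization.
# Assumes the `code` and `comment` fields described in the Data Fields section.
data = load_dataset("Fsoft-AIC/the-vault-inline", streaming=True)

def to_summarization_pair(sample):
    # Input is the raw source code; target is the cleaned natural-language comment.
    return {"input": sample["code"], "target": sample["comment"]}

pairs = data["train"].map(to_summarization_pair)
```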
## Languages
The natural language text (docstring) is in English.
10 programming languages are supported in The Vault: Python, Java, JavaScript, PHP, C, C#, C++, Go, Ruby, and Rust.
## Dataset Structure
### Data Instances
```json
{
    "hexsha": "ee1cf38808d3db0ea364b049509a01a65e6e5589",
    "repo": "Waguy02/Boomer-Scripted",
    "path": "python/subprojects/testbed/mlrl/testbed/persistence.py",
    "license": [
        "MIT"
    ],
    "language": "Python",
    "identifier": "__init__",
    "code": "def __init__(self, model_dir: str):\n \"\"\"\n :param model_dir: The path of the directory where models should be saved\n \"\"\"\n self.model_dir = model_dir",
    "code_tokens": [
        "def",
        "__init__",
        "(",
        "self",
        ",",
        "model_dir",
        ":",
        "str",
        ")",
        ":",
        "\"\"\"\n :param model_dir: The path of the directory where models should be saved\n \"\"\"",
        "self",
        ".",
        "model_dir",
        "=",
        "model_dir"
    ],
    "original_comment": "\"\"\"\n :param model_dir: The path of the directory where models should be saved\n \"\"\"",
    "comment": ":param model_dir: The path of the directory where models should be saved",
    "comment_tokens": [
        ":",
        "param",
        "model_dir",
        ":",
        "The",
        "path",
        "of",
        "the",
        "directory",
        "where",
        "models",
        "should",
        "be",
        "saved"
    ],
    "start_point": [
        1,
        8
    ],
    "end_point": [
        3,
        11
    ],
    "prev_context": {
        "code": null,
        "start_point": null,
        "end_point": null
    },
    "next_context": {
        "code": "self.model_dir = model_dir",
        "start_point": [
            4,
            8
        ],
        "end_point": [
            4,
            34
        ]
    }
}
```
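The `start_point` and `end_point` fields are (line, column) positions of the comment within the code, and `prev_context`/`next_context` hold the code blocks immediately before and after it (with `null` entries when no such block exists, as with `prev_context` above). Below is a hedged sketch of stitching a sample back into a readable snippet; the helper name is hypothetical, not part of the dataset or its tooling.

```python
def render_sample(sample):
    # Hypothetical helper: reassemble the comment with its surrounding code.
    parts = []
    if sample["prev_context"]["code"]:        # code before the comment, may be None
        parts.append(sample["prev_context"]["code"])
    parts.append(sample["original_comment"])  # the comment itself, verbatim
    if sample["next_context"]["code"]:        # code after the comment, may be None
        parts.append(sample["next_context"]["code"])
    return "\n".join(parts)
```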
### Data Fields
Data fields for inline level:

- hexsha (string): the unique git hash of the file
- repo (string): the owner/repo
- path (string): the full path to the original file
- license (list): licenses in the repo
- language (string): the programming language
- identifier (string): the function or method name
- code (string): the part of the original file that is code
- code_tokens (list): tokenized version of `code`
- original_comment (string): original text of the comment
- comment (string): clean version of the comment
- comment_tokens (list): tokenized version of `comment`
- start_point (list): start position of `original_comment` in `code`
- end_point (list): end position of `original_comment` in `code`
- prev_context (dict): block of code before `original_comment`
- next_context (dict): block of code after `original_comment`
### Data Splits
In this repo, the inline-level data is not split; only a train set is provided. If a held-out set is needed, one can be derived as sketched below.
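A minimal sketch using the standard `datasets` API; the language choice, split ratio, and seed here are arbitrary illustrations:

```python
from datasets import load_dataset

# Only a train split ships with the inline level; carve out a small eval set.
dataset = load_dataset("Fsoft-AIC/the-vault-inline", languages=['Ruby'])
splits = dataset["train"].train_test_split(test_size=0.05, seed=42)
train_set, eval_set = splits["train"], splits["test"]
```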
## Dataset Statistics
| Language   | Number of inline comments |
|:-----------|--------------------------:|
| Python     |                14,013,238 |
| Java       |                17,062,277 |
| JavaScript |                 1,438,110 |
| PHP        |                 5,873,744 |
| C          |                 6,778,239 |
| C#         |                 6,274,389 |
| C++        |                10,343,650 |
| Go         |                 4,390,342 |
| Ruby       |                   767,563 |
| Rust       |                 2,063,784 |
| TOTAL      |                69,005,336 |
## Usage
You can load The Vault dataset using the `datasets` library:

```bash
pip install datasets
```

```python
from datasets import load_dataset

# Load the full inline level dataset (69M samples)
dataset = load_dataset("Fsoft-AIC/the-vault-inline")

# Load a specific language subset (e.g. Python)
dataset = load_dataset("Fsoft-AIC/the-vault-inline", languages=['Python'])

# Stream the dataset instead of downloading it in full
data = load_dataset("Fsoft-AIC/the-vault-inline", streaming=True)
for sample in iter(data['train']):
    print(sample)
```
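When streaming, a few samples can be inspected without materializing the whole split; this is standard `datasets` usage rather than anything specific to The Vault:

```python
from itertools import islice
from datasets import load_dataset

data = load_dataset("Fsoft-AIC/the-vault-inline", streaming=True)

# Peek at the first three samples only.
for sample in islice(data["train"], 3):
    print(sample["language"], sample["identifier"], sample["comment"][:80])
```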
A backup of the dataset can be downloaded from Azure blob storage; see Download The Vault from Azure blob storage.
## Additional Information

### Licensing Information
MIT License
### Citation Information
```bibtex
@article{manh2023vault,
  title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
  author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
  journal={arXiv preprint arXiv:2305.06156},
  year={2023}
}
```
### Contributions

This dataset is developed by the FSoft AI4Code team.