---
language:
- code
- en
multilinguality:
- multiprogramming languages
task_categories:
- text-generation
license: mit
dataset_info:
  features:
  - name: identifier
    dtype: string
  - name: repo
    dtype: string
  - name: path
    dtype: string
  - name: language
    dtype: string
  - name: code
    dtype: string
  - name: code_tokens
    dtype: string
  - name: original_docstring
    dtype: string
  - name: comment
    dtype: string
  - name: docstring_tokens
    dtype: string
  - name: docstring
    dtype: string
  - name: original_string
    dtype: string
pretty_name: The Vault Class
viewer: true
---

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Statistics](#dataset-statistics)
- [Usage](#usage)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** [FSoft-AI4Code/TheVault](https://github.com/FSoft-AI4Code/TheVault)
- **Paper:** [The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation](https://arxiv.org/abs/2305.06156)
- **Contact:** support.ailab@fpt.com
- **Website:** https://www.fpt-aicenter.com/ai-residency/


# The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation
## Dataset Summary
The Vault is a comprehensive, large-scale, multilingual parallel dataset of high-quality code-text pairs derived from The Stack, the largest permissively-licensed source code dataset. The Vault contains code snippets from 10 popular programming languages: Java, JavaScript, Python, Ruby, Rust, Golang, C#, C++, C, and PHP. The dataset provides multiple code-snippet levels, metadata, and 11 docstring styles for enhanced usability and versatility.

## Supported Tasks
The Vault can be used for pretraining LLMs or for downstream code-text interaction tasks. A number of tasks related to code understanding and generation can be constructed from The Vault, such as *code summarization*, *text-to-code generation*, and *code search*.

## Languages
The natural language text (docstring) is in English. 10 programming languages are supported in The Vault: `Python`, `Java`, `JavaScript`, `PHP`, `C`, `C#`, `C++`, `Go`, `Ruby`, `Rust`.

*Note: C and Go are not included in this repo because these languages have no traditional classes.*

## Dataset Structure
### Data Instances
```
{
    "hexsha": "78b961a6673ec1e12f8d95c33ef081f75561a87c",
    "repo": "AIS-Bonn/sl-cutscenes",
    "path": "sl_cutscenes/object_models.py",
    "license": ["MIT"],
    "language": "Python",
    "identifier": "MeshLoader",
    "original_docstring": "\n Class to load the meshes for the objects in a scene.\n ",
    "docstring": "Class to load the meshes for the objects in a scene.",
    "docstring_tokens": ["Class", "to", "load", "the", "meshes", "for", "the", "objects", "in", "a", "scene", "."],
    "code": "class MeshLoader:\n \"\"\"\n Class to load the meshes for the objects in a scene.\n \"\"\"\n\n def __init__(self):\n \"\"\"Module initializer\"\"\"\n self.base_dir = CONSTANTS.MESH_BASE_DIR\n self.text_dir = CONSTANTS.TEXT_BASE_DIR\n self.reset()\n\n def reset(self):\n self.loaded_meshes = []\n\n def get_meshes(self):\n \"\"\" \"\"\"\n extract_singular = lambda x: x[0] if len(x) == 1 else x\n return [extract_singular(item) for item in self.loaded_meshes]\n\n def load_meshes(self, obj_info: List[object_info.ObjectInfo], **kwargs):\n \"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"\n paths = []\n for obj in obj_info:\n path = self.text_dir if obj.name.endswith(\"_floor\") or obj.name.endswith(\"_wall\") else self.base_dir\n paths.append((path / obj.mesh_fp).resolve())\n scales = [obj.scale for obj in obj_info]\n class_ids = [obj.class_id for obj in obj_info]\n mod_scales = kwargs.get(\"mod_scale\", [1.0] * len(scales))\n scales = [s * ms for (s, ms) in zip(scales, mod_scales)]\n flags = [mesh_flags(obj) for obj in obj_info]\n meshes = sl.Mesh.load_threaded(filenames=paths, flags=flags)\n\n # Setup class IDs\n for _, (mesh, scale, class_id) in enumerate(zip(meshes, scales, class_ids)):\n pt = torch.eye(4)\n pt[:3, :3] *= scale\n mesh.pretransform = pt\n mesh.class_index = class_id\n\n info_mesh_tuples = list(zip(obj_info, meshes))\n self.loaded_meshes.append(info_mesh_tuples)",
    "code_tokens": ["class", "MeshLoader", ":", "def", "__init__", "(", "self", ")", ":", "\"\"\"Module initializer\"\"\"",
        "self", ".", "base_dir", "=", "CONSTANTS", ".", "MESH_BASE_DIR", "self", ".", "text_dir", "=", "CONSTANTS", ".",
        "TEXT_BASE_DIR", "self", ".", "reset", "(", ")", "def", "reset", "(", "self", ")", ":", "self", ".",
        "loaded_meshes", "=", "[", "]", "def", "get_meshes", "(", "self", ")", ":", "\"\"\" \"\"\"", "extract_singular",
        "=", "lambda", "x", ":", "x", "[", "0", "]", "if", "len", "(", "x", ")", "==", "1", "else", "x", "return", "[",
        "extract_singular", "(", "item", ")", "for", "item", "in", "self", ".", "loaded_meshes", "]", "def",
        "load_meshes", "(", "self", ",", "obj_info", ":", "List", "[", "object_info", ".", "ObjectInfo", "]", ",", "**",
        "kwargs", ")", ":", "\"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"",
        "paths", "=", "[", "]", "for", "obj", "in", "obj_info", ":", "path", "=", "self", ".", "text_dir", "if", "obj",
        ".", "name", ".", "endswith", "(", "\"_floor\"", ")", "or", "obj", ".", "name", ".", "endswith", "(", "\"_wall\"",
        ")", "else", "self", ".", "base_dir", "paths", ".", "append", "(", "(", "path", "/", "obj", ".", "mesh_fp", ")",
        ".", "resolve", "(", ")", ")", "scales", "=", "[", "obj", ".", "scale", "for", "obj", "in", "obj_info", "]",
        "class_ids", "=", "[", "obj", ".", "class_id", "for", "obj", "in", "obj_info", "]", "mod_scales", "=", "kwargs",
        ".", "get", "(", "\"mod_scale\"", ",", "[", "1.0", "]", "*", "len", "(", "scales", ")", ")", "scales", "=", "[",
        "s", "*", "ms", "for", "(", "s", ",", "ms", ")", "in", "zip", "(", "scales", ",", "mod_scales", ")", "]",
        "flags", "=", "[", "mesh_flags", "(", "obj", ")", "for", "obj", "in", "obj_info", "]", "meshes", "=", "sl", ".",
        "Mesh", ".", "load_threaded", "(", "filenames", "=", "paths", ",", "flags", "=", "flags", ")", "for", "_", ",",
        "(", "mesh", ",", "scale", ",", "class_id", ")", "in", "enumerate", "(", "zip", "(", "meshes", ",", "scales",
        ",", "class_ids", ")", ")", ":", "pt", "=", "torch", ".", "eye", "(", "4", ")", "pt", "[", ":", "3", ",", ":",
        "3", "]", "*=", "scale", "mesh", ".", "pretransform", "=", "pt", "mesh", ".", "class_index", "=", "class_id",
        "info_mesh_tuples", "=", "list", "(", "zip", "(", "obj_info", ",", "meshes", ")", ")", "self", ".",
        "loaded_meshes", ".", "append", "(", "info_mesh_tuples", ")"],
    "short_docstring": "Class to load the meshes for the objects in a scene.",
    "short_docstring_tokens": ["Class", "to", "load", "the", "meshes", "for", "the", "objects", "in", "a", "scene", "."],
    "comment": ["\"\"\"\n Class to load the meshes for the objects in a scene.\n \"\"\"", "\"\"\"Module initializer\"\"\"", "\"\"\" \"\"\"", "\"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"", "# Setup class IDs"],
    "parameters": [],
    "docstring_params": {
        "returns": [],
        "raises": [],
        "params": [],
        "outlier_params": [],
        "others": []
    }
}
```
], "comment": [ "\"\"\"\n Class to load the meshes for the objects in a scene.\n \"\"\"", "\"\"\"Module initializer\"\"\"", "\"\"\" \"\"\"", "\"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"", "# Setup class IDs" ], "parameters": [], "docstring_params": { "returns": [], "raises": [], "params": [], "outlier_params": [], "others": [] } } ``` ### Data Fields Data fields for function level: - **hexsha** (string): the unique git hash of file - **repo** (string): the owner/repo - **path** (string): the full path to the original file - **license** (list): licenses in the repo - **language** (string): the programming language - **identifier** (string): the function or method name - **original_string** (string): original version of function/class node - **original_docstring** (string): the raw string before tokenization or parsing - **code** (string): the part of the original that is code - **code_tokens** (list): tokenized version of `code` - **short_docstring** (string): short, brief summarization (first line of the docstring) - **short_docstring_tokens** (list): tokenized version of `short_docstring - **docstring** (string): the top-level comment or docstring (docstring version without param’s doc, return, exception fields, etc) - **docstring_tokens** (list): tokenized version of docstring - **comment** (list): list of comments (line) inside the function/class - **parameters** (list): List of parameters and its type (type can be None) - **docstring_params** (dict): Dictionary of the parsed information from docstring See [here](https://github.com/FSoft-AI4Code/TheVault/blob/main/data/README.md) for more details and examples. ### Data Splits In this repo, the class level data is not split, and contained in only train set. ## Dataset Statistics |Language | Number of samples | |:-----------|------------------------:| |Python | 422,187 | |Java | 4,872,485 | |JavaScript | 291,479 | |PHP | 1,173,916 | |C# | 1,437,800 | |C++ | 174,370 | |Ruby | 353,859 | |Rust | 93,311 | |C | - | |Go | - | |TOTAL | **9,121,300** | ## Usage You can load The Vault dataset using datasets library: ```pip install datasets``` ```python from datasets import load_dataset # Load full class level dataset dataset = load_dataset("Fsoft-AIC/the-vault-class") # specific language (e.g. Python) dataset = load_dataset("Fsoft-AIC/the-vault-class", languages=['Python']) # dataset streaming data = load_dataset("Fsoft-AIC/the-vault-class", streaming= True) for sample in iter(data['train']): print(sample) ``` A back up dataset can be downloaded in azure storage. See [Download The Vault from Azure blob storage](https://github.com/FSoft-AI4Code/TheVault#download-via-link). ## Additional information ### Licensing Information MIT License ### Citation Information ``` @article{manh2023vault, title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation}, author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ}, journal={arXiv preprint arXiv:2305.06156}, year={2023} } ``` ### Contributions This dataset is developed by [FSOFT AI4Code team](https://github.com/FSoft-AI4Code).