---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: repo
    dtype: string
  - name: file
    dtype: string
  - name: code
    dtype: string
  - name: file_length
    dtype: int64
  - name: avg_line_length
    dtype: float64
  - name: max_line_length
    dtype: int64
  - name: extension_type
    dtype: string
  splits:
  - name: train
    num_bytes: 3590067176.125193
    num_examples: 391496
  download_size: 1490724325
  dataset_size: 3590067176.125193
---
# Dataset Card for "AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code"

## Dataset Description

https://huggingface.co/datasets/AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code

### Dataset Summary
AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code contains over 1.49GB of source code files drawn strictly from repositories referenced in ArXiv papers. It serves as a curated corpus for training and evaluating code LLMs.
## How to use it

```python
from datasets import load_dataset

# full dataset (1.49GB of data)
ds = load_dataset("AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code", split="train")

# dataset streaming (will only download the data as needed)
ds = load_dataset("AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code", streaming=True, split="train")
for sample in ds:
    print(sample["code"])
```
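Because every record carries line-length metadata, streamed samples can be pre-filtered before any training run. A minimal sketch, assuming an illustrative 120-character threshold that is not part of the dataset card:

```python
from itertools import islice

from datasets import load_dataset

# Stream the dataset and keep only files with modest line lengths.
# The 120-character threshold is an arbitrary choice for illustration.
ds = load_dataset(
    "AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code",
    streaming=True,
    split="train",
)
short_lines = ds.filter(lambda sample: sample["max_line_length"] <= 120)

# Peek at the first few matches without downloading the full dataset.
for sample in islice(short_lines, 3):
    print(sample["repo"], sample["file"], sample["max_line_length"])
```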
## Dataset Structure

### Data Instances
Each data instance corresponds to one file. The content of the file is in the `code` feature, and the other features (`repo`, `file`, etc.) provide metadata.
### Data Fields

- `repo` (string): code repository name.
- `file` (string): file path in the repository.
- `code` (string): code within the file.
- `file_length` (integer): number of characters in the file.
- `avg_line_length` (float): average line length in the file.
- `max_line_length` (integer): maximum line length in the file.
- `extension_type` (string): file extension.
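The fields can be read straight off a streamed sample; the snippet below is a small sketch that prints each metadata field next to the start of the file contents.

```python
from datasets import load_dataset

ds = load_dataset(
    "AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code",
    streaming=True,
    split="train",
)
sample = next(iter(ds))

# Every metadata field sits alongside the raw source in `code`.
for field in ("repo", "file", "file_length", "avg_line_length",
              "max_line_length", "extension_type"):
    print(f"{field}: {sample[field]}")
print(sample["code"][:200])  # first 200 characters of the file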
### Data Splits

The dataset has no separate splits; all data is loaded in a single `train` split by default.
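If a held-out set is needed, one can be carved out locally with the standard `datasets` API; the 5% ratio and seed below are arbitrary choices, not part of the dataset.

```python
from datasets import load_dataset

ds = load_dataset(
    "AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code",
    split="train",
)

# Carve a 5% validation set out of the single train split.
splits = ds.train_test_split(test_size=0.05, seed=42)
train_ds, val_ds = splits["train"], splits["test"]
print(len(train_ds), len(val_ds))
```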
## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization
34,099 active GitHub repository names were extracted from ArXiv papers published from the site's inception through July 21st, 2023, totaling 773GB of compressed GitHub repositories. These repositories were then filtered, and every file that mentions one of ["torch", "jax", "flax", "stax", "haiku", "keras", "fastai", "xgboost", "caffe", "mxnet"] was extracted, yielding 1.4 million files.
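The exact filtering pipeline is not published; the following is a hypothetical reconstruction of the framework-mention filter described above, and all function names here are assumptions, not the curators' code.

```python
from pathlib import Path

# Frameworks listed in the dataset card; a file is kept if it mentions any.
FRAMEWORKS = ["torch", "jax", "flax", "stax", "haiku",
              "keras", "fastai", "xgboost", "caffe", "mxnet"]

def mentions_framework(source: str) -> bool:
    """Return True if the file text mentions any target framework.

    Plain substring matching is a simplification; e.g. "jax" would also
    match "ajax". The real pipeline may have matched more carefully.
    """
    lowered = source.lower()
    return any(name in lowered for name in FRAMEWORKS)

def collect_matching_files(repo_root: str) -> list[Path]:
    """Walk a cloned repository and keep files that pass the filter.

    Restricting to *.py is an assumption based on the dataset's focus
    on Python research code.
    """
    matches = []
    for path in Path(repo_root).rglob("*.py"):
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        if mentions_framework(text):
            matches.append(path)
    return matches
```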
#### Who are the source language producers?

The source (code) language producers are the users of GitHub who created the original repositories.
### Personal and Sensitive Information

The released dataset may contain sensitive information such as emails, IP addresses, and API/SSH keys that have previously been published to public repositories on GitHub.
## Additional Information

### Dataset Curators

Matthew Kenney, AlgorithmicResearchGroup, [email protected]
### Citation Information

```bibtex
@misc{arxiv_deep_learning_python_research_code,
    title={arxiv_deep_learning_python_research_code},
    author={Matthew Kenney},
    year={2023}
}
```