---
dataset_info:
  features:
    - name: repo
      dtype: string
    - name: file
      dtype: string
    - name: code
      dtype: string
    - name: file_length
      dtype: int64
    - name: avg_line_length
      dtype: float64
    - name: max_line_length
      dtype: int64
    - name: extension_type
      dtype: string
  splits:
    - name: train
      num_bytes: 63445188751
      num_examples: 4716175
  download_size: 21776760509
  dataset_size: 63445188751
license: mit
task_categories:
  - text-generation
language:
  - en
pretty_name: arxiv_research_code
size_categories:
  - 10B<n<100B
---

# Dataset Card for "AlgorithmicResearchGroup/arxiv_research_code"

## Dataset Description

https://huggingface.co/datasets/AlgorithmicResearchGroup/arxiv_research_code

### Dataset Summary

AlgorithmicResearchGroup/arxiv_research_code contains 21.8 GB (63.4 GB uncompressed) of source code files from repositories referenced in ArXiv papers. The dataset is intended as a curated corpus for training code LLMs.

### How to use it

```python
from datasets import load_dataset

# full dataset (21.8 GB download)
ds = load_dataset("AlgorithmicResearchGroup/arxiv_research_code", split="train")

# dataset streaming (will only download the data as needed)
ds = load_dataset("AlgorithmicResearchGroup/arxiv_research_code", streaming=True, split="train")
for sample in ds:
    print(sample["code"])
```
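
Streaming also composes with `filter`. A minimal sketch that keeps only Python files; the exact `extension_type` strings (e.g. `"py"` vs `".py"`) are an assumption worth checking against a few samples:

```python
from datasets import load_dataset

ds = load_dataset("AlgorithmicResearchGroup/arxiv_research_code", streaming=True, split="train")

# keep only Python files; the "py" value is assumed, inspect samples to confirm
py_files = ds.filter(lambda sample: sample["extension_type"] == "py")

for sample in py_files.take(5):
    print(sample["repo"], sample["file"])
```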

## Dataset Structure

### Data Instances

Each data instance corresponds to one file. The content of the file is in the `code` feature; the other features (`repo`, `file`, etc.) provide metadata about it.
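
For illustration, an instance looks roughly like this (all values below are invented, not drawn from the dataset):

```python
{
    "repo": "example-user/example-repo",  # hypothetical repository
    "file": "src/train.py",               # path within that repository
    "code": "import torch\n\ndef main():\n    ...",
    "file_length": 4096,
    "avg_line_length": 28.4,
    "max_line_length": 120,
    "extension_type": "py",
}
```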

### Data Fields

- `repo` (string): code repository name.
- `file` (string): file path within the repository.
- `code` (string): contents of the file.
- `file_length` (int64): number of characters in the file.
- `avg_line_length` (float64): average line length of the file.
- `max_line_length` (int64): maximum line length of the file.
- `extension_type` (string): file extension.
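
The three length fields can in principle be recomputed from `code` alone. A minimal sketch, assuming lines are split on newlines and lengths are counted in characters (the dataset's exact conventions, e.g. for trailing newlines, are not documented):

```python
def length_stats(code: str) -> dict:
    """Recompute per-file length metadata from raw file contents."""
    lines = code.splitlines() or [""]
    line_lengths = [len(line) for line in lines]
    return {
        "file_length": len(code),
        "avg_line_length": sum(line_lengths) / len(line_lengths),
        "max_line_length": max(line_lengths),
    }
```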

### Data Splits

The dataset has no predefined splits; all data is loaded as the `train` split by default.
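
If you need a held-out set, you can carve one out locally with `train_test_split`, which is standard `datasets` API (the 1% size and seed below are arbitrary choices, not part of the dataset):

```python
from datasets import load_dataset

ds = load_dataset("AlgorithmicResearchGroup/arxiv_research_code", split="train")
splits = ds.train_test_split(test_size=0.01, seed=42)
train_ds, val_ds = splits["train"], splits["test"]
```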

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

34,099 active GitHub repository names were extracted from ArXiv papers published from the site's inception through July 21st, 2023, totaling 773 GB of compressed GitHub repositories.

These repositories were then filtered, and the code was extracted from them into 4.7 million individual files.
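
The filtering criteria themselves are not documented in this card. For readers building similar pipelines, a hypothetical quality filter over the released metadata fields might look like the following; the thresholds are illustrative assumptions, not the curators' settings:

```python
from datasets import load_dataset

ds = load_dataset("AlgorithmicResearchGroup/arxiv_research_code", split="train")

def looks_like_real_code(sample):
    # Heuristics in the spirit of common code-corpus pipelines;
    # these exact thresholds are hypothetical.
    return (
        sample["file_length"] > 0
        and sample["avg_line_length"] < 100
        and sample["max_line_length"] < 1000
    )

filtered = ds.filter(looks_like_real_code)
```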

#### Who are the source language producers?

The source (code) language producers are the GitHub users who created the original repositories.

### Personal and Sensitive Information

The released dataset may contain sensitive information such as emails, IP addresses, and API/SSH keys that have previously been published to public repositories on GitHub.
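
If you train on this data, consider scrubbing obvious secrets first. A minimal, best-effort sketch using simple regexes (these patterns are illustrative and will miss many secret formats; dedicated scanners such as detect-secrets go further):

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def redact(code: str) -> str:
    """Replace emails and IPv4 addresses with placeholder tokens."""
    code = EMAIL_RE.sub("<EMAIL>", code)
    code = IPV4_RE.sub("<IP_ADDRESS>", code)
    return code
```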

## Additional Information

### Dataset Curators

Matthew Kenney, AlgorithmicResearchGroup, [email protected]

### Citation Information

```bibtex
@misc{arxiv_research_code,
    title={arxiv_research_code},
    author={Matthew Kenney},
    year={2023}
}
```