By using this data, you agree to comply with the original usage licenses of all sources contributing to MathPile. If the source data of this dataset is subject to a license more restrictive than CC BY-NC-SA 4.0, then this dataset conforms to that more stringent license; in all other cases, it is governed by the CC BY-NC-SA 4.0 license. Access to this dataset is granted automatically once you accept the license terms and complete the required fields in the access request form.



🔥Update:

  • [2024/01/06] We release the commercial-use version of MathPile, namely MathPile_Commercial.
  • [2024/01/06] We release a new version (v0.2, a cleaner version) of MathPile. It has been updated to the main branch (also the v0.2 branch). The main updates are as follows:
    • fixed a problem with the display of mathematical formulas in the Wikipedia subset, which was caused by the HTML-to-markdown conversion;
    • fixed unclosed caption parentheses in figure environments in the arXiv subset and improper macro command substitutions (as suggested in issue 1), as well as improper line wrapping in paragraphs.
    • If you would like to download the original MathPile, set the revision parameter to v0.1 (see the Python sketch after this list).
  • [2023/12/29] Thanks for your interest in our dataset. We strongly recommend that you complete all the information on the form when applying to facilitate our review process.
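
If you prefer Python to the command line for fetching a specific revision, a minimal sketch using huggingface_hub's snapshot_download is below; it assumes the huggingface_hub library is installed, and /your/path/ is a placeholder directory.

from huggingface_hub import snapshot_download

# Fetch the original v0.1 release; omit `revision` to get the current version.
snapshot_download(
    repo_id="GAIR/MathPile",
    repo_type="dataset",
    revision="v0.1",
    local_dir="/your/path/",  # placeholder: choose your own download directory
)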

Dataset Card for MathPile

We introduce MathPile, a diverse and high-quality math-centric corpus comprising about 9.5 billion tokens. Our work differs significantly from previous work in the following characteristics:

  • Math-centric: MathPile uniquely caters to the math domain, unlike general domain-focused corpora like Pile and RedPajama, or multilingual-focused ones like ROOTS and The Stack. While there are math-centric corpora, they are often either closed-source, like Google's Minerva and OpenAI's MathMix, or lack diversity, such as ProofPile and OpenWebMath.

  • Diversity: MathPile draws from a wide range of sources: Textbooks (including lecture notes), arXiv, Wikipedia, ProofWiki, StackExchange, and Web Pages. It encompasses mathematical content suitable for K-12, college, postgraduate levels, and math competitions. This diversity is a first, especially with our release of a significant collection of high-quality textbooks (~0.19B tokens).

  • High-Quality: We adhered to the principle of less is more, firmly believing in the supremacy of data quality over quantity, even in the pre-training phase. Our meticulous data collection and processing efforts included a complex suite of preprocessing, prefiltering, cleaning, filtering, and deduplication, ensuring the high quality of our corpus.

  • Data Documentation: To enhance transparency, we've extensively documented MathPile. This includes a dataset sheet (see Table 5 in our paper) and quality annotations for web-sourced documents, like language identification scores and symbol-to-word ratios. This gives users the flexibility to tailor the data to their needs. We've also performed data contamination detection to eliminate duplicates from benchmark test sets like MATH and MMLU-STEM (a generic sketch of such a check follows this list).
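
To make the contamination check concrete, here is a generic sketch of n-gram overlap detection against benchmark test sets. The exact matching rule and n-gram size used for MathPile are described in the paper, so the word-level 13-grams and any-overlap criterion below are assumptions.

def word_ngrams(text, n=13):
    # Split into words and collect all contiguous n-word windows.
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(document, test_set_texts, n=13):
    # Flag a document if it shares any n-gram with a benchmark test example.
    doc_grams = word_ngrams(document, n)
    return any(doc_grams & word_ngrams(t, n) for t in test_set_texts)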

Dataset Details

Refer to Appendix A in our paper for the MathPile Dataset Sheet.

How to download MathPile?

Currently, we recommend downloading the dataset locally from the command line (e.g., with huggingface-cli) rather than via the Python function load_dataset("GAIR/MathPile") (due to a possible network issue), unpacking the .gz files, and then loading the .jsonl files. Some commands that might be helpful are as follows:

$ huggingface-cli download --resume-download --repo-type dataset GAIR/MathPile --local-dir /your/path/ --local-dir-use-symlinks False

$ cd /your/path/
$ find . -type f -name "*.gz" -exec gzip -d {} \;
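
After unpacking, each .jsonl file contains one JSON document per line. A minimal loading sketch is shown below; the recursive glob over /your/path/ reflects an assumed directory layout.

import glob
import json

documents = []
for path in glob.glob("/your/path/**/*.jsonl", recursive=True):
    with open(path, encoding="utf-8") as f:
        for line in f:
            documents.append(json.loads(line))  # one document per line

print(f"loaded {len(documents)} documents")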

Later, we will also support loading the dataset via load_dataset("GAIR/MathPile"). Stay tuned.

Dataset Description

  • Curated by: GAIR Lab, SJTU
  • Funded by: GAIR Lab, SJTU
  • Language(s) (NLP): English
  • License: CC BY-NC-SA 4.0

Uses

Direct Use

To develop mathematical language models.

Out-of-Scope Use

This dataset may not be suitable for scenarios unrelated to mathematics or reasoning.

Dataset Structure

{
    "text": "...",
    "SubSet": "CommomCrawl" | "StackExchange" | "Textbooks" | "Wikipedia" | "ProofWiki" | "arXiv",
    "meta": {"language_detection_score": ..., "idx": ..., "contain_at_least_two_stop_words": ...}
}

Dataset Creation

Curation Rationale

To create a diverse and high-quality math-centric corpus, thereby enhancing the mathematical reasoning abilities of language models.

Source Data

Data Collection and Processing

We sourced data from Textbooks, lecture notes, arXiv, Wikipedia, ProofWiki, StackExchange, and Common Crawl. Throughout MathPile's development, we meticulously sourced and gathered data, applying a rigorous and math-specific pipeline. This pipeline encompasses stages such as preprocessing, prefiltering, language identification, cleaning and filtering, and deduplication, all aimed at maintaining the high quality of the corpus. Please see our paper for more details.

Annotations

We provide quality annotations (such as language identification scores and the ratio of symbols to words) for documents from Web pages (i.e., Common Crawl and Wikipedia). These annotations offer future researchers and developers the flexibility to filter the data according to their own criteria, tailoring it to their specific needs.
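
For example, a minimal filtering sketch over these annotations might look like the following; the 0.99 language-score threshold is an illustrative assumption, not a recommendation from the paper.

def keep(doc, min_lang_score=0.99):
    # Keep web-sourced documents whose quality annotations pass our criteria.
    meta = doc.get("meta", {})
    return (
        meta.get("language_detection_score", 1.0) >= min_lang_score
        and meta.get("contain_at_least_two_stop_words", True)
    )

filtered = [d for d in documents if keep(d)]  # `documents` from the loading sketch above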

Personal and Sensitive Information

The corpus may contain academic email addresses and author names, as seen in papers from sources like arXiv. However, we view this as justifiable and within acceptable bounds.

Bias, Risks, and Limitations

  • The decisions made during the data collection and processing phases might not always be optimal.
  • Some documents in MathPile may not always be of the highest quality. We are committed to continually refining and optimizing this corpus.

Recommendations

Users should be made aware of the risks, biases and limitations of the dataset.

Citation

If you find our work useful or use MathPile, please cite our paper:

@article{wang2023mathpile,
  title={Generative AI for Math: Part I -- MathPile: A Billion-Token-Scale Pretraining Corpus for Math},
  author={Wang, Zengzhi and Xia, Rui and Liu, Pengfei},
  journal={arXiv preprint arXiv:2312.17120},
  year={2023}
}

Dataset Card Authors

Zengzhi Wang

Dataset Card Contact

[email protected], [email protected]
