Comparison with DCLM-baseline

#1
by hankcs - opened

Thank you for sharing your work. The improvement over FineWeb is very significant! I have a follow-up question regarding comparison with DCLM-baseline, which is arguably the SoTA dataset and also outperforms FineWeb by a large margin. Would it be possible to benchmark your dataset against DCLM-baseline?

LLM360 org

Hi @hankcs ,

Thank you for taking the time to notice our work!

We’ve conducted a preliminary study using the DCLM-baseline (DCLMB) by training an 8B model on a subset of each dataset. DCLMB has shown solid performance, especially on MMLU. At around 800B tokens, the 8B model achieves a score of about 36 on MMLU using DCLMB, but it performs close to random on TxT360. On other benchmarks, such as BoolQ, our dataset performs similarly to DCLMB. (Please note that this setup differs from the experiment in our blog post, where we used an 8x8B MoE model.)

However, there are important differences in purpose and key choices between these datasets:

  • Dataset purpose: Our goal is to create a production-ready dataset that captures all information from the internet. In contrast, DCLMB sets a research baseline. As their data card notes, "DCLM-Baseline is not intended for training production-ready models or for specific domains such as code and math."

  • Filtering approach: We take a conservative approach, applying only rule-based filters to remove non-natural text (e.g., text with excessive symbols), while DCLMB uses a classifier trained on instruction-style datasets like OpenHermes and r/ExplainLikeImFive. As a result, TxT360 may include more diverse and long-tail content from the internet, while DCLMB leans more toward instruction-like or simpler language. We’re also interested in this direction and are investigating it further.
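To illustrate the kind of rule-based filter described above, here is a minimal sketch of a symbol-density heuristic. This is not TxT360's actual filtering rule set; the threshold and regex are illustrative assumptions only.

```python
import re

# Matches any character that is neither a word character nor whitespace.
# Illustrative only -- not the actual TxT360 filter rules.
SYMBOL_RE = re.compile(r"[^\w\s]", re.UNICODE)

def looks_like_natural_text(text: str, max_symbol_ratio: float = 0.2) -> bool:
    """Reject documents whose symbol density exceeds a threshold."""
    if not text:
        return False
    symbols = len(SYMBOL_RE.findall(text))
    return symbols / len(text) <= max_symbol_ratio

docs = ["This is a normal sentence.", "@@##$$%%^^&&**(()){}[]"]
kept = [d for d in docs if looks_like_natural_text(d)]
# Only the first document passes the symbol-ratio check.
```

A rule like this keeps long-tail natural text that a classifier trained on instruction-style data might discard, at the cost of admitting more noise.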

Given these differences, the two datasets aren’t fully comparable. For instance, it might be more meaningful to compare a subset of TxT360 to DCLMB after applying a similar classifier. If you’re deciding between the datasets, I recommend selecting based on your specific needs, or even consider mixing these datasets at an appropriate ratio.
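For the mixing suggestion above, one simple recipe is probabilistic interleaving: at each step, draw the next example from one source with a fixed probability. This is a hedged sketch under assumed iterable datasets, not a prescribed pipeline; the 70/30 ratio is an arbitrary placeholder.

```python
import random

def mix_datasets(a, b, ratio_a=0.7, seed=0):
    """Yield examples from iterables a and b, picking a with probability
    ratio_a at each step. Stops when either source is exhausted."""
    rng = random.Random(seed)
    it_a, it_b = iter(a), iter(b)
    while True:
        src = it_a if rng.random() < ratio_a else it_b
        try:
            yield next(src)
        except StopIteration:
            return

# Toy usage: mix two labeled streams at roughly 70/30.
mixed = list(mix_datasets(["txt360"] * 100, ["dclmb"] * 100, ratio_a=0.7))
```

Libraries such as Hugging Face `datasets` offer a similar built-in (`interleave_datasets` with `probabilities`), which may be preferable for real corpora.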

Best,
Hector L

Thank you, Hector, for the detailed response. Yes, it makes sense that TxT360 was built for practical training purposes.

hankcs changed discussion status to closed
