Dataset description

Number of articles in the English Wikipedia, per dump year:

  • 2014: 4,599,592
  • 2016: 5,144,403
  • 2018: 5,599,764
  • 2020: 6,037,287
  • 2022: 6,291,973
  • 2024: 6,629,861

With 8k documents per gzipped JSON file, we get approx. 800 files per year, so approx. 3,500 files in total. Each bin of 8k examples is approx. 190-200 MB uncompressed, or approx. 63 MB gzipped.
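
As a quick sanity check, the per-year file count follows from dividing the article count by the shard size. A minimal sketch; the helper below is illustrative and not part of the dataset tooling:

```python
import math

def estimated_shards(num_docs: int, docs_per_shard: int = 8_000) -> int:
    """Rough shard count for one dump, at 8k documents per gzipped JSON file."""
    return math.ceil(num_docs / docs_per_shard)

# e.g. the 2024 English dump:
print(estimated_shards(6_629_861))  # -> 829, i.e. approx. 800 files for that year
```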

Dataset sizes

Several configurations of different sizes are available; each is a subset of the full set. A loading example follows the table below.

| Config | Train                  | Validation          | Estimated size |
|--------|------------------------|---------------------|----------------|
| tiny   | 16k docs (2 shards)    | 8k docs (1 shard)   | 0.1 GB         |
| small  | 800k docs (100 shards) | 16k docs (2 shards) | 4 GB           |
| medium | 6M docs (750 shards)   | 16k docs (2 shards) | 30 GB          |
| large  | 12M docs (1500 shards) | 24k docs (3 shards) | 59 GB          |
| full   | 28M docs (3497 shards) | 32k docs (4 shards) | 137 GB         |
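
A minimal loading sketch using the standard `datasets` API. The repo id below is a placeholder (substitute the actual repository), and `trust_remote_code=True` is only needed while the repo ships a loading script:

```python
from datasets import load_dataset

# "user/wikipedia-dataset" is a placeholder -- substitute the actual repo id.
ds = load_dataset(
    "user/wikipedia-dataset",
    name="tiny",              # any of: tiny, small, medium, large, full
    trust_remote_code=True,   # only needed while the repo ships a loading script
)

print(ds["train"].num_rows)       # 16k docs in the tiny train split
print(ds["validation"].num_rows)  # 8k docs in the tiny validation split
```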