Unable to download the openwebtext dataset
Thank you. I am running into the same problem; I have tried many of the suggested solutions, but none of them work.
```
Traceback (most recent call last):
  File "/home/lizhuoran/mydata/data_new/diffusion-llm/Score-Entropy-Discrete-Diffusion-main/load_data.py", line 4, in <module>
    dataset = load_dataset(name, cache_dir=cache_dir, download_mode="force_redownload")
  File "/home/lizhuoran/anaconda3/envs/sedd/lib/python3.9/site-packages/datasets/load.py", line 2074, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/lizhuoran/anaconda3/envs/sedd/lib/python3.9/site-packages/datasets/load.py", line 1795, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/home/lizhuoran/anaconda3/envs/sedd/lib/python3.9/site-packages/datasets/load.py", line 1671, in dataset_module_factory
    raise e1 from None
  File "/home/lizhuoran/anaconda3/envs/sedd/lib/python3.9/site-packages/datasets/load.py", line 1617, in dataset_module_factory
    can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
  File "/home/lizhuoran/anaconda3/envs/sedd/lib/python3.9/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
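One possible clue (an interpretation, not confirmed by the maintainers): byte `0x8b` at position 1 is the second byte of the gzip magic number (`0x1f 0x8b`), which would mean the mirror returned a gzip-compressed response that `datasets` then tried to read as UTF-8 text. A minimal sketch of that diagnosis, with a hypothetical `looks_gzipped` helper:

```python
import gzip

def looks_gzipped(raw: bytes) -> bool:
    """Return True if the payload starts with the gzip magic number 0x1f 0x8b."""
    return raw[:2] == b"\x1f\x8b"

# Simulate what the traceback suggests: a gzipped payload decoded as UTF-8.
payload = gzip.compress(b"DEFAULT_CONFIG_NAME = ...")
print(looks_gzipped(payload))  # → True: the file is gzip data, not text
print(gzip.decompress(payload).decode("utf-8"))  # decompressing first succeeds
```

If the cached script file under `~/.cache/huggingface` (or your `cache_dir`) starts with those two bytes, it was saved compressed and deleting it before retrying may help.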
My test code is:

```python
from datasets import load_dataset
import os

os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
name = "wikitext"
cache_dir = "./data"
dataset = load_dataset("wikitext", name="wikitext-103-raw-v1", cache_dir=cache_dir)
```
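One thing worth checking in the repro above (an assumption on my part, not something the maintainers have confirmed): `huggingface_hub` reads `HF_ENDPOINT` when it is first imported, so setting the variable *after* `from datasets import load_dataset` may have no effect. A sketch that sets the endpoint before any Hugging Face import:

```python
import os

# Set the mirror endpoint BEFORE importing `datasets`, since
# huggingface_hub may read HF_ENDPOINT at import time.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

try:
    from datasets import load_dataset  # imported only after the env var is set
except ImportError:
    load_dataset = None  # `datasets` not installed in this environment

if load_dataset is not None:
    # Same call as the repro above; left commented out to avoid a real download here.
    # dataset = load_dataset("wikitext", name="wikitext-103-raw-v1", cache_dir="./data")
    pass
```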
Try the new branch `convert-parquet-full` and see if that works for you.
@zhuoran-li Did it work for you?