Download retry and partial content errors

#29
by jwkirchenbauer - opened

Hi!

I am using the URL list to download the dataset in parallel:

wget 'https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt' -O urls.txt

while read -r line; do
    dload_loc=${line#"https://data.together.xyz/redpajama-data-1T/v1.0.0/"}
    dir_name=$(dirname "$dload_loc")
    mkdir -p "$dir_name"
    # Each line becomes the arguments for one wget call: <flags> <url> -O <local path>.
    echo "--tries=5 $line -O $dload_loc" >> wget_commands.txt
done < urls.txt

xargs -L 1 -P 128 bash -c 'wget $0 $1 $2 $3' < wget_commands.txt

I am getting a lot of errors like:

2023-12-06 14:11:32 (1.06 MB/s) - Read error at byte 1288925184/2022347763 (Connection timed out). Retrying.

and

HTTP request sent, awaiting response... 206 Partial Content
Length: 1813022210 (1.7G), 472159746 (450M) remaining

While the downloads do eventually continue (the 206 Partial Content responses show wget resuming interrupted transfers via range requests), I am worried that some of the gzipped JSON files will end up truncated or missing, and that I will lose a lot of the data in the process.
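
To get a handle on how much is actually missing, I put together a rough completeness check. This is just a sketch, and it assumes the server returns an accurate Content-Length header for every file:

# Compare each local file's size against the server-reported Content-Length.
while read -r url; do
    path=${url#"https://data.together.xyz/redpajama-data-1T/v1.0.0/"}
    # HEAD request only; strip CRs and take the last Content-Length (after any redirects).
    remote_size=$(curl -sIL "$url" | tr -d '\r' | awk 'tolower($1) == "content-length:" {print $2}' | tail -n 1)
    local_size=$(stat -c %s "$path" 2>/dev/null || echo 0)   # GNU stat; use stat -f %z on macOS
    if [ "$local_size" != "$remote_size" ]; then
        echo "INCOMPLETE: $path ($local_size of $remote_size bytes)"
    fi
done < urls.txt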

Is there a recommended way to do the download that is more robust, or that avoids these issues?
(I initially tried just calling load_dataset, but even with a large num_proc argument it was still much slower than the plain wget approach.)
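
For now, the workaround I'm experimenting with is a per-file resume loop instead of a single wget invocation. This is only a sketch (download_one.sh is just an illustrative name), and it assumes the server keeps honoring Range requests, which the 206 responses suggest it does:

# download_one.sh: resume a single URL until wget exits cleanly.
url=$1
path=${url#"https://data.together.xyz/redpajama-data-1T/v1.0.0/"}
mkdir -p "$(dirname "$path")"
# --continue resumes a partial file instead of restarting from byte 0;
# --timeout bounds each stalled read so the outer loop can retry it.
until wget --continue --timeout=60 --tries=3 -P "$(dirname "$path")" "$url"; do
    echo "retrying $url" >&2
    sleep 5
done

It can then be driven with the same xargs pattern, e.g. xargs -L 1 -P 128 bash download_one.sh < urls.txt.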

I'm having the same issue. My workaround was adding error handling when reading the data, but this results in data loss (https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/2)
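
In case it helps, a cheap way to see how many files were actually truncated is to test the gzip streams directly. This is a sketch, assuming the affected files are the gzip-compressed ones (adjust the pattern to your layout):

# Flag files whose gzip stream is truncated or corrupt.
find . -name '*.gz' -print0 | while IFS= read -r -d '' f; do
    if ! gzip -t "$f" 2>/dev/null; then
        echo "CORRUPT: $f"
    fi
done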
