Change the Download Command to account for duplicate filenames
#9 opened by jurassicpark
Downloading directly via wget causes problems because the Common Crawl datasets have duplicate filenames across the various years.
This creates issues when pausing and resuming the download process, and also when verifying checksums.
This PR changes the download command to download using the subdirectory structure present in the url.
e.g. https://data.together.xyz/redpajama-data-1T/v1.0.0/arxiv/arxiv_023827cd-7ee8-42e6-aa7b-661731f4c70f.jsonl
downloads to
arxiv/arxiv_023827cd-7ee8-42e6-aa7b-661731f4c70f.jsonl
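The mapping from URL to local path can be sketched as follows. This is a minimal illustration, not the PR's actual code; the `BASE_PREFIX` constant is an assumption inferred from the example URL above.

```python
from urllib.parse import urlparse
import posixpath

# Assumed dataset prefix, inferred from the example URL in this PR.
BASE_PREFIX = "/redpajama-data-1T/v1.0.0/"

def local_path(url: str) -> str:
    """Derive a local path that keeps the subdirectory structure
    after the version prefix (e.g. "arxiv/arxiv_....jsonl"), so that
    duplicate basenames across subsets no longer collide."""
    path = urlparse(url).path
    if path.startswith(BASE_PREFIX):
        return path[len(BASE_PREFIX):]
    # Fallback: flat basename, as before this PR.
    return posixpath.basename(path)

url = ("https://data.together.xyz/redpajama-data-1T/v1.0.0/"
       "arxiv/arxiv_023827cd-7ee8-42e6-aa7b-661731f4c70f.jsonl")
print(local_path(url))
# arxiv/arxiv_023827cd-7ee8-42e6-aa7b-661731f4c70f.jsonl
```

With wget itself, roughly the same effect can be had with `wget -x -nH --cut-dirs=2 <url>`, which recreates the URL's directory structure locally while stripping the hostname and the first two path components; whether the PR uses these flags or a download script is not stated here.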
danfu09 changed pull request status to merged