Download issue (#64)
opened by Rakesha0105

Hi Rakesh,

I am trying to download this dataset using the separate validation and train scripts from https://github.com/sezginerr/example_download_script. Since the data is large, I am downloading it in parts. While downloading, a temporary file of the same size is being created on the C drive, and this is causing a storage issue. Could you please help us with this?
Is there an alternative method to download the data?

Hi @Rakesha0105 , do you have storage on drives other than C? It sounds like you only lack space on the C drive. In that case, you can move the .cache folder to a drive where you have space and create a symbolic link on the C drive pointing to the new .cache location (make sure the symbolic link is named .cache). This should fix your storage problem. The scripts download data only into the .cache folder (unless you set local_dir_use_symlinks=False), not into the local folder you are downloading to; they only create a symbolic link to the .cache folder inside the local folder.

Thanks for the reply.
Yes, I have 20 TB of storage on the D drive, where I am downloading to. Because of the storage issue, I deleted the cache folder on the C drive.
The download was then interrupted. I created a .cache folder inside the download folder on the D drive and triggered the download again, but it started from scratch, and the temporary files are again being written to the C drive.
Can I map only the D drive for downloading the temporary files?

Could you please help me with this?

Where can I see this ("if you do not set local_dir_use_symlinks=False") in the script? I didn't change anything in the script.

Setting local_dir_use_symlinks=False will not solve your problem.
You need to create a symbolic link to the D drive. If you do not, the .cache folder will simply be recreated. For example:

  1. Let's say you have a .cache folder at C:\.cache.
  2. You moved .cache to D:\, so it is now located at D:\.cache and there is no .cache folder on C:\ anymore.
  3. Now you need to create a symbolic link to D:\.cache on C:\. If you have a Windows-based system (which I assume is the case):
  • Go to the C:\ drive in cmd.
  • Run mklink /D .cache D:\.cache.
    This creates a symbolic link on your C:\ drive, so the code will not recreate .cache. Hope this helps.

I tried the following, but I am getting the error below even though I have admin rights:

C:\Users\User_id>mklink /D .cache D:\.cache
You do not have sufficient privilege to perform this operation.

Strange, I can create it:

[screenshot: the mklink command succeeding]

Are you sure you ran cmd as administrator?

Anyway, you should be able to change the cache location with the cache_dir argument of hf_hub_download. Just add cache_dir = "D://my_cache_dir" to the function's arguments.
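Something along these lines should work (a minimal sketch rather than the full download script; the token, subfolder and filename values below are placeholders):

from huggingface_hub import hf_hub_download

# Minimal sketch: point the Hub cache at the D drive via cache_dir so no
# temporary copies are written under the default cache location on C.
hf_hub_download(
    repo_id='ibrahimhamamci/CT-RATE',
    repo_type='dataset',
    token='HUGGINGFACE_API_KEY',                   # placeholder token
    subfolder='dataset/train/train_1/train_1_a',   # placeholder subfolder
    filename='train_1_a_1.nii.gz',                 # placeholder volume name
    cache_dir='D://my_cache_dir',
    local_dir='data_volumes',
)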

Hi @Rakesha0105 , @sezginer , I downloaded the dataset by adapting @sezginer 's example download script (https://github.com/sezginerr/example_download_script/blob/main/download_only_train_data.py); it may be of use for you as well.

import shutil

import pandas as pd

from huggingface_hub import hf_hub_download
from tqdm import tqdm


# Which split to download and how to batch / resume the run.
split = 'train'
batch_size = 100   # number of volumes to download before clearing the cache
start_at = 0       # set to the last tqdm index to resume an interrupted run

repo_id = 'ibrahimhamamci/CT-RATE'
directory_name = f'dataset/{split}/'
hf_token = 'HUGGINGFACE_API_KEY'   # your Hugging Face access token

# The labels CSV lists every volume name in the chosen split.
data = pd.read_csv(f'{split}_labels.csv')

for i in tqdm(range(start_at, len(data), batch_size)):

    data_batched = data[i:i+batch_size]

    for name in data_batched['VolumeName']:
        # Volume names look like '<split>_<n>_<letter>_<i>.nii.gz'; the repo stores
        # them under dataset/<split>/<split>_<n>/<split>_<n>_<letter>/.
        folder1 = name.split('_')[0]
        folder2 = name.split('_')[1]
        folder = folder1 + '_' + folder2
        folder3 = name.split('_')[2]
        subfolder = folder + '_' + folder3
        subfolder = directory_name + folder + '/' + subfolder

        # Download into ./data_volumes as a real file (no symlink) while keeping
        # the Hub cache next to the script so it can be deleted after each batch.
        hf_hub_download(repo_id=repo_id,
            repo_type='dataset',
            token=hf_token,
            subfolder=subfolder,
            filename=name,
            cache_dir='./',
            local_dir='data_volumes',
            local_dir_use_symlinks=False,
            resume_download=True,
            )

    # Delete this batch's cache copy so disk usage stays bounded.
    shutil.rmtree('./datasets--ibrahimhamamci--CT-RATE')

This script downloads the entire dataset in batches, controlled by the batch_size variable. Once a batch has been downloaded, it deletes the cache. If you need to interrupt the download, or it crashes, find the index of the last download (the tqdm index) and set the start_at variable accordingly. Delete the cache folder before resuming the download (edit: just realized I could have wrapped it in a try/except block 🙃).
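For what it's worth, the try/except variant mentioned above could look roughly like this (a sketch only, not code that was actually run; it retries a single download a few times on transient failures instead of requiring a manual restart):

import time

from huggingface_hub import hf_hub_download


def download_with_retry(retries=5, wait_s=30, **kwargs):
    # Retry one hf_hub_download call a few times before giving up, so a flaky
    # connection does not force restarting the whole batch loop by hand.
    for attempt in range(retries):
        try:
            return hf_hub_download(**kwargs)
        except Exception as err:
            print(f'attempt {attempt + 1}/{retries} failed: {err}')
            time.sleep(wait_s)
    raise RuntimeError(f'download still failing after {retries} attempts')

It could then be called with the same arguments as the hf_hub_download call in the loop above.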

@sezginer , please let me know if you would like this script to be a PR somewhere 🙂

Hi @farrell236 , I can create a new folder in the Hugging Face repository and push my scripts to it, and maybe you can send a pull request for your script as well. I can also add some information about downloading to the dataset card; there is too much confusion about this at the moment.

@sezginer Thank you so much.
I did run cmd as Administrator, but it still failed. I checked with my local IT team and it is working now. I started the download again after mapping .cache to the D drive. Partway through, the download stopped, and when I retriggered it, it failed to resume from where it had stopped.
I am getting the error below:
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.

even though I have a good internet connection.

@farrell236 Thank you so much.

This script is working fine; we can resume downloading from where it stopped, or work out where to restart by checking the count of already-downloaded files.

I am now downloading the data with this script and it is working.
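One rough way to work out start_at from the files already on disk (a sketch only, assuming the data_volumes folder and batch size from the script above):

import os

# Count the volumes already downloaded and round down to a batch boundary so
# the batch that was interrupted is checked again.
batch_size = 100   # must match the value used in the download script
downloaded = sum(len(files) for _, _, files in os.walk('data_volumes'))
start_at = (downloaded // batch_size) * batch_size
print(f'{downloaded} files on disk -> set start_at = {start_at}')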

@farrell236 @sezginer @Rakesha0105
Hi, I wonder how long it will take to download this dataset. Thank you!

Hi @zhengc ,
While I have not downloaded the entire dataset from Hugging Face, the download speed really depends on your connection. In our cluster, it is around 60 MB/s for downloads from HF. The dataset is approximately 12 TB, so it should take around 2-3 days in this case.
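(As a rough back-of-the-envelope check of that estimate, assuming a sustained 60 MB/s:)

# ~12 TB at ~60 MB/s, ignoring per-file overhead
total_mb = 12_000_000        # 12 TB expressed in MB
speed_mb_per_s = 60
days = total_mb / speed_mb_per_s / 3600 / 24
print(round(days, 1))        # ~2.3 days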

Hi @farrell236 ,

I realized that I forgot to do this: "I can create a new folder in the Hugging Face repository, push my scripts to that folder, and maybe you can send a pull request for your script as well." 🙃. It would be great if you could push your earlier script to the download_scripts folder in this repo (if you like, of course 🙂).

@sezginer opened PR #68 🙂

Hi @farrell236 , we have merged your PR, but I believe that is the fix_metadata script. I remember you PR'ed it somewhere as well but I could not find it. Anyway, it can stay there, but could you also send the download_only_train_data.py script to the same folder? Then I'll push the other scripts and edit the dataset card.

@sezginer sorry, I forgot which one it was 😅 That script wasn't PR'ed; it was just posted above in this discussion thread.

Here it is in PR #69

Thank you very much @farrell236 !

Hello, @zhengc
I would like to ask if you have already downloaded the dataset, and if you have done so, could you share a download method?
Thanks.

@ibrahimhamamci , @sezginer could you please add just a few words to the dataset card about how this dataset should be downloaded properly? This would help a lot!

Some things that are unclear to me:

Hi @mishgon ,

1- I don't recommend using Hugging Face datasets, as nii.gz images are currently unsupported. You can only use it for the CSV files (labels, metadata, etc.).
2- The script you mentioned is for batch downloading the entire dataset. You can choose to download the train or validation split, and it downloads 100 images at a time, iteratively, for the selected split. Again, I don't recommend downloading the entire dataset using git clone or an HF dataset clone, as that often fails. The script is much safer.
3- Please see these xlsx files: https://huggingface.co/datasets/ibrahimhamamci/CT-RATE/tree/main/dataset/multi_abnormality_labels
4- The script adds nii.gz headers from the metadata CSV files, which may or may not be relevant for your task (a rough sketch of the idea follows below). Nii.gz files have headers similar to DICOM tags, containing some of the spacing parameters. If those headers are not set correctly, you might have trouble visualizing the nii.gz images or using certain pretrained models, which could produce strange results. However, if you only plan to use the pixel values from the volume arrays in the nii.gz files, together with the metadata from our CSV file when loading the data (as we did in CT-CLIP), then you don't need to worry about the headers.
That being said, I will soon re-upload the dataset with the headers fixed, along with other corrections and changes, and with int16 dtype (which will make the dataset considerably smaller).
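For item 4, fixing a header from the metadata generally looks something like the sketch below. This is only a conceptual illustration, not the actual fix_metadata script from the repository, and the metadata file name, the column names ('XSpacing', 'YSpacing', 'ZSpacing') and the volume name are hypothetical placeholders:

import nibabel as nib
import numpy as np
import pandas as pd

# Conceptual sketch: write the voxel spacing from a metadata CSV into the
# nii.gz header via the affine. Orientation and origin handling are ignored.
meta = pd.read_csv('train_metadata.csv').set_index('VolumeName')   # hypothetical file / columns

name = 'train_1_a_1.nii.gz'                                        # hypothetical volume name
img = nib.load(f'data_volumes/{name}')

x_sp, y_sp, z_sp = (float(meta.loc[name, c]) for c in ('XSpacing', 'YSpacing', 'ZSpacing'))

# A diagonal affine whose scales are the voxel sizes; nibabel derives the
# header zooms (pixdim) from this affine when the image is saved.
affine = np.diag([x_sp, y_sp, z_sp, 1.0])
fixed = nib.Nifti1Image(np.asanyarray(img.dataobj), affine, header=img.header)
nib.save(fixed, f'data_volumes/{name}')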

Please let me know more details about what you're trying to achieve, and I can better guide you.

Best,
Sezgin
