Failed to download dataset

#53
by LiteFland

I attempted to download the dataset by following the Hugging Face Datasets tutorials using this code:
from datasets import load_dataset_builder
ds_builder = load_dataset_builder("ibrahimhamamci/CT-RATE")
ds_builder.download_and_prepare()

but I encountered the following error:
Traceback (most recent call last):
File "/data/LiteFland/mamba/envs/hf/lib/python3.8/site-packages/datasets/builder.py", line 1989, in _prepare_split_single
writer.write_table(table)
File "/data/LiteFland/mamba/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 584, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/data/LiteFland/mamba/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 2240, in table_cast
return cast_table_to_schema(table, schema)
File "/data/LiteFland/mamba/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 2194, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
VolumeName: string
Medical material: int64
Arterial wall calcification: int64
Cardiomegaly: int64
Pericardial effusion: int64
Coronary artery wall calcification: int64
Hiatal hernia: int64
Lymphadenopathy: int64
Emphysema: int64
Atelectasis: int64
Lung nodule: int64
Lung opacity: int64
Pulmonary fibrotic sequela: int64
Pleural effusion: int64
Mosaic attenuation pattern: int64
Peribronchial thickening: int64
Consolidation: int64
Bronchiectasis: int64
Interlobular septal thickening: int64
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 2787
to
{'VolumeName': Value(dtype='string', id=None), 'Manufacturer': Value(dtype='string', id=None), 'SeriesDescription': Value(dtype='string', id=None), 'ManufacturerModelName': Value(dtype='string', id=None), 'PatientSex': Value(dtype='string', id=None), 'PatientAge': Value(dtype='string', id=None), 'ReconstructionDiameter': Value(dtype='float64', id=None), 'DistanceSourceToDetector': Value(dtype='float64', id=None), 'DistanceSourceToPatient': Value(dtype='float64', id=None), 'GantryDetectorTilt': Value(dtype='int64', id=None), 'TableHeight': Value(dtype='float64', id=None), 'RotationDirection': Value(dtype='string', id=None), 'ExposureTime': Value(dtype='float64', id=None), 'XRayTubeCurrent': Value(dtype='int64', id=None), 'Exposure': Value(dtype='int64', id=None), 'FilterType': Value(dtype='string', id=None), 'GeneratorPower': Value(dtype='float64', id=None), 'FocalSpots': Value(dtype='string', id=None), 'ConvolutionKernel': Value(dtype='string', id=None), 'PatientPosition': Value(dtype='string', id=None), 'RevolutionTime': Value(dtype='float64', id=None), 'SingleCollimationWidth': Value(dtype='float64', id=None), 'TotalCollimationWidth': Value(dtype='float64', id=None), 'TableSpeed': Value(dtype='float64', id=None), 'TableFeedPerRotation': Value(dtype='float64', id=None), 'SpiralPitchFactor': Value(dtype='float64', id=None), 'DataCollectionCenterPatient': Value(dtype='string', id=None), 'ReconstructionTargetCenterPatient': Value(dtype='string', id=None), 'ExposureModulationType': Value(dtype='string', id=None), 'CTDIvol': Value(dtype='float64', id=None), 'ImagePositionPatient': Value(dtype='string', id=None), 'ImageOrientationPatient': Value(dtype='string', id=None), 'SliceLocation': Value(dtype='float64', id=None), 'SamplesPerPixel': Value(dtype='int64', id=None), 'PhotometricInterpretation': Value(dtype='string', id=None), 'Rows': Value(dtype='int64', id=None), 'Columns': Value(dtype='int64', id=None), 'XYSpacing': Value(dtype='string', id=None), 'RescaleIntercept': Value(dtype='int64', id=None), 'RescaleSlope': Value(dtype='int64', id=None), 'RescaleType': Value(dtype='string', id=None), 'NumberofSlices': Value(dtype='int64', id=None), 'ZSpacing': Value(dtype='float64', id=None), 'StudyDate': Value(dtype='int64', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "", line 1, in
File "/data/LiteFland/mamba/envs/hf/lib/python3.8/site-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/data/LiteFland/mamba/envs/hf/lib/python3.8/site-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/data/LiteFland/mamba/envs/hf/lib/python3.8/site-packages/datasets/builder.py", line 1860, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/data/LiteFland/mamba/envs/hf/lib/python3.8/site-packages/datasets/builder.py", line 1991, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 18 new columns (Cardiomegaly, Hiatal hernia, Pericardial effusion, Lymphadenopathy, Lung nodule, Emphysema, Interlobular septal thickening, Consolidation, Atelectasis, Lung opacity, Pulmonary fibrotic sequela, Bronchiectasis, Pleural effusion, Mosaic attenuation pattern, Arterial wall calcification, Peribronchial thickening, Medical material, Coronary artery wall calcification) and 43 missing columns (RotationDirection, GeneratorPower, XYSpacing, SliceLocation, Rows, FocalSpots, Exposure, SingleCollimationWidth, RescaleSlope, SamplesPerPixel, SeriesDescription, ZSpacing, RescaleType, TableFeedPerRotation, PatientPosition, RescaleIntercept, GantryDetectorTilt, FilterType, ExposureTime, CTDIvol, ExposureModulationType, ConvolutionKernel, PatientAge, ReconstructionTargetCenterPatient, ImageOrientationPatient, ManufacturerModelName, StudyDate, DistanceSourceToPatient, DistanceSourceToDetector, PhotometricInterpretation, PatientSex, NumberofSlices, XRayTubeCurrent, TableHeight, Columns, ImagePositionPatient, SpiralPitchFactor, RevolutionTime, TotalCollimationWidth, DataCollectionCenterPatient, ReconstructionDiameter, Manufacturer, TableSpeed).

This happened while the csv dataset builder was generating data using

hf://datasets/ibrahimhamamci/CT-RATE/dataset/multi_abnormality_labels/train_predicted_labels.csv (at revision 4d92f6d4f805e36e2891359c04302705c314fe43)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)

It seems to be due to the CSV files in the repository having different columns. Any suggestions would be greatly appreciated.
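(One possible workaround, as a sketch rather than anything from this thread: read a single CSV directly with pandas so that no cross-file schema cast is attempted. This assumes pandas plus a recent huggingface_hub, whose fsspec integration resolves hf:// paths.)

import pandas as pd

# Hypothetical workaround: load one CSV straight from the Hub, bypassing the
# datasets builder and its cross-file schema cast. Requires pandas and a
# huggingface_hub version that registers the hf:// fsspec protocol.
labels = pd.read_csv(
    "hf://datasets/ibrahimhamamci/CT-RATE/dataset/multi_abnormality_labels/train_predicted_labels.csv"
)
print(labels.columns.tolist())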

Hi,

Thank you very much for your interest in our dataset and for letting us know about this. The different CSV files are for labels and metadata, so their columns differ. I can separate them into different configurations.

@ibrahimhamamci Hi, is there any reason I always get stuck at this step forever when I git clone your full repository directly?

[screenshot of the stalled git clone]

I used the command "git clone https://huggingface.co/datasets/ibrahimhamamci/CT-RATE".

Do you have any suggestions for a more convenient way to download the data through Hugging Face?

Thanks!

Hello @LiteFland and @wander666 ,

I will be assisting you with this. We will look into the load_dataset_builder issue.

For the git clone problem: files larger than 10MB are stored in Git LFS, so you first need to install Git LFS by running:

git lfs install

Then, you need to execute the following command to download the data:

git lfs clone https://huggingface.co/datasets/ibrahimhamamci/CT-RATE

This command also downloads LFS files. Additionally, you might need to run:

huggingface-cli lfs-enable-largefiles .

since there are files larger than 5GB. Please refer to: https://huggingface.co/docs/hub/repositories-getting-started

However, I recommend using the huggingface_hub library to download the dataset. I have created a GitHub repository with example scripts. Simply edit snapshot_run.py with your access token and run it; it will download the entire dataset. I have also added two other scripts for downloading the validation and train volumes separately (please edit the token in those as well if you wish to use them). These scripts download only the volumes, so if you use them, you should download the CSV files (and models) separately. Here are the scripts: https://github.com/sezginerr/example_download_script
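For reference, a full-dataset download with snapshot_download might look roughly like the sketch below; this is not the actual snapshot_run.py from the linked repository, and the token value is a placeholder.

from huggingface_hub import snapshot_download

# Sketch of downloading the whole dataset repository; replace the placeholder
# token with your own Hugging Face access token.
snapshot_download(
    repo_id="ibrahimhamamci/CT-RATE",
    repo_type="dataset",
    local_dir="CT-RATE",
    token="hf_xxx",  # placeholder
)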

I hope this helps!

Hi @ibrahimhamamci , thank you for releasing this dataset! How big is the dataset in total? There's no progress bar on git lfs clone; it has downloaded over 12TB so far and is still going. Thanks!

edit: the download crashed at 20.5TB after running out of disk space

Hi @farrell236 ,
This is interesting. We previously used the du -sh command and it showed 12TB of disk usage (as reported in the GitHub issue). We have now written a Python script to check whether that was a quirk of du -sh, but it still shows 11921.90 GB. It is also interesting that you see no progress bar on git lfs clone, as I get one in my case: "Updating files: 0% (10/50201)". Could you try the scripts we provided to download the dataset? I believe using them would be safer against strange Git bugs.
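(The size-check script itself was not posted; a minimal sketch of such a check, assuming a local copy under CT-RATE/dataset, could be:)

import os

# Sum file sizes under the dataset folder, skipping symlinks so files that
# merely point into the download cache are not counted twice.
total = 0
for dirpath, _dirs, files in os.walk("CT-RATE/dataset"):
    for name in files:
        path = os.path.join(dirpath, name)
        if not os.path.islink(path):
            total += os.path.getsize(path)
print(f"{total / 1024**3:.2f} GB")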

Thanks for your reply! I believe git keeps a tracked copy in the hidden .git directory. Did you run du -sh on ./CT-RATE or on ./CT-RATE/dataset? If ./CT-RATE/dataset is 12TB, then there is likely another 12TB in .git (which means I was 4TB short when it crashed 😭). I'll check out the download scripts 🙂

The 12TB is the models plus the dataset volumes, but not the CSV files. But yes, Git probably keeps copies under .git to verify the downloaded files later.

Hi @farrell236 , @wander666 , and @LiteFland , you might need to set local_dir_use_symlinks=False in hf_hub_download and snapshot_download if you don't want them to create symbolic links into the cache (everything will be copied to the local dir instead). Note that this requires more disk space while downloading (roughly 2x). If you have limited space and don't want symbolic links, I would recommend downloading the dataset part by part and removing the cache (under ".cache/huggingface") after each part finishes, before moving on to the next. The symbolic links work fine with our code, though. See https://huggingface.co/docs/huggingface_hub/package_reference/file_download for more information.
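A part-by-part download along those lines might look like the following sketch; the allow_patterns value is an example folder, not a prescribed split, and the token is a placeholder.

from huggingface_hub import snapshot_download

# Fetch one subfolder per call, copying real files instead of cache symlinks;
# clear .cache/huggingface between calls if disk space is tight.
snapshot_download(
    repo_id="ibrahimhamamci/CT-RATE",
    repo_type="dataset",
    local_dir="CT-RATE",
    local_dir_use_symlinks=False,
    allow_patterns="dataset/multi_abnormality_labels/*",  # example part
    token="hf_xxx",  # placeholder
)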

Dear @LiteFland , I have split the CSV files into different configurations. The dataset builder should now work with a specific configuration (labels, reports, or metadata).
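Loading a single configuration should then look roughly like this (assuming the configuration names follow the labels/reports/metadata split described above):

from datasets import load_dataset

# Load only the labels configuration; the config name is assumed from the
# message above.
labels = load_dataset("ibrahimhamamci/CT-RATE", "labels")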
