Error "Nested data conversions not implemented for chunked array outputs"
Hi there,
Thanks a lot for releasing this dataset.
I was trying to load this dataset as follows, but got this error: ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
from datasets import load_dataset
dataset = load_dataset('guangyil/laion-coco-aesthetic', split='train')
Can you let me know what I did wrong here? Thanks a lot for your help.
Cheers
Is it because there is only one shard for this dataset and this one shard is too large?
I didn't split it into train/val/test, so you can try the command below directly. Alternatively, you can git clone the repo and load the parquet file with pandas (rough sketch below).
dataset = load_dataset("guangyil/laion-coco-aesthetic")
Hi,
Thanks for the reply.
I tried to load it like this:
dataset = load_dataset("guangyil/laion-coco-aesthetic")
but I still get the same error.
When I tried loading it with pandas like this:
pd.read_parquet('./laion-coco_v3_filter.parquet')
I get the same error as well.
But this works:
load_dataset('guangyil/laion-coco-aesthetic', revision="refs/convert/parquet")
In that branch there are multiple smaller parquet files.
So maybe this does indeed have to do with the single large shard file?
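In case it helps anyone else, here is a rough, untested sketch of a workaround I'm considering: stream the single large parquet file in small batches with pyarrow and rewrite it as several smaller shards (file names and shard sizes are just placeholders):
import pyarrow as pa
import pyarrow.parquet as pq

src = "laion-coco_v3_filter.parquet"  # the single large shard
pf = pq.ParquetFile(src)

writer = None
shard_idx = 0
rows_in_shard = 0
rows_per_shard = 1_000_000  # placeholder shard size

# Read the file in small record batches and write them back out
# as multiple smaller parquet files.
for batch in pf.iter_batches(batch_size=100_000):
    if writer is None:
        writer = pq.ParquetWriter(f"shard_{shard_idx:05d}.parquet", batch.schema)
    writer.write_table(pa.Table.from_batches([batch]))
    rows_in_shard += batch.num_rows
    if rows_in_shard >= rows_per_shard:
        writer.close()
        writer = None
        shard_idx += 1
        rows_in_shard = 0

if writer is not None:
    writer.close()
The resulting shards could then be loaded locally, e.g. load_dataset("parquet", data_files="shard_*.parquet"). For now, though, the refs/convert/parquet revision works fine for me.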
Cheers