Parquet files instead of jsonl.gz
Hey Sebastien!
Thank you so much for providing this dataset, keeping it updated and also adding the Python script to update it.
I was wondering about the use of jsonl.gz files: is there a particular motivation behind this choice, e.g. serialization in downstream tasks, plans to add more nodes in the future, or changes that have already been made to the nodes?
If you don't expect any changes in the near future, switching to Parquet files might be a good option (see the quick conversion sketch after this list):
- it's faster to read and write, and supported by pandas, DuckDB and even JavaScript-only frontend applications
- compression should be on par with gzipped JSON
- great compatibility with Hugging Face Datasets (training splits etc. become easier)
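The conversion itself would be a one-off step. Here is a minimal sketch with pandas (the file names are just placeholders, and it assumes pyarrow or fastparquet is installed):

```python
import pandas as pd

# Read the gzipped JSONL file; pandas infers gzip compression from the extension
df = pd.read_json("data.jsonl.gz", lines=True)

# Write it out as Parquet (requires pyarrow or fastparquet)
df.to_parquet("data.parquet", index=False)

# Quick sanity check: same number of rows after the round trip
assert len(pd.read_parquet("data.parquet")) == len(df)
```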
Thanks again for making the effort to publish it here.
Dominik
Hi Dominik,
Thanks for your message.
JSONL is easy to generate and manipulate with bash tools like jq, sed, grep, awk, etc.
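The line-oriented format also makes ad-hoc filtering trivial from plain Python; a small sketch, where the file name and the "label" key are only placeholders:

```python
import gzip
import json

# Stream the gzipped JSONL file one record at a time, jq-style
with gzip.open("data.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Keep only records that carry a given key ("label" is a placeholder)
        if "label" in record:
            print(record["label"])
```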
By default, Hugging Face converts it to Parquet, which, I agree, is much better suited to efficient data processing tasks. So when you're using the `datasets` Python package, it's already optimized under the hood.
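From the user side it looks roughly like this (the repository id below is a placeholder, not the actual dataset name):

```python
from datasets import load_dataset

# The library handles download, conversion and local caching transparently
# ("user/eurovoc-dataset" is a placeholder repo id, not the real one)
ds = load_dataset("user/eurovoc-dataset", split="train")

# Columnar, memory-mapped access from here on
print(ds.features)
print(ds[0])
```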
If I get more resources, I would like to update it soon and retrain the EuroVoc Classifier 🤞
That's awesome - I wasn't aware that HF can convert jsonl.gz files to Parquet automatically!
Thanks for your reply, closing this discussion.