Access to individual PDFs
Would it be possible to provide access to individual PDFs named after their IDs, rather than batch access, the same way data is stored in gs://arxiv-dataset?
This would vastly help with data curation and filtering for data mining. Even a requester-pays bucket would work fine.
bioRxiv itself provides MECA files, which are not traceable back to the original PDFs.
Also, will this data be updated periodically?
@alimoezzi I'm not sure whether I understand your question correctly. The PDF files are named after their DOIs, and we provide the metadata for them: https://huggingface.co/datasets/laion/biorxiv_metadata
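For reference, here is a minimal sketch of pulling that metadata down and filtering it by DOI. The file names, formats, and the `doi` column name are assumptions about the repo layout (and the example DOI is hypothetical), so inspect the repo before relying on them:

```python
from huggingface_hub import hf_hub_download, list_repo_files
import pandas as pd

REPO = "laion/biorxiv_metadata"  # metadata repo linked above

# List the repo and pull anything that looks like a metadata table.
files = list_repo_files(REPO, repo_type="dataset")
tables = [f for f in files if f.endswith((".csv", ".parquet"))]

frames = []
for name in tables:
    local = hf_hub_download(REPO, name, repo_type="dataset")
    frames.append(pd.read_parquet(local) if name.endswith(".parquet") else pd.read_csv(local))
meta = pd.concat(frames, ignore_index=True)

# Filter to a hand-picked set of DOIs (the column name "doi" is an assumption).
wanted = {"10.1101/2020.01.01.000000"}
print(meta[meta["doi"].isin(wanted)])
```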
We will update the rXiv dumps every 12 months. Updates are linear for most dumps, so a monthly update wouldn't make sense.
I think what he means is that this is unsuitable for RAG, because he cannot use the metadata to retrieve an individual PDF from Hugging Face.
@sleeping4cat
I would like to be able to access individual papers by DOI. You uploaded batches like bioAAA.zip, bioAAB.zip, ..., each more than 40 GB to download, so to create subsets of the data I need to download all the batches and look through them. Instead of batches, you could upload files by UUID and provide a CSV mapping them to their DOIs. You could also create a separate dataset or bucket for this purpose.
@endomorphosis
He's also right: for RAG, or for my data-mining use case, I would need to gather subsets of PDFs on certain topics I hand-pick. bioRxiv (and medRxiv) itself provides batch access via its bucket s3://biorxiv-src-monthly, explained at https://www.biorxiv.org/tdm. It would be very helpful if you could provide access to individual papers, like the effort behind gs://arxiv-dataset here: https://www.kaggle.com/datasets/Cornell-University/arxiv.
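For anyone else reading along, a rough sketch of listing and fetching from that requester-pays bucket with boto3 (the prefix below is an assumption, so check the TDM page for the actual layout, and note that you pay the transfer costs):

```python
import boto3

BUCKET = "biorxiv-src-monthly"   # bioRxiv TDM bucket (requester pays)
PREFIX = "Current_Content/"      # assumed prefix -- verify via https://www.biorxiv.org/tdm

s3 = boto3.client("s3")

# Listing a requester-pays bucket requires RequestPayer="requester".
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, RequestPayer="requester")
for obj in resp.get("Contents", [])[:10]:
    print(obj["Key"], obj["Size"])

# Download a single MECA package (again as requester pays).
# key = resp["Contents"][0]["Key"]
# s3.download_file(BUCKET, key, "paper.meca", ExtraArgs={"RequestPayer": "requester"})
```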
@endomorphosis @alimoezzi Unfortunately, this is not one of the intended purposes of this dataset. It was made to reflect the needs of our open-sci efforts and of projects where language models need to be trained from scratch or fine-tuned. Another lab we are collaborating with is parsing all the rXiv datasets we created and uploading them to their HF org. We will link that repository in the coming weeks once that team has finished working.
> You uploaded batches like bioAAA.zip, bioAAB.zip, ..., each more than 40 GB to download, so to create subsets of the data I need to download all the batches and look through them.
It was a necessary step so that we could upload to HF easily. We didn't want to store all 200K+ files in the root path, nor in a single folder inside the root dir of the repo, since that is inefficient and a poor way to upload such a vast quantity of files.
> He's also right: for RAG, or for my data-mining use case, I would need to gather subsets of PDFs on certain topics I hand-pick. bioRxiv (and medRxiv) itself provides batch access via its bucket s3://biorxiv-src-monthly, explained at https://www.biorxiv.org/tdm. It would be very helpful if you could provide access to individual papers, like the effort behind gs://arxiv-dataset here: https://www.kaggle.com/datasets/Cornell-University/arxiv.
Unfortunately, it's not in our plan at the moment. Our goal is to share the data in the simplest form possible. If you have a particular use case, you'll have to handle that on your side.
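For example, one way to handle it on the consumer side is to download a single batch and keep only the hand-picked DOIs. This is just a sketch: the member-name convention (DOI with "/" replaced, ".pdf" suffix) and the example DOI are assumptions, so print a few entries of `namelist()` first:

```python
import zipfile
from pathlib import Path

wanted = {"10.1101/2020.01.01.000000"}            # hypothetical DOIs to keep
patterns = {d.replace("/", "_") for d in wanted}  # assumed filename-safe encoding of a DOI
out = Path("subset")
out.mkdir(exist_ok=True)

# Scan one downloaded batch archive and extract only matching members.
with zipfile.ZipFile("bioAAA.zip") as zf:
    for name in zf.namelist():
        if any(p in name for p in patterns):
            zf.extract(name, path=out)
            print("kept", name)
```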