Sample
Hey,
Really interesting dataset, is there any possibility to get a sample of it? With the digitized images? It's complicated to conceptualize without downloading everything :D
Cheers,
Arnault
If you want to avoid downloading as much as possible, you could load just one of the underlying parquet files from the Hub (i.e. one of the files here)
from datasets import load_dataset
# load a single parquet shard directly from the Hub
dataset = load_dataset('parquet', data_files='https://huggingface.co/datasets/biglam/berlin_state_library_ocr/resolve/main/data/train-00000-of-00053-54c67aaf0fee067a.parquet')
# inspect the first example
dataset['train'][0]
This will only download ~183MB of data.
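If you would rather not download any files up front, a minimal sketch using the library's streaming mode should also work (assuming you only want to peek at a few rows):
from datasets import load_dataset
# stream rows lazily instead of downloading the parquet files
dataset = load_dataset('biglam/berlin_state_library_ocr', streaming=True)
# fetch just the first example
next(iter(dataset['train']))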
With the digitized images?
The dataset is based on this dataset, which only includes the OCR text + metadata. Getting the images via an API might be possible, but I'm not certain about this. cc'ing @cneud, who might know about a way of accessing the images.
Oh, that's great for the snapshot, I did not see the sampled parquet files, thank you very much! I'm looking forward to @cneud's response to find out whether we can easily get the images.
Thanks for the ping @davanstrien !
@agomberto
Nice to see interest in this dataset. If you would also like to obtain the images, you can get them via the IIIF API v2.1 as described here: https://lab.sbb.berlin/dc/?lang=en (scroll down a bit to the IIIF section). You only need the PPN identifier, as in e.g. https://content.staatsbibliothek-berlin.de/dc/PPN867445300-00000010/full/full/0/default.jpg for the full-resolution image and/or https://content.staatsbibliothek-berlin.de/dc/PPN867445300-0010.ocr.xml for the corresponding OCR file of page 10 of the document with PPN=867445300. Last but not least, to see all pages of a document, you can go to the digital collections and again use the PPN, as in https://digital.staatsbibliothek-berlin.de/werkansicht?PPN=PPN867445300.
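In case it helps, here is a minimal Python sketch of fetching a page image and its OCR file via those URL patterns. The zero-padding widths (8 digits for the image endpoint, 4 for the OCR endpoint) are inferred from the example URLs above, so treat them as assumptions:
import requests

IIIF_BASE = 'https://content.staatsbibliothek-berlin.de/dc'

def page_image_url(ppn, page):
    # full-resolution image; page number zero-padded to 8 digits (assumption from the example URL)
    return f'{IIIF_BASE}/PPN{ppn}-{page:08d}/full/full/0/default.jpg'

def page_ocr_url(ppn, page):
    # corresponding OCR file; page number zero-padded to 4 digits (assumption from the example URL)
    return f'{IIIF_BASE}/PPN{ppn}-{page:04d}.ocr.xml'

# e.g. page 10 of the document with PPN=867445300
response = requests.get(page_image_url('867445300', 10))
response.raise_for_status()
with open('page_10.jpg', 'wb') as f:
    f.write(response.content)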
Thank you very much @davanstrien and @cneud, really great insights from your comments :)