# NER Fine-Tuning

We use Flair for fine-tuning NER models on datasets from the HIPE-2022 Shared Task.

All models are fine-tuned on A10 (24GB) and A100 (40GB) instances from Lambda Cloud using Flair:

```bash
$ git clone https://github.com/flairNLP/flair.git
$ cd flair && git checkout 419f13a05d6b36b2a42dd73a551dc3ba679f820c
$ pip3 install -e .
$ cd ..
```

Clone this repo for fine-tuning NER models:

```bash
$ git clone https://github.com/stefan-it/hmTEAMS.git
$ cd hmTEAMS/bench
```

Authenticate via the Hugging Face CLI (needed because hmTEAMS is currently only available after approval):

```bash
# Use access token from https://huggingface.co/settings/tokens
$ huggingface-cli login
```
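In non-interactive environments (e.g. cloud instances), you can also log in programmatically via `huggingface_hub`. A minimal sketch, assuming the access token has been exported as `HF_TOKEN`:

```python
import os

from huggingface_hub import login

# Log in with the same access token, skipping the interactive prompt.
# Assumes the token was exported as HF_TOKEN beforehand.
login(token=os.environ["HF_TOKEN"])
```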

We use a config-driven hyper-parameter search. The script flair-fine-tuner.py can be used to fine-tune NER models from our Model Zoo.
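For illustration, a stripped-down version of such a fine-tuning run could look like the following sketch. The config field names, dataset choice and model id are assumptions for this example; `flair-fine-tuner.py` is the authoritative implementation:

```python
import json

import flair
from flair.datasets import NER_HIPE_2022
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Hypothetical config layout; the real schema is defined by flair-fine-tuner.py.
with open("configs/hipe2020/de/hmteams.json") as f:
    config = json.load(f)

flair.set_seed(config.get("seed", 42))

# Load one of the HIPE-2022 datasets shipped with Flair.
corpus = NER_HIPE_2022(dataset_name="ajmc", language="de")

# Fine-tunable transformer embeddings; the hmTEAMS model id is an assumption here.
embeddings = TransformerWordEmbeddings(
    model=config.get("hf_model", "hmteams/teams-base-historic-multilingual-discriminator"),
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
    use_context=True,
)

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=corpus.make_label_dictionary(label_type="ner"),
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
    "resources/taggers/ner-hmteams",
    learning_rate=config.get("learning_rate", 5e-5),
    mini_batch_size=config.get("batch_size", 8),
    max_epochs=config.get("epochs", 10),
)
```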

Additionally, we provide a script that uses Hugging Face AutoTrain Advanced (Space Runner) to fine-tune models. The following snippet shows an example:

```bash
$ pip3 install autotrain-advanced
$ export HF_TOKEN="" # Get token from: https://huggingface.co/settings/tokens
$ autotrain spacerunner --project-name "flair-hipe2022-de-hmteams" \
  --script-path /home/stefan/Repositories/hmTEAMS/bench \
  --username stefan-it \
  --token $HF_TOKEN \
  --backend spaces-t4s \
  --env "CONFIG=configs/hipe2020/de/hmteams.json;HF_TOKEN=$HF_TOKEN;REPO_NAME=stefan-it/autotrain-flair-hipe2022-de-hmteams"
```

The concrete implementation can be found in script.py.
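For reference, a simplified sketch of what such an entrypoint does: it reads the variables passed via the `--env` flag and delegates to the config-driven fine-tuner. The argument handling below is an assumption for this sketch; see `script.py` for the actual logic:

```python
import os
import subprocess

# Variables passed through the --env flag of `autotrain spacerunner`.
config = os.environ["CONFIG"]        # e.g. configs/hipe2020/de/hmteams.json
hf_token = os.environ["HF_TOKEN"]    # token for gated models and uploads
repo_name = os.environ["REPO_NAME"]  # repo that receives the fine-tuned models

# Delegate to the config-driven fine-tuner; passing the config path as a
# positional argument is an assumption for this sketch.
subprocess.run(["python3", "flair-fine-tuner.py", config], check=True)
```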

# Benchmark

We test our pretrained language models on various datasets from HIPE-2020, HIPE-2022 and Europeana. The following table gives an overview of the datasets used.

| Language | Datasets                                   |
|:---------|:-------------------------------------------|
| English  | AjMC - TopRes19th                          |
| German   | AjMC - NewsEye                             |
| French   | AjMC - ICDAR-Europeana - LeTemps - NewsEye |
| Finnish  | NewsEye                                    |
| Swedish  | NewsEye                                    |
| Dutch    | ICDAR-Europeana                            |

## Results

We report the averaged F1-score over 5 runs with different seeds on the development set:

| Model | English AjMC | German AjMC | French AjMC | German NewsEye | French NewsEye | Finnish NewsEye | Swedish NewsEye | Dutch ICDAR | French ICDAR | French LeTemps | English TopRes19th | Avg. |
|:------|:-------------|:------------|:------------|:---------------|:---------------|:----------------|:----------------|:------------|:-------------|:---------------|:-------------------|:-----|
| hmBERT (32k) (Schweter et al.) | 85.36 ± 0.94 | 89.08 ± 0.09 | 85.10 ± 0.60 | 39.65 ± 1.01 | 81.47 ± 0.36 | 77.28 ± 0.37 | 82.85 ± 0.83 | 82.11 ± 0.61 | 77.21 ± 0.16 | 65.73 ± 0.56 | 80.94 ± 0.86 | 76.98 |
| hmTEAMS (Ours) | 86.41 ± 0.36 | 88.64 ± 0.42 | 85.41 ± 0.67 | 41.51 ± 2.82 | 83.20 ± 0.79 | 79.27 ± 1.88 | 82.78 ± 0.60 | 88.21 ± 0.39 | 78.03 ± 0.39 | 66.71 ± 0.46 | 81.36 ± 0.59 | 78.32 |