---
language: 'no'
license: cc-by-4.0
pipeline_tag: fill-mask
tags:
  - norwegian
  - bert
thumbnail: https://raw.githubusercontent.com/ltgoslo/NorBERT/main/Norbert.png
---

# Quickstart

## Release 1.1 (February 13, 2021)

Please also check our newer models, NorBERT 2 and NorBERT 3, which were trained on much larger corpora and with better architectures.

Download the model here:

- Cased Norwegian BERT Base: 216.zip
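Once downloaded (or pulled from the Hugging Face Hub), the model can be used with the standard 🤗 Transformers fill-mask pipeline. A minimal sketch; the repository id `ltgoslo/norbert` is assumed from this model card's location, and the example sentence is for illustration only:

```python
import os

def masked_sentence(mask_token: str = "[MASK]") -> str:
    """Build a Norwegian fill-mask probe using the tokenizer's mask token."""
    return f"Nå ønsker de seg en {mask_token} i julegave."

# The model download is gated behind an environment variable so that the
# snippet can be run without network access; set RUN_NORBERT_DEMO=1 to try it.
if os.environ.get("RUN_NORBERT_DEMO"):
    from transformers import pipeline

    fill = pipeline("fill-mask", model="ltgoslo/norbert")  # repo id assumed
    for pred in fill(masked_sentence(fill.tokenizer.mask_token)):
        print(pred["token_str"], round(pred["score"], 3))
```

The pipeline returns the top candidate tokens for the masked position together with their probabilities.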

More about the NorBERT training corpora and the training procedure: http://norlm.nlpl.eu/

Associated code: https://github.com/ltgoslo/NorBERT

Check this paper for more details:

Andrey Kutuzov, Jeremy Barnes, Erik Velldal, Lilja Øvrelid, Stephan Oepen. Large-Scale Contextualised Language Modelling for Norwegian, NoDaLiDa'21 (2021)

NorBERT was trained as part of NorLM, a joint initiative of the projects EOSC-Nordic (European Open Science Cloud) and SANT (Sentiment Analysis for Norwegian), coordinated by the Language Technology Group (LTG) at the University of Oslo.

The computations were performed on resources provided by UNINETT Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway.

## NorBERT-3

In 2023, we released a new family of NorBERT-3 language models for Norwegian. In general, we now recommend using these models instead of NorBERT 1.

NorBERT-3 is described in detail in this paper: NorBench – A Benchmark for Norwegian Language Models (Samuel et al., NoDaLiDa 2023)
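Loading the newer generation is analogous. A hedged sketch, assuming the repository id `ltg/norbert3-base` (not stated in this card) and that the NorBERT-3 repositories ship a custom model class, which requires `trust_remote_code=True` in 🤗 Transformers:

```python
import os

MODEL_ID = "ltg/norbert3-base"  # assumed repo id, for illustration only

# Gated behind an environment variable to avoid an unconditional download;
# set RUN_NORBERT3_DEMO=1 to actually fetch and load the model.
if os.environ.get("RUN_NORBERT3_DEMO"):
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # NorBERT-3 uses a custom architecture defined in the model repository,
    # hence trust_remote_code=True is needed here.
    model = AutoModelForMaskedLM.from_pretrained(MODEL_ID, trust_remote_code=True)
```

Only enable `trust_remote_code` for repositories you trust, since it executes code distributed with the model.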