DictaBERT
A collection of state-of-the-art language models for Hebrew, fine-tuned for various tasks, as detailed in the article: https://arxiv.org/abs/2308.16687
State-of-the-art language model for Hebrew, released as part of the DictaBERT suite (https://arxiv.org/abs/2308.16687).
This is the BERT-large model fine-tuned for the named-entity-recognition task.
For the BERT models fine-tuned for other tasks, see the DictaBERT collection above.
Sample usage:
from transformers import pipeline
oracle = pipeline('ner', model='dicta-il/dictabert-large-ner', aggregation_strategy='simple')
# If we set aggregation_strategy to 'simple', we need to define a decoder for the tokenizer.
# Note that the last wordpiece of a group will still be emitted.
from tokenizers.decoders import WordPiece
oracle.tokenizer.backend_tokenizer.decoder = WordPiece()
sentence = '''דוד בן-גוריון (16 באוקטובר 1886 - ו' בכסלו תשל"ד) היה מדינאי ישראלי וראש הממשלה הראשון של מדינת ישראל.'''
oracle(sentence)
Output:
[
  {
    "entity_group": "PER",
    "score": 0.9998988,
    "word": "דוד בן - גוריון",
    "start": 0,
    "end": 13
  },
  {
    "entity_group": "TIMEX",
    "score": 0.99989706,
    "word": "16 באוקטובר 1886",
    "start": 15,
    "end": 31
  },
  {
    "entity_group": "TIMEX",
    "score": 0.99991614,
    "word": "ו' בכסלו תשל\"ד",
    "start": 34,
    "end": 48
  },
  {
    "entity_group": "TTL",
    "score": 0.9931756,
    "word": "וראש הממשלה",
    "start": 68,
    "end": 79
  },
  {
    "entity_group": "GPE",
    "score": 0.9995702,
    "word": "ישראל",
    "start": 96,
    "end": 101
  }
]
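The start and end fields are character offsets into the input string, so each span can be recovered by slicing the original sentence. The word field, by contrast, is reconstructed by the WordPiece decoder and may differ slightly from the raw surface form (note the spaces around the hyphen in "דוד בן - גוריון"). A minimal sketch, continuing the sample above (with oracle and sentence already defined):

for ent in oracle(sentence):
    # Slice the original input with the character offsets to get the exact surface form
    span = sentence[ent['start']:ent['end']]
    print(f"{ent['entity_group']}\t{ent['score']:.4f}\t{span}")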
If you use DictaBERT in your research, please cite "DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew":
BibTeX:
@misc{shmidman2023dictabert,
      title={DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew},
      author={Shaltiel Shmidman and Avi Shmidman and Moshe Koppel},
      year={2023},
      eprint={2308.16687},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
This work is licensed under a Creative Commons Attribution 4.0 International License.