
rubert-entity-embedder

RuBERT Entity Embedder (Russian, cased, 12 layers, 768 hidden units, 12 heads, 180M parameters) is based on DeepPavlov's RuBERT-base-cased. It is fine-tuned as a Siamese neural network to build effective token embeddings for 29 entity classes in Russian [1]. This fine-tuning procedure is the first stage of a two-stage fine-tuning scheme for a BERT-based language model, aimed at more robust named entity recognition [2].
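
Since the model is a standard BERT encoder, its token embeddings can be extracted with the Transformers library. Below is a minimal sketch: the repository id `bond005/rubert-entity-embedder` is an assumption (substitute the actual Hub id if it differs), and the expectation that tokens of the same entity class end up close in cosine space follows from the Siamese fine-tuning objective described above.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed repository id; replace with the actual Hub id of this model.
MODEL_ID = "bond005/rubert-entity-embedder"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

text = "Илон Маск основал компанию SpaceX в 2002 году."  # "Elon Musk founded SpaceX in 2002."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional embedding per subword token.
token_embeddings = outputs.last_hidden_state.squeeze(0)  # (seq_len, 768)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# Pairwise cosine similarities between token embeddings; tokens belonging
# to the same entity class should score higher than unrelated tokens.
normed = torch.nn.functional.normalize(token_embeddings, dim=-1)
similarity = normed @ normed.T  # (seq_len, seq_len)

for token, row in zip(tokens, similarity):
    print(token, row.round(decimals=2).tolist())
```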

[1]: Artemova, E., Zmeev, M., Loukachevitch, N.V., Rozhkov, I.S., Batura, T., Ivanov, V., & Tutubalina, E. (2022). RuNNE-2022 Shared Task: Recognizing Nested Named Entities. Proceedings of the International Conference “Dialogue 2022”. https://www.dialog-21.ru/media/5747/artemovaelplusetal109.pdf

[2]: Bondarenko, I. (2022). Contrastive fine-tuning to improve generalization in deep NER. Proceedings of the International Conference “Dialogue 2022”. https://www.dialog-21.ru/media/5751/bondarenkoi113.pdf
