
E5-large-en-ru

Model info

This is a vocabulary-pruned version of intfloat/multilingual-e5-large: it keeps only Russian and English tokens, so the model is noticeably smaller while behaving the same on texts in these two languages.
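
Conceptually, vocabulary pruning keeps only the embedding rows for the tokens you need and remaps token ids accordingly. Below is a minimal sketch of the idea, not the exact procedure used to build this model; the toy corpus is a stand-in for real English/Russian token statistics, and rebuilding the SentencePiece tokenizer itself is out of scope:

import torch
from transformers import XLMRobertaModel, XLMRobertaTokenizer

model = XLMRobertaModel.from_pretrained('intfloat/multilingual-e5-large')
tokenizer = XLMRobertaTokenizer.from_pretrained('intfloat/multilingual-e5-large')

# Toy corpus standing in for real English/Russian token statistics.
corpus = ['Hello, world!', 'Привет, мир!']
kept_ids = sorted(
    {tid for text in corpus for tid in tokenizer(text)['input_ids']}
    | set(tokenizer.all_special_ids)  # special tokens must survive pruning
)

# Slice the embedding matrix down to the kept rows.
old_weights = model.get_input_embeddings().weight.data
new_embedding = torch.nn.Embedding(len(kept_ids), old_weights.size(1))
new_embedding.weight.data = old_weights[kept_ids].clone()
model.set_input_embeddings(new_embedding)
model.config.vocab_size = len(kept_ids)
# A real pruning also rebuilds the tokenizer so its ids match the new rows.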

Size

| | intfloat/multilingual-e5-large | d0rj/e5-large-en-ru |
| --- | --- | --- |
| Model size (MB) | 2135.82 | 1394.8 |
| Params (count) | 559,890,946 | 365,638,146 |
| Word embedding params (count) | 256,002,048 | 61,749,248 |

Performance

Performance on the SberQuAD dev benchmark is on par with the original model.

| Metric on SberQuAD (4122 questions) | intfloat/multilingual-e5-large | d0rj/e5-large-en-ru |
| --- | --- | --- |
| recall@3 | 0.787239204269772 | 0.7882096069868996 |
| map@3 | 0.7230713245997101 | 0.723192624939351 |
| mrr@3 | 0.7241630276564784 | 0.7243651948892132 |
| recall@5 | 0.8277535177098496 | 0.8284813197476953 |
| map@5 | 0.7301603186155587 | 0.7302573588872716 |
| mrr@5 | 0.7334667637069385 | 0.7335718906679607 |
| recall@10 | 0.8716642406598738 | 0.871421639980592 |
| map@10 | 0.7314774917730316 | 0.7313000338687417 |
| mrr@10 | 0.7392223685527911 | 0.7391814537556898 |

Usage

  • Use dot product distance for retrieval.

  • Use the "query: " and "passage: " prefixes, respectively, for asymmetric tasks such as passage retrieval in open QA or ad-hoc information retrieval.

  • Use the "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, and paraphrase retrieval (see the sketch after this list).

  • Use the "query: " prefix if you want to use embeddings as features, e.g. for linear-probing classification or clustering.
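
For symmetric tasks, both texts get the "query: " prefix. A minimal sketch via sentence-transformers (the example sentences are made up):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('d0rj/e5-large-en-ru')
texts = [
    'query: The weather is lovely today.',
    'query: Сегодня прекрасная погода.',
]
# Both sides use "query: " because neither text plays the "passage" role.
embeddings = model.encode(texts, normalize_embeddings=True)
print(util.cos_sim(embeddings[0], embeddings[1]))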

transformers

Direct usage

import torch.nn.functional as F
from torch import Tensor
from transformers import XLMRobertaTokenizer, XLMRobertaModel


def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    # Mean-pool token states, zeroing out padding positions first.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


input_texts = [
  'query: How does a corporate website differ from a business card website?',
  'query: Где был создан первый троллейбус?',
  'passage: The first trolleybus was created in Germany by engineer Werner von Siemens, probably influenced by the idea of his brother, Dr. Wilhelm Siemens, who lived in England, expressed on May 18, 1881 at the twenty-second meeting of the Royal Scientific Society. The electrical circuit was carried out by an eight-wheeled cart (Kontaktwagen) rolling along two parallel contact wires. The wires were located quite close to each other, and in strong winds they often overlapped, which led to short circuits. An experimental trolleybus line with a length of 540 m (591 yards), opened by Siemens & Halske in the Berlin suburb of Halensee, operated from April 29 to June 13, 1882.',
  'passage: Корпоративный сайт — содержит полную информацию о компании-владельце, услугах/продукции, событиях в жизни компании. Отличается от сайта-визитки и представительского сайта полнотой представленной информации, зачастую содержит различные функциональные инструменты для работы с контентом (поиск и фильтры, календари событий, фотогалереи, корпоративные блоги, форумы). Может быть интегрирован с внутренними информационными системами компании-владельца (КИС, CRM, бухгалтерскими системами). Может содержать закрытые разделы для тех или иных групп пользователей — сотрудников, дилеров, контрагентов и пр.',
]

tokenizer = XLMRobertaTokenizer.from_pretrained('d0rj/e5-large-en-ru', use_cache=False)
model = XLMRobertaModel.from_pretrained('d0rj/e5-large-en-ru', use_cache=False)

batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# L2-normalize so that the dot product equals cosine similarity.
embeddings = F.normalize(embeddings, p=2, dim=1)
# Score each query (rows 0-1) against each passage (rows 2-3).
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[68.59542846679688, 81.75910949707031], [80.36100769042969, 64.77748107910156]]

Pipeline

from transformers import pipeline


pipe = pipeline('feature-extraction', model='d0rj/e5-large-en-ru')
embeddings = pipe(input_texts, return_tensors=True)
embeddings[0].size()
# torch.Size([1, 17, 1024])
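
Note that the pipeline returns per-token hidden states, so a sentence embedding still needs pooling. A minimal sketch continuing the example above (each text is encoded separately here, so there is no padding to mask):

import torch.nn.functional as F

token_states = embeddings[0]  # shape (1, seq_len, 1024)
sentence_embedding = F.normalize(token_states.mean(dim=1), p=2, dim=1)
sentence_embedding.size()
# torch.Size([1, 1024])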

sentence-transformers

from sentence_transformers import SentenceTransformer


sentences = [
    'query: Что такое круглые тензоры?',
    'passage: Abstract: we introduce a novel method for compressing round tensors based on their inherent radial symmetry. We start by generalising PCA and eigen decomposition on round tensors...',
]

model = SentenceTransformer('d0rj/e5-large-en-ru')
embeddings = model.encode(sentences, convert_to_tensor=True)
embeddings.size()
# torch.Size([2, 1024])
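
To score the pair, one option is cosine similarity via sentence_transformers.util (a usage sketch):

from sentence_transformers import util

print(util.cos_sim(embeddings[0], embeddings[1]))
# a (1, 1) tensor with the query-passage similarity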