Install py_vncorenlp for word segmentation:

```bash
pip install py_vncorenlp
```

Install sentence-transformers (recommended; usage shown below):

```bash
pip install sentence-transformers
```

Install transformers (optional; usage shown below):

```bash
pip install transformers
```
Word-segment the query and candidate passages with VnCoreNLP before scoring:

```python
import py_vncorenlp

# Download the VnCoreNLP word-segmentation model and load the segmenter
py_vncorenlp.download_model(save_dir='/absolute/path/to/vncorenlp')
rdrsegmenter = py_vncorenlp.VnCoreNLP(annotators=["wseg"], save_dir='/absolute/path/to/vncorenlp')

query = "Trường UIT là gì?"  # "What is UIT?"
sentences = [
    "Trường Đại học Công nghệ Thông tin có tên tiếng Anh là University of Information Technology (viết tắt là UIT) là thành viên của Đại học Quốc Gia TP.HCM.",
    "Trường Đại học Kinh tế – Luật (tiếng Anh: University of Economics and Law – UEL) là trường đại học đào tạo và nghiên cứu khối ngành kinh tế, kinh doanh và luật hàng đầu Việt Nam.",
    "Quĩ uỷ thác đầu tư (tiếng Anh: Unit Investment Trusts; viết tắt: UIT) là một công ty đầu tư mua hoặc nắm giữ một danh mục đầu tư cố định"
]

# word_segment returns a list of segmented sentences, so join them back into one string
tokenized_query = " ".join(rdrsegmenter.word_segment(query))
tokenized_sentences = [" ".join(rdrsegmenter.word_segment(sent)) for sent in sentences]

tokenized_pairs = [[tokenized_query, sent] for sent in tokenized_sentences]
```
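As a quick sanity check, you can inspect the segmented text; VnCoreNLP joins the syllables of each multi-syllable word with underscores (the outputs shown in the comments are illustrative):

```python
# Multi-syllable Vietnamese words are joined with "_" after segmentation
print(tokenized_query)         # e.g. "Trường UIT là gì ?"
print(tokenized_sentences[0])  # e.g. "Trường Đại_học Công_nghệ Thông_tin ..."
```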
Usage with sentence-transformers:

```python
MODEL_ID = 'itdainb/PhoRanker'
MAX_LENGTH = 256

from sentence_transformers import CrossEncoder

model = CrossEncoder(MODEL_ID, max_length=MAX_LENGTH)

# For fp16 usage
model.model.half()

scores = model.predict(tokenized_pairs)

# 0.982, 0.2444, 0.9253
print(scores)
```
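Each score is an independent relevance estimate for one query–passage pair; to get an actual ranking, sort the candidate sentences by score. A minimal sketch using the `scores` and `sentences` defined above:

```python
# Rank the original sentences by predicted relevance, highest first
ranked = sorted(zip(scores, sentences), key=lambda pair: pair[0], reverse=True)
for score, sent in ranked:
    print(f"{score:.4f}\t{sent[:60]}...")
```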
Usage with transformers:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# For fp16 usage
model.half()

features = tokenizer(tokenized_pairs, padding=True, truncation="longest_first", return_tensors="pt", max_length=MAX_LENGTH)

model.eval()
with torch.no_grad():
    model_predictions = model(**features, return_dict=True)
    logits = model_predictions.logits
    logits = torch.nn.Sigmoid()(logits)
    scores = [logit[0] for logit in logits]

# 0.9819, 0.2444, 0.9253
print(scores)
```
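If a GPU is available, inference is considerably faster with the model and inputs moved onto it; a minimal sketch using standard PyTorch device handling (the tokenization above is unchanged):

```python
# Run the same scoring on GPU when available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
features = {name: tensor.to(device) for name, tensor in features.items()}

with torch.no_grad():
    logits = model(**features, return_dict=True).logits
    scores = torch.sigmoid(logits)[:, 0].tolist()
print(scores)
```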
In the following table, we compare several pre-trained cross-encoders on the MS MMarco Passage Reranking – Vi – Dev dataset (the Vietnamese portion of the multilingual MS MARCO benchmark).
| Model Name | NDCG@3 | MRR@3 | NDCG@5 | MRR@5 | NDCG@10 | MRR@10 | Docs/Sec |
|---|---|---|---|---|---|---|---|
| itdainb/PhoRanker | 0.6625 | 0.6458 | 0.7147 | 0.6731 | 0.7422 | 0.6830 | 15 |
| amberoad/bert-multilingual-passage-reranking-msmarco | 0.4634 | 0.5233 | 0.5041 | 0.5383 | 0.5416 | 0.5523 | 22 |
| kien-vu-uet/finetuned-phobert-passage-rerank-best-eval | 0.0963 | 0.0883 | 0.1396 | 0.1131 | 0.1681 | 0.1246 | 15 |
| BAAI/bge-reranker-v2-m3 | 0.6087 | 0.5841 | 0.6513 | 0.6062 | 0.6872 | 0.6209 | 3.51 |
| BAAI/bge-reranker-v2-gemma | 0.6088 | 0.5908 | 0.6446 | 0.6108 | 0.6785 | 0.6249 | 1.29 |
Note: Runtime was computed on an A100 GPU with fp16.
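For reference, the ranking metrics in the table can be computed as follows; this is an illustrative sketch, not the evaluation script used above, and it assumes binary relevance labels listed in model-ranked order:

```python
import math

def mrr_at_k(relevances, k=10):
    # Reciprocal rank of the first relevant document within the top k
    for rank, rel in enumerate(relevances[:k], start=1):
        if rel > 0:
            return 1.0 / rank
    return 0.0

def ndcg_at_k(relevances, k=10):
    # DCG of the ranking divided by the DCG of the ideal ranking
    dcg = sum(rel / math.log2(rank + 1) for rank, rel in enumerate(relevances[:k], start=1))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(rank + 1) for rank, rel in enumerate(ideal[:k], start=1))
    return dcg / idcg if idcg > 0 else 0.0

# relevances: labels of the documents in model-ranked order, e.g. second hit relevant
print(mrr_at_k([0, 1, 0], k=10), ndcg_at_k([0, 1, 0], k=10))
```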
If you find this work useful, please cite it as:
```bibtex
@misc{PhoRanker,
  title        = {PhoRanker: A Cross-encoder Model for Vietnamese Text Ranking},
  author       = {Dai Nguyen Ba},
  note         = {ORCID: 0009-0008-8559-3154},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/itdainb/PhoRanker}},
}
```