# BM25S Index

This is a BM25S index created with the `bm25s` library (version `0.2.3`), an ultra-fast implementation of BM25. It can be used for lexical retrieval tasks.

BM25S Related Links:

- [GitHub Repository](https://github.com/xhluca/bm25s)
- [Technical Report (arXiv)](https://arxiv.org/abs/2407.03618)

## Installation

You can install the `bm25s` library with `pip`:
pip install "bm25s==0.2.3"
# Include extra dependencies like stemmer
pip install "bm25s[full]==0.2.3"
# For huggingface hub usage
pip install huggingface_hub
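
To verify which version ended up in your environment (for example, that it matches the `0.2.3` pin above), you can query the installed package metadata. This is just a quick sanity check, not part of the index itself:

```python
from importlib.metadata import version

# Should print 0.2.3 if the pinned install above succeeded
print(version("bm25s"))
```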

## Loading a `bm25s` index
You can use this index for information retrieval tasks. Here is an example:

```python
import bm25s
from bm25s.hf import BM25HF

# Load the index
retriever = BM25HF.load_from_hub("ylkhayat/bm25s-caselaw-us-and-veterans")

# You can retrieve now
query = "a cat is a feline"
results = retriever.retrieve(bm25s.tokenize(query), k=3)
```
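
The `retrieve` call above returns the top-k matches. As a rough sketch of how you might inspect them (this assumes `retrieve` can be unpacked into documents and scores, as in the bm25s README examples; the exact contents depend on whether a corpus is attached to the retriever):

```python
# Unpack the top-k results and their BM25 scores (sketch; see caveat above)
docs, scores = retriever.retrieve(bm25s.tokenize(query), k=3)

for i in range(docs.shape[1]):
    # Row 0 corresponds to the single query; column i is the i-th best hit
    print(f"Rank {i + 1} (score={scores[0, i]:.2f}): {docs[0, i]}")
```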

## Saving a `bm25s` index

You can save a `bm25s` index to the Hugging Face Hub. Here is an example:

```python
import bm25s
from bm25s.hf import BM25HF

corpus = [
    "a cat is a feline and likes to purr",
    "a dog is the human's best friend and loves to play",
    "a bird is a beautiful animal that can fly",
    "a fish is a creature that lives in water and swims",
]

retriever = BM25HF(corpus=corpus)
retriever.index(bm25s.tokenize(corpus))

token = None  # You can get a token from the Hugging Face website
retriever.save_to_hub("ylkhayat/bm25s-caselaw-us-and-veterans", token=token)
```

## Advanced usage

You can leverage more advanced features of the BM25S library during `load_from_hub`:

```python
# Load corpus and index in memory-map (mmap=True) to reduce memory
retriever = BM25HF.load_from_hub("ylkhayat/bm25s-caselaw-us-and-veterans", load_corpus=True, mmap=True)

# Load a different branch/revision
retriever = BM25HF.load_from_hub("ylkhayat/bm25s-caselaw-us-and-veterans", revision="main")

# Change directory where the local files should be downloaded
retriever = BM25HF.load_from_hub("ylkhayat/bm25s-caselaw-us-and-veterans", local_dir="/path/to/dir")

# Load private repositories with a token:
retriever = BM25HF.load_from_hub("ylkhayat/bm25s-caselaw-us-and-veterans", token=token)
```
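
When the corpus is loaded alongside the index (`load_corpus=True`), retrieval can return the stored documents rather than bare indices. A minimal sketch combining this with memory-mapping (it assumes the corpus was uploaded together with the index, as in the saving example above):

```python
import bm25s
from bm25s.hf import BM25HF

# Memory-mapped load with the stored corpus attached (sketch; see assumptions above)
retriever = BM25HF.load_from_hub(
    "ylkhayat/bm25s-caselaw-us-and-veterans", load_corpus=True, mmap=True
)

# With a corpus attached, each result is a stored corpus record rather than a bare index
results, scores = retriever.retrieve(bm25s.tokenize("a cat is a feline"), k=3)
print(results[0, 0], scores[0, 0])
```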

## Tokenizer

If you have saved a `Tokenizer` object with the index using the following approach:

```python
from bm25s.hf import TokenizerHF

token = "your_hugging_face_token"
tokenizer = TokenizerHF(corpus=corpus, stopwords="english")
tokenizer.save_to_hub("ylkhayat/bm25s-caselaw-us-and-veterans", token=token)

# and stopwords too
tokenizer.save_stopwords_to_hub("ylkhayat/bm25s-caselaw-us-and-veterans", token=token)
```
Then, you can load the tokenizer using the following code:

```python
from bm25s.hf import TokenizerHF

tokenizer = TokenizerHF(corpus=corpus, stopwords=[])
tokenizer.load_vocab_from_hub("ylkhayat/bm25s-caselaw-us-and-veterans", token=token)
tokenizer.load_stopwords_from_hub("ylkhayat/bm25s-caselaw-us-and-veterans", token=token)
```
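
Once loaded, the tokenizer can be used in place of `bm25s.tokenize`, so that queries are tokenized with the same vocabulary and stopwords as the index. A minimal sketch (it assumes the `Tokenizer.tokenize` method with `update_vocab=False`, as shown in the bm25s README):

```python
# Tokenize a query with the hub-loaded vocabulary, without extending it
query_tokens = tokenizer.tokenize(["a cat is a feline"], update_vocab=False)

# Retrieve with a retriever loaded from the same repository
results = retriever.retrieve(query_tokens, k=3)
```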

## Stats
This dataset was created using the following data:

| Statistic | Value |
|---|---|
| Number of documents | 366752 |
| Number of tokens | 57736416 |
| Average tokens per document | 157.43 |

## Parameters
The index was created with the following parameters:

| Parameter | Value |
|---|---|
| k1 | 1.5 |
| b | 0.75 |
| delta | 0.5 |
| method | lucene |
| idf method | lucene |
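
If you want to rebuild an index with the same scoring configuration on your own corpus, these values map onto the retriever's constructor. A minimal sketch (it assumes the `k1`, `b`, `delta`, `method`, and `idf_method` keyword arguments mirroring `bm25s.BM25`; note that `delta` only affects the BM25L/BM25+ variants, not the `lucene` method):

```python
import bm25s
from bm25s.hf import BM25HF

corpus = ["your documents go here"]

# Sketch: a retriever configured like this index (assumed constructor keywords)
retriever = BM25HF(
    corpus=corpus,
    k1=1.5,
    b=0.75,
    delta=0.5,
    method="lucene",
    idf_method="lucene",
)
retriever.index(bm25s.tokenize(corpus))
```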

## Citation

To cite `bm25s`, please use the following BibTeX entry:

```bibtex
@misc{lu_2024_bm25s,
  title={BM25S: Orders of magnitude faster lexical search via eager sparse scoring},
  author={Xing Han Lù},
  year={2024},
  eprint={2407.03618},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2407.03618},
}
```