Integrate with Sentence Transformers

#3 opened by tomaarsen (HF staff)

Hello @jxm (& @srush )!

Preface

Thanks for improving models in the <300M parameter range. I always feel like this area doesn't get enough love, as it gets buried under larger models on the leaderboards. In practice, small models are extremely valuable - there's a reason the most popular Sentence Transformer models are tiny.

Pull Request overview

  • Integrate this model with Sentence Transformers (inference only)
  • Update the README to show the usage with Sentence Transformers
  • Update the Transformers snippet slightly (e.g. update the device)
  • Add transformers/sentence-transformers tags to the README

Details

I've integrated this model in Sentence Transformers by relying on the new Custom Modules feature, and in particular its keyword argument passthrough. In short, this feature allows model authors to create modules for Sentence Transformers that can replace some of the usual ones. In this case, I've replaced the Transformer and Pooling modules with a single new custom module that calls model.first_stage_model and model.second_stage_model. Through the keyword argument passthrough, users can pass custom kwargs to SentenceTransformer.encode and they'll be forwarded to the custom module's forward method.
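Not the actual implementation - a real Sentence Transformers custom module subclasses torch.nn.Module and operates on tensors - but here is a minimal, framework-free sketch of the passthrough idea. The names TwoStageModule and ToyEncoder, and the string "embeddings", are illustrative stand-ins:

```python
# Minimal framework-free sketch of the "keyword argument passthrough" idea.
# In real Sentence Transformers the custom module subclasses torch.nn.Module
# and operates on tensors; here plain strings stand in for embeddings.

class TwoStageModule:
    """Toy stand-in for a custom module wrapping the two model stages."""

    def forward(self, features, dataset_embeddings=None):
        texts = features["texts"]
        if dataset_embeddings is None:
            # First stage: produce "dataset embeddings" from the context docs.
            return [f"stage1({t})" for t in texts]
        # Second stage: embed conditioned on the first-stage output.
        return [f"stage2({t}|ctx={len(dataset_embeddings)})" for t in texts]


class ToyEncoder:
    """Toy stand-in for SentenceTransformer's encode kwarg passthrough."""

    def __init__(self, module):
        self.module = module

    def encode(self, texts, **kwargs):
        # Any extra kwargs (e.g. dataset_embeddings) are forwarded verbatim
        # to the custom module's forward -- that is the passthrough feature.
        return self.module.forward({"texts": texts}, **kwargs)


model = ToyEncoder(TwoStageModule())
ctx = model.encode(["doc a", "doc b"])                   # first stage
emb = model.encode(["query x"], dataset_embeddings=ctx)  # second stage
print(emb)
```

The point is only the plumbing: `encode` doesn't need to know about `dataset_embeddings` at all, it just forwards unknown kwargs to the module.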

Additionally, I registered the instructions as prompts in one of the config files, so users can simply pass prompt_name="query" rather than worrying about adding exactly the correct prompt themselves.
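Roughly, this works like the sketch below: a named prompt is looked up in the config and prepended to the input text. The prompt strings here are placeholders, not the model's actual instructions (those live in the config file):

```python
# Toy sketch of how prompt_name resolves to a configured prompt string.
# The values below are placeholders, NOT the model's real instructions.
prompts = {
    "query": "<query instruction> ",        # placeholder
    "document": "<document instruction> ",  # placeholder
}

def apply_prompt(text, prompt_name=None):
    # Look up the configured prompt and prepend it, mirroring what
    # encode(..., prompt_name=...) spares the user from doing by hand.
    prefix = prompts[prompt_name] if prompt_name else ""
    return prefix + text

print(apply_prompt("what is cde?", prompt_name="query"))
```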

This means that the inference of your model becomes as simple as:

from sentence_transformers import SentenceTransformer

# 1. Load the Sentence Transformer model
model = SentenceTransformer(".", trust_remote_code=True)

...

# 3. First stage: embed the context docs
dataset_embeddings = model.encode(
    context_docs,
    prompt_name="document",
    convert_to_tensor=True,
)

# 4. Second stage: embed the docs and queries
doc_embeddings = model.encode(
    docs,
    prompt_name="document",
    dataset_embeddings=dataset_embeddings,
    convert_to_tensor=True,
)
query_embeddings = model.encode(
    queries,
    prompt_name="query",
    dataset_embeddings=dataset_embeddings,
    convert_to_tensor=True,
)

(See the full snippet in the README.md diff)
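Once both sides are embedded, retrieval is just a similarity search over the second-stage embeddings. Recent Sentence Transformers versions expose a similarity helper on the model for this; as a self-contained illustration, here is a plain-Python cosine-similarity ranking over toy vectors standing in for `query_embeddings` and `doc_embeddings`:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy embeddings standing in for query_embeddings / doc_embeddings above.
query_embedding = [1.0, 0.0]
doc_embeddings = [[0.9, 0.1], [0.0, 1.0]]

scores = [cosine(query_embedding, d) for d in doc_embeddings]
best = max(range(len(scores)), key=scores.__getitem__)
print(best)  # index of the most similar document
```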

I chose to make Sentence Transformers the default option in the README because it heavily simplifies the usage, but you're of course free to change this.

cc @mrm8488 @osanseviero @nielsr as you expressed an interest in this model/integration

  • Tom Aarsen
tomaarsen changed pull request status to open
jxm (Owner)

This is super impressive. Thanks for implementing this; I didn't actually know if it was even possible. This should be very useful for a lot of people. Great work!

jxm changed pull request status to merged
