keeping data local
Is there a way for me to use the pretrained model locally so my private data does not get sent over the API?
Yes, you can download and run the model locally: https://huggingface.co/nomic-ai/nomic-embed-text-v1#transformers
Sorry for the stupid question. I tried doing this and turned off my internet and got an error. Is internet access required to verify access through the API?
You need to download the model first via the internet. Then you can run the model locally, and no data will be shared since it's all running locally. If you want to use the API, your data will be sent via a request, but no data is stored.
Ok thanks for the clarification
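To make the "download once, then fully local" workflow above concrete, here is a minimal sketch. It assumes the model was already fetched in an earlier online run; the `HF_HUB_OFFLINE` / `TRANSFORMERS_OFFLINE` environment variables then force the Hugging Face libraries to use only the local cache, so no request leaves the machine.

```python
import os

# The model must be downloaded once while online, e.g. with:
#   SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)
# After that, set these BEFORE importing transformers / sentence_transformers
# to guarantee everything is served from the local cache:
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

print(os.environ["HF_HUB_OFFLINE"])  # later imports of transformers honor this
```

Any `from_pretrained`-style load after this point will fail fast instead of silently phoning home if the cached files are missing.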
Using AutoModel and SentenceTransformer yields different results than calling the Nomic API as follows:
from nomic import embed
sentences = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?']
output = embed.text(
    texts=sentences,
    model='nomic-embed-text-v1',
    dimensionality=768,
    task_type='search_document'
)
print(output)
Would you know why?
One cause could be a difference in precision. The model we serve is in fp16 and uses some optimized kernels that may produce slight differences compared to running locally. In practice the differences should be very small.
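To get a feel for how small fp16-vs-fp32 differences are, here is an illustrative sketch (not the actual serving stack): round-tripping an fp32 value through fp16 shows the order of magnitude of the rounding involved.

```python
import numpy as np

# Illustration only: store a locally computed fp32 value in fp16,
# as an fp16 serving stack would, and measure the rounding difference.
v32 = np.float32(0.05741467)   # one of the local fp32 embedding values above
v16 = np.float16(v32)          # the same value held in half precision
diff = abs(float(v16) - float(v32))
print(diff)  # tiny: the fp16 spacing near 0.057 is about 3e-5
```

Differences on this scale cannot explain the large discrepancies shown above, which is consistent with the prefix explanation that follows.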
The differences seem quite large. Perhaps it might be because of pooling / normalization?
from nomic import embed
sentences = ['search_query: What is TSNE?']
output = embed.text(
    texts=sentences,
    model='nomic-embed-text-v1',
    task_type='search_document'
)
print(output['embeddings'][0][:5])
[0.0062446594, 0.068847656, -0.010635376, -0.037719727, 0.018844604]
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)
embeddings = model.encode(sentences)
print(embeddings[0][:5])
[ 0.01095135 0.05741467 -0.01103645 -0.05894973 0.00402902]
Oh, for the API you don't need to add the prefixes; we handle that for you.
Oh okay, that explains it - thank you for the quick response!
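In other words, to match the API locally you prepend the task prefix yourself. A small sketch: the prefix strings come from the nomic-embed-text model card, and `with_prefix` is a hypothetical helper, not part of the `nomic` package.

```python
# Task prefixes from the nomic-embed-text model card. The API adds these
# automatically based on task_type; locally you must add them yourself.
PREFIXES = {
    "search_document": "search_document: ",
    "search_query": "search_query: ",
    "clustering": "clustering: ",
    "classification": "classification: ",
}

def with_prefix(texts, task_type):
    # Hypothetical helper mirroring the API's task_type handling locally.
    return [PREFIXES[task_type] + t for t in texts]

print(with_prefix(["What is TSNE?"], "search_document"))
# ['search_document: What is TSNE?']
```

The prefixed strings can then be passed straight to `model.encode` from sentence_transformers.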
I can't figure out how to locally generate the same embeddings as the API.
Using the API (if I specify inference_mode as local, it outputs the same result):
from nomic import embed
out = embed.text(texts=['What is TSNE?'], model='nomic-embed-text-v1.5')
print(out['embeddings'][0][:5])
[-0.0061798096, 0.040924072, -0.1315918, -0.033935547, 0.044036865]
Using sentence_transformers (transformers or Ollama give the same result):
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
embeddings = model.encode(["search_query: What is TSNE?"])
print(embeddings[:, :5])
array([[-0.08891347, 1.2341348 , -4.046868 , -1.0364417 , 0.85301745]],
dtype=float32)
embeddings = model.encode(["search_document: What is TSNE?"])
print(embeddings[:, :5])
array([[-0.12831774, 0.89559835, -2.836049 , -0.7255756 , 0.95987666]],
dtype=float32)
So Ollama, transformers, and sentence_transformers consistently generate the same embeddings, but they differ from nomic.embed's output. @zpn @arvind-kumar can you help me get a consistent result?
Hello!
I haven't verified this for you, but I think you might get the same results if you normalize the Sentence Transformers embeddings, e.g. by passing normalize_embeddings=True to model.encode.
- Tom Aarsen
Thanks @tomaarsen! That's it; after normalization, they are the same.
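For reference, the normalization in question is just L2 (unit-length) normalization, which `normalize_embeddings=True` applies under the hood. A minimal sketch, using the truncated values above purely for illustration (in practice you would normalize the full 768-dimensional vector, not a 5-value slice):

```python
import numpy as np

def l2_normalize(vec):
    # Divide by the Euclidean norm so the vector has unit length; this is
    # what normalize_embeddings=True does inside model.encode.
    return vec / np.linalg.norm(vec)

# The raw (unnormalized) sentence_transformers values from above,
# used here only to demonstrate the operation:
raw = np.array([-0.08891347, 1.2341348, -4.046868, -1.0364417, 0.85301745])
unit = l2_normalize(raw)
print(np.linalg.norm(unit))  # 1.0 (up to float rounding)
```

Cosine similarity is unaffected by this rescaling, which is why both the normalized and raw embeddings rank neighbors identically; only the raw values differ.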