# gte-large-sparse
This is the sparse ONNX variant of the gte-large embeddings model, created with DeepSparse Optimum for ONNX export/inference and with Neural Magic's Sparsify for one-shot INT8 quantization and 50% unstructured pruning.
Current list of sparse and quantized gte ONNX models:
| Links | Sparsification Method |
| --- | --- |
| [zeroshot/gte-large-sparse](https://huggingface.co/zeroshot/gte-large-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-large-quant](https://huggingface.co/zeroshot/gte-large-quant) | Quantization (INT8) |
| [zeroshot/gte-base-sparse](https://huggingface.co/zeroshot/gte-base-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-base-quant](https://huggingface.co/zeroshot/gte-base-quant) | Quantization (INT8) |
| [zeroshot/gte-small-sparse](https://huggingface.co/zeroshot/gte-small-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-small-quant](https://huggingface.co/zeroshot/gte-small-quant) | Quantization (INT8) |
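Any of the model ids above can be swapped into the usage snippet below. If you prefer to work with the ONNX files directly, here is a minimal sketch using `huggingface_hub` (an assumption on our part, not part of the original card):

```python
from huggingface_hub import snapshot_download

# Download the repository contents (including the ONNX weights) to the
# local Hugging Face cache and return the local directory path.
# Swap in any model id from the table above.
local_dir = snapshot_download("zeroshot/gte-large-sparse")
print(local_dir)
```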
Install DeepSparse Nightly with the Sentence Transformers extra:

```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
Then compute embeddings:

```python
from deepsparse.sentence_transformers import SentenceTransformer

model = SentenceTransformer('zeroshot/gte-large-sparse', export=False)

# The sentences we want to encode
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of strings.',
             'The quick brown fox jumps over the lazy dog.']

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

# Print the shape of each embedding
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding.shape)
    print("")
```
For further details on the DeepSparse & Sentence Transformers integration, refer to the DeepSparse README.
For general questions on these models and sparsification methods, reach out to the engineering team on our community Slack.
## Evaluation results

Self-reported MTEB scores on the BIOSSES and SICK-R test sets:

| Metric | BIOSSES (test) | SICK-R (test) |
| --- | --- | --- |
| cos_sim_pearson | 88.643 | 85.233 |
| cos_sim_spearman | 85.834 | 79.001 |
| euclidean_pearson | 86.861 | 83.480 |
| euclidean_spearman | 85.616 | 78.954 |
| manhattan_pearson | 86.690 | |
| manhattan_spearman | 85.603 | |
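These scores could in principle be reproduced with the MTEB harness. A rough sketch follows, assuming the `mteb` package (whose API may differ across versions) and that the DeepSparse `SentenceTransformer`'s `encode` method is MTEB-compatible; this is not an official reproduction recipe:

```python
from deepsparse.sentence_transformers import SentenceTransformer
from mteb import MTEB

model = SentenceTransformer('zeroshot/gte-large-sparse', export=False)

# Run the two STS tasks reported above; results are written as JSON files
# under the given output folder.
evaluation = MTEB(tasks=["BIOSSES", "SICK-R"])
evaluation.run(model, output_folder="results/gte-large-sparse")
```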