# SGPT-5.8B-weightedmean-msmarco-specb-bitfit
## Usage

For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results

For evaluation results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training

The model was trained with the following parameters:
**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 249592 with parameters:

```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
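The sampling setup above (a `RandomSampler` wrapped in a `BatchSampler`) can be sketched in plain Python. This is an illustration, not the training code; the dataset size of 499,184 pairs is inferred from 249,592 batches of size 2.

```python
import random

def random_batches(dataset_size, batch_size, seed=0):
    """Sketch of RandomSampler + BatchSampler: shuffle all indices once
    per epoch, then chunk the shuffled order into fixed-size batches."""
    rng = random.Random(seed)
    indices = list(range(dataset_size))
    rng.shuffle(indices)
    return [indices[i:i + batch_size] for i in range(0, dataset_size, batch_size)]

# 499,184 pairs at batch_size=2 gives the 249,592 batches reported above.
batches = random_batches(499184, 2)
print(len(batches))  # 249592
```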
**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:

```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
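Conceptually, MultipleNegativesRankingLoss treats the other documents in a batch as negatives: it builds a matrix of scaled cosine similarities between all queries and all documents, then applies cross-entropy with the matching (diagonal) document as the label. A minimal numpy sketch, using the `scale=20.0` and cosine similarity settings above:

```python
import numpy as np

def cos_sim(a, b):
    # Row-normalize, then all-pairs dot products -> cosine similarity matrix.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def mnr_loss(query_emb, doc_emb, scale=20.0):
    """In-batch-negatives ranking loss: cross-entropy over scaled
    cosine scores, where query i's positive is document i."""
    scores = scale * cos_sim(query_emb, doc_emb)               # (B, B)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
queries = rng.normal(size=(4, 8))
docs = rng.normal(size=(4, 8))
print(mnr_loss(queries, docs))
```

When each query's positive document is clearly most similar, the loss approaches zero; the scale factor sharpens the softmax so small cosine differences still produce a strong training signal.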
Parameters of the `fit()` method:

```
{
    "epochs": 10,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 5e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 1000,
    "weight_decay": 0.01
}
```
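The `WarmupLinear` schedule ramps the learning rate from 0 to the base rate over the warmup steps, then decays it linearly toward 0. A sketch with the values above (`lr=5e-05`, `warmup_steps=1000`); the total step count of 2,495,920 is an assumption derived from 10 epochs of 249,592 batches:

```python
def warmup_linear_lr(step, base_lr=5e-5, warmup_steps=1000, total_steps=2495920):
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

With 1000 warmup steps against ~2.5M total steps, the warmup phase is under 0.05% of training; almost all of training runs on the slow linear decay.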
## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTJModel
  (1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
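The only active pooling mode is `pooling_mode_weightedmean_tokens`: token embeddings are averaged with position-proportional weights, so later tokens count more. This suits a causal model like GPT-J, where later positions have attended to more of the sequence. A simplified single-sequence numpy sketch of this pooling step (the real module operates on padded batches):

```python
import numpy as np

def weighted_mean_pool(token_embeddings, attention_mask):
    """Position-weighted mean pooling: token at position i gets weight
    (i + 1), masked positions get weight 0, then weights are normalized."""
    positions = np.arange(1, token_embeddings.shape[0] + 1, dtype=float)
    weights = positions * attention_mask
    return (token_embeddings * weights[:, None]).sum(axis=0) / weights.sum()
```

For a 3-token sequence the weights are 1/6, 2/6, and 3/6, so the final token contributes three times as much as the first.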
## Citing & Authors

```bibtex
@article{muennighoff2022sgpt,
  title={SGPT: GPT Sentence Embeddings for Semantic Search},
  author={Muennighoff, Niklas},
  journal={arXiv preprint arXiv:2202.08904},
  year={2022}
}
```
## Evaluation results

Self-reported scores on MTEB test sets:

| MTEB task (test set) | Metric | Score |
|----------------------|--------|-------|
| AmazonCounterfactualClassification (en) | accuracy | 69.224 |
| AmazonCounterfactualClassification (en) | ap | 32.047 |
| AmazonCounterfactualClassification (en) | f1 | 63.257 |
| AmazonPolarityClassification | accuracy | 71.261 |
| AmazonPolarityClassification | ap | 66.163 |
| AmazonPolarityClassification | f1 | 70.897 |
| AmazonReviewsClassification (en) | accuracy | 39.192 |
| AmazonReviewsClassification (en) | f1 | 38.581 |
| ArguAna | map_at_1 | 27.312 |
| ArguAna | map_at_10 | 42.620 |