SentenceTransformer based on BAAI/bge-small-en-v1.5
This is a sentence-transformers model finetuned from BAAI/bge-small-en-v1.5. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-small-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
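The listing above corresponds to a BERT encoder with CLS-token pooling over 384-dimensional word embeddings, followed by L2 normalization. For illustration only, the sketch below shows how an equivalent module stack could be assembled by hand from the base encoder; in practice, loading the published checkpoint as shown in the Usage section is the normal path.

```python
from sentence_transformers import SentenceTransformer, models

# Sketch only: an equivalent stack built explicitly from sentence-transformers modules.
transformer = models.Transformer("BAAI/bge-small-en-v1.5", max_seq_length=512, do_lower_case=True)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 384 for bge-small-en-v1.5
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)
model = SentenceTransformer(modules=[transformer, pooling, models.Normalize()])
print(model)  # mirrors the architecture listing above
```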
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("SamagraDataGov/embedding_finetuned_test")
# Run inference
sentences = [
'Who is considered as the nodal agency for engagement with the Ministry of Agriculture and Farmers Welfare and Insurance Companies?',
"'8.1 CSCs under Ministry of Electronics and Information Technology (MeITY) have been engaged to enrol non-loanee farmers. The Insurance Companies are required to enter into a separate agreement with CSC and pay service charges as fixed by DAC&FW, GOI per farmer per village per season. No other agreement or payment is required to be made for this purpose. Nodal agency for engagement with Ministry of Agriculture and Farmers Welfare and Insurance Companies will be CSC-SPV, a company established under MeITY for carrying out e-governance initiatives of GoI. 8.2 No charges/fee shall be borne or paid by the farmers being enrolled through CSCs i.e. CSC-SPV and CSC-VLE 8.3 As per IRDA circular, no separate qualification/certification will be required for the VLEs of CSCs to facilitate enrolment of non-loanee farmers. 8.4 All empanelled Insurance Companies will compulsorily be required to enter into an agreement with CSC for enrolment of non-loanee farmers and for provision of other defined services to farmers. 8.5 Other designated intermediaries may be linked with the Portal in due course. 8.6 Empanelled Insurance Companies have to necessarily register on the portal and submit list and details of agents/intermediaries engaged for enrolment of non-loanee farmers in the beginning of each season within 10 days of award of work in the State. Further all agents/intermediaries have to work strictly as per the provisions of the Scheme and IRDA regulations'",
"' 13.4 Laxmanrao Imandar National Academy for Co-operative Research & Development (LINAC), Gurugram promoted by NCDC is designated as Nodal Training Institution at central level for FPOs registered under Co-operative Societies Act and promoted by NCDC. The LINAC will work in partnership with other reputed national and regional training institutions like NIAM, VAMNICOM, MANAGE, NIRD, NCCT, IRMA, ASCI, State and Central Agriculture Universities, KVK, very reputed National level Management and Skill Development Institutions/Universities etc. The LINAC in consultation with NCDC and DAC&FW will prepare a training module and training schedule for the ensuing year, which will be got approved by N-PMAFSC. As regards training expenses, in case of LINAC being nodal agency, the LINAC through NCDC will claim the expenses from DAC&FW and will also submit the utilization certificate through NCDC after the training programme is over. 13.5 DAC&FW in due course may also identify and designate other training institute(s) as additional Nodal Training Institute at central level, which will undertake training and skill development partnering with other national and regional level institutes. 13.6 The central Nodal Training Institutes will ensure that training programme be held preferably in same State/UT wherein FPO trainees located are proposed to participate to reduce the burden on transportation(TA/DA) cost. While formulating the training schedule, Nodal Training Institutes will ensure that BoDs, CEOs/Managers and other stakeholders etc. are trained twice in a year. Nodal Training Institutes will have to make boarding and lodging arrangements for the trainees and will also reimburse to and fro journey tickets to the extent of sleeper class train tickets and/or ordinary bus fare. Nodal Training Institutions will also evolve methodology to monitor and track the performance of trainees and their FPO organization to ensure effectiveness of training being provided.'",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
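Because the final Normalize() module L2-normalizes every embedding, cosine similarity and the plain dot product give identical scores (which is also why the cosine and dot metrics in the evaluation below coincide). As a quick sketch, this can be checked on the embeddings computed above:

```python
import numpy as np

# Each embedding is unit-length, so the dot-product matrix equals the cosine-similarity matrix.
print(np.linalg.norm(embeddings, axis=1))                             # each value ≈ 1.0
dot_scores = embeddings @ embeddings.T                                # dot-product similarity matrix
print(np.allclose(dot_scores, np.asarray(similarities), atol=1e-6))   # True
```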
Evaluation
Metrics
Information Retrieval
- Dataset: val_evaluator
- Evaluated with InformationRetrievalEvaluator
| Metric | Value |
|:--|--:|
| cosine_accuracy@1 | 0.51 |
| cosine_accuracy@5 | 0.9 |
| cosine_accuracy@10 | 0.96 |
| cosine_precision@1 | 0.51 |
| cosine_precision@5 | 0.18 |
| cosine_precision@10 | 0.096 |
| cosine_recall@1 | 0.51 |
| cosine_recall@5 | 0.9 |
| cosine_recall@10 | 0.96 |
| cosine_ndcg@5 | 0.7319 |
| cosine_ndcg@10 | 0.7503 |
| cosine_ndcg@100 | 0.759 |
| cosine_mrr@5 | 0.6745 |
| cosine_mrr@10 | 0.6815 |
| cosine_mrr@100 | 0.6834 |
| cosine_map@100 | 0.6834 |
| dot_accuracy@1 | 0.51 |
| dot_accuracy@5 | 0.9 |
| dot_accuracy@10 | 0.96 |
| dot_precision@1 | 0.51 |
| dot_precision@5 | 0.18 |
| dot_precision@10 | 0.096 |
| dot_recall@1 | 0.51 |
| dot_recall@5 | 0.9 |
| dot_recall@10 | 0.96 |
| dot_ndcg@5 | 0.7319 |
| dot_ndcg@10 | 0.7503 |
| dot_ndcg@100 | 0.759 |
| dot_mrr@5 | 0.6745 |
| dot_mrr@10 | 0.6815 |
| dot_mrr@100 | 0.6834 |
| dot_map@100 | 0.6834 |
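These figures were produced with the InformationRetrievalEvaluator on the val_evaluator split. The validation queries and corpus are not included in this card, so the snippet below is only a sketch of how such an evaluation is wired up, with hypothetical placeholder data:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("SamagraDataGov/embedding_finetuned_test")

# Placeholder data: the real validation queries/corpus are not published in this card.
queries = {"q1": "Who is the nodal agency for engagement with Insurance Companies?"}
corpus = {
    "d1": "Nodal agency for engagement with Ministry of Agriculture ... will be CSC-SPV ...",
    "d2": "LINAC, Gurugram is designated as Nodal Training Institution at central level ...",
}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="val_evaluator",
)
results = evaluator(model)  # accuracy@k, precision@k, recall@k, MRR@k, NDCG@k, MAP@k
print(results)
```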
Training Details
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- learning_rate: 1e-05
- weight_decay: 0.01
- num_train_epochs: 1.0
- warmup_ratio: 0.1
- load_best_model_at_end: True
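As a minimal sketch, the non-default values above map onto the Sentence Transformers 3.x training arguments roughly as follows; the output directory is a placeholder, and every other argument keeps the default listed under All Hyperparameters.

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="embedding_finetuned_test",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=1e-5,
    weight_decay=0.01,
    num_train_epochs=1.0,
    warmup_ratio=0.1,
    load_best_model_at_end=True,
)
```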
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 1e-05
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1.0
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch | Step | Training Loss | loss | val_evaluator_cosine_map@100 |
|:--|--:|--:|--:|--:|
| 0.5172 | 15 | 2.0908 | 1.008 | 0.6834 |
| 1.0 | 29 | - | 1.0080 | 0.6834 |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.43.4
- PyTorch: 2.4.1+cu121
- Accelerate: 0.33.0
- Datasets: 2.21.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
GISTEmbedLoss
@misc{solatorio2024gistembed,
title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
author={Aivin V. Solatorio},
year={2024},
eprint={2402.16829},
archivePrefix={arXiv},
primaryClass={cs.LG}
}