
SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2

This is a sentence-transformers model finetuned from sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 384 dimensions

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
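
For illustration only, the same two-module stack can be assembled by hand with sentence_transformers.models. This is a minimal sketch; loading the published checkpoint directly (as shown under Usage below) is equivalent and simpler.

from sentence_transformers import SentenceTransformer, models

# Transformer module: BertModel backbone, inputs truncated at 128 tokens
word_embedding_model = models.Transformer(
    "DashReza7/sentence-transformers_paraphrase-multilingual-MiniLM-L12-v2_FINETUNED_on_torob_data_v4",
    max_seq_length=128,
)

# Pooling module: mean pooling over the 384-dimensional token embeddings
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 384
    pooling_mode_mean_tokens=True,
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])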

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("DashReza7/sentence-transformers_paraphrase-multilingual-MiniLM-L12-v2_FINETUNED_on_torob_data_v4")
# Run inference
sentences = [
    'هایومکس',
    'ژل هایومکس ولومایزر 2 سی سی',
    'دزدگیر پاناتک مدل P-CA501 دزدگیر پاناتک P-CA501-2 دزدگیر پاناتک مدل P-CA501-2',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
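
As a small sketch of the semantic search use case mentioned above, the query and product titles below are illustrative placeholders; any corpus of texts can be ranked against a query the same way.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("DashReza7/sentence-transformers_paraphrase-multilingual-MiniLM-L12-v2_FINETUNED_on_torob_data_v4")

query = "هایومکس"
corpus = [
    "ژل هایومکس ولومایزر 2 سی سی",
    "دزدگیر پاناتک مدل P-CA501",
]

# Encode the query and the corpus, then rank corpus entries by cosine similarity
query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)
scores = model.similarity(query_embedding, corpus_embeddings)[0]

best = int(scores.argmax())
print(corpus[best], float(scores[best]))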

Evaluation

Metrics

Binary Classification

Metric Value
cosine_accuracy 0.8396
cosine_accuracy_threshold 0.7624
cosine_f1 0.8952
cosine_f1_threshold 0.7235
cosine_precision 0.8454
cosine_recall 0.9511
cosine_ap 0.9296
dot_accuracy 0.8128
dot_accuracy_threshold 18.1649
dot_f1 0.8798
dot_f1_threshold 17.5963
dot_precision 0.8227
dot_recall 0.9454
dot_ap 0.9138
manhattan_accuracy 0.8363
manhattan_accuracy_threshold 56.6106
manhattan_f1 0.8929
manhattan_f1_threshold 60.147
manhattan_precision 0.8404
manhattan_recall 0.9525
manhattan_ap 0.9275
euclidean_accuracy 0.8367
euclidean_accuracy_threshold 3.6917
euclidean_f1 0.8933
euclidean_f1_threshold 3.6917
euclidean_precision 0.8525
euclidean_recall 0.9383
euclidean_ap 0.9275
max_accuracy 0.8396
max_accuracy_threshold 56.6106
max_f1 0.8952
max_f1_threshold 60.147
max_precision 0.8525
max_recall 0.9525
max_ap 0.9296
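
These metric names match the output of Sentence Transformers' BinaryClassificationEvaluator. The sketch below shows how such an evaluation is typically run; the sentence pairs, labels, and evaluator name are placeholders, not the actual evaluation data.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("DashReza7/sentence-transformers_paraphrase-multilingual-MiniLM-L12-v2_FINETUNED_on_torob_data_v4")

# Pairs of texts with binary labels: 1 = the pair matches, 0 = it does not
sentences1 = ["هایومکس", "هایومکس"]
sentences2 = ["ژل هایومکس ولومایزر 2 سی سی", "دزدگیر پاناتک مدل P-CA501"]
labels = [1, 0]

evaluator = BinaryClassificationEvaluator(sentences1, sentences2, labels, name="dev")
results = evaluator(model)  # in Sentence Transformers 3.x this returns a dict of metrics
print(results)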

Binary Classification

Metric Value
cosine_accuracy 0.8314
cosine_accuracy_threshold 0.7449
cosine_f1 0.8898
cosine_f1_threshold 0.7428
cosine_precision 0.8502
cosine_recall 0.9332
cosine_ap 0.9253
dot_accuracy 0.8083
dot_accuracy_threshold 18.1676
dot_f1 0.8762
dot_f1_threshold 17.1061
dot_precision 0.8156
dot_recall 0.9464
dot_ap 0.9079
manhattan_accuracy 0.8277
manhattan_accuracy_threshold 53.9454
manhattan_f1 0.8875
manhattan_f1_threshold 59.6646
manhattan_precision 0.8337
manhattan_recall 0.9487
manhattan_ap 0.9231
euclidean_accuracy 0.8274
euclidean_accuracy_threshold 3.4869
euclidean_f1 0.8875
euclidean_f1_threshold 3.7965
euclidean_precision 0.8363
euclidean_recall 0.9452
euclidean_ap 0.9232
max_accuracy 0.8314
max_accuracy_threshold 53.9454
max_f1 0.8898
max_f1_threshold 59.6646
max_precision 0.8502
max_recall 0.9487
max_ap 0.9253
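
As a usage note, the reported cosine thresholds can serve as a starting decision boundary when the model is used for pair classification. The sketch below applies the cosine_accuracy_threshold from this table (0.7449); the texts are placeholders, and the threshold should be re-tuned for your own precision/recall trade-off.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("DashReza7/sentence-transformers_paraphrase-multilingual-MiniLM-L12-v2_FINETUNED_on_torob_data_v4")

query = "هایومکس"
candidate = "ژل هایومکس ولومایزر 2 سی سی"

embeddings = model.encode([query, candidate])
cosine_score = float(model.similarity(embeddings[0:1], embeddings[1:2])[0][0])

THRESHOLD = 0.7449  # cosine_accuracy_threshold reported above
print("match" if cosine_score >= THRESHOLD else "no match", cosine_score)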

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
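
A minimal training sketch wiring these non-default hyperparameters into the Sentence Transformers 3.x trainer is shown below. The datasets and output directory are placeholders, the loss follows the ContrastiveLoss cited at the end of this card, and the original training script is not reproduced here.

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import ContrastiveLoss
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Placeholder pair data: (sentence1, sentence2, label) with 1 = match, 0 = no match
train_dataset = Dataset.from_dict({
    "sentence1": ["هایومکس"],
    "sentence2": ["ژل هایومکس ولومایزر 2 سی سی"],
    "label": [1],
})
eval_dataset = train_dataset  # placeholder; the card evaluates on held-out pairs

args = SentenceTransformerTrainingArguments(
    output_dir="minilm-finetuned",  # hypothetical output directory
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=ContrastiveLoss(model),
)
trainer.train()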

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step   Training Loss   Validation Loss   max_ap
None     0      -               -                 0.8131
0.1558   500    0.0262          -                 -
0.3116   1000   0.0184          -                 -
0.4674   1500   0.0173          -                 -
0.6232   2000   0.0164          0.0155            0.9253
0.7791   2500   0.016           -                 -
0.9349   3000   0.0155          -                 -
1.0      3209   -               -                 0.9296

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.4
  • PyTorch: 2.4.0+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

ContrastiveLoss

@inproceedings{hadsell2006dimensionality,
    author={Hadsell, R. and Chopra, S. and LeCun, Y.},
    booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)}, 
    title={Dimensionality Reduction by Learning an Invariant Mapping}, 
    year={2006},
    volume={2},
    number={},
    pages={1735-1742},
    doi={10.1109/CVPR.2006.100}
}