SentenceTransformer based on tomaarsen/mpnet-base-all-nli-triplet

This is a sentence-transformers model fine-tuned from tomaarsen/mpnet-base-all-nli-triplet on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
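
For example, semantic search over a small Arabic corpus takes only a few lines. A minimal sketch (installation is covered in the Usage section below; the corpus and query are illustrative):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Omartificial-Intelligence-Space/mpnet-base-all-nli-triplet-Arabic-mpnet_base")

corpus = [
    "القاهرة هي عاصمة مصر",        # "Cairo is the capital of Egypt"
    "تشتهر اليابان بأزهار الكرز",  # "Japan is famous for cherry blossoms"
]
query = "ما هي عاصمة مصر؟"  # "What is the capital of Egypt?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# model.similarity uses cosine similarity by default (the Similarity Function listed under Model Details)
scores = model.similarity(query_embedding, corpus_embeddings)
print(corpus[scores.argmax().item()])  # the corpus sentence closest to the query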

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: tomaarsen/mpnet-base-all-nli-triplet
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • Omartificial-Intelligence-Space/arabic-n_li-triplet

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
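
The Pooling block mean-pools MPNet's token embeddings into a single 768-dimensional sentence vector. A quick programmatic sanity check of these properties (a minimal sketch, using the model id from the Usage section below):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Omartificial-Intelligence-Space/mpnet-base-all-nli-triplet-Arabic-mpnet_base")
print(model.get_sentence_embedding_dimension())  # 768
print(model.max_seq_length)                      # 512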

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/mpnet-base-all-nli-triplet-Arabic-mpnet_base")
# Run inference
sentences = [
    # "A young man with blond hair sits on a wall reading a newspaper while a woman and a young girl pass by."
    'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.',
    # "A young male looks at a newspaper while two women walk past him."
    'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',
    # "The young man is asleep while the mother leads her daughter to the park."
    'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
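
Because the model was trained with MatryoshkaLoss (see Training Details), the first dimensions of each embedding carry most of the signal, so embeddings can be truncated to 512, 256, 128, or 64 dimensions for faster search and smaller indexes. A minimal sketch, assuming a sentence-transformers version that supports the truncate_dim argument (added in v2.7):

from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of every embedding
model = SentenceTransformer(
    "Omartificial-Intelligence-Space/mpnet-base-all-nli-triplet-Arabic-mpnet_base",
    truncate_dim=256,
)
embeddings = model.encode(["مثال جملة عربية"])  # "An example Arabic sentence"
print(embeddings.shape)
# (1, 256)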

Evaluation

Metrics

Semantic Similarity (sts-test-768, 768 dimensions)

Metric Value
pearson_cosine 0.6699
spearman_cosine 0.6757
pearson_manhattan 0.6943
spearman_manhattan 0.684
pearson_euclidean 0.6973
spearman_euclidean 0.6873
pearson_dot 0.5534
spearman_dot 0.5422
pearson_max 0.6973
spearman_max 0.6873

Semantic Similarity (sts-test-512, 512 dimensions)

Metric Value
pearson_cosine 0.6628
spearman_cosine 0.6703
pearson_manhattan 0.6917
spearman_manhattan 0.6816
pearson_euclidean 0.6949
spearman_euclidean 0.6853
pearson_dot 0.5229
spearman_dot 0.5114
pearson_max 0.6949
spearman_max 0.6853

Semantic Similarity (sts-test-256, 256 dimensions)

Metric Value
pearson_cosine 0.6368
spearman_cosine 0.6513
pearson_manhattan 0.6832
spearman_manhattan 0.6746
pearson_euclidean 0.6844
spearman_euclidean 0.676
pearson_dot 0.4266
spearman_dot 0.4179
pearson_max 0.6844
spearman_max 0.676

Semantic Similarity (sts-test-128, 128 dimensions)

Metric Value
pearson_cosine 0.6148
spearman_cosine 0.6355
pearson_manhattan 0.6731
spearman_manhattan 0.6653
pearson_euclidean 0.6764
spearman_euclidean 0.6691
pearson_dot 0.3513
spearman_dot 0.3445
pearson_max 0.6764
spearman_max 0.6691

Semantic Similarity (sts-test-64, 64 dimensions)

Metric Value
pearson_cosine 0.5789
spearman_cosine 0.6081
pearson_manhattan 0.6579
spearman_manhattan 0.6519
pearson_euclidean 0.663
spearman_euclidean 0.6571
pearson_dot 0.2403
spearman_dot 0.2331
pearson_max 0.663
spearman_max 0.6571
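
Each of the five tables above reports the same STS-style evaluation at a different Matryoshka dimension (768, 512, 256, 128, 64); pearson_max and spearman_max are simply the best score across the cosine, Manhattan, Euclidean, and dot-product similarities. A minimal sketch of such an evaluation using the library's built-in evaluator (the sentence pairs and gold scores below are illustrative placeholders, not the actual test set):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Evaluate at a truncated dimension, e.g. 256
model = SentenceTransformer(
    "Omartificial-Intelligence-Space/mpnet-base-all-nli-triplet-Arabic-mpnet_base",
    truncate_dim=256,
)

sentences1 = ["الرجل يعزف على الجيتار", "امرأة تقطع البصل"]   # "The man plays guitar", "A woman is cutting onions"
sentences2 = ["رجل يعزف على آلة موسيقية", "قطة تجلس على الأريكة"]  # "A man plays an instrument", "A cat sits on the couch"
gold_scores = [0.8, 0.05]  # gold similarity in [0, 1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="sts-test-256")
print(evaluator(model))  # Pearson/Spearman for cosine, Euclidean, Manhattan, and dot similarities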

Training Details

Training Dataset

Omartificial-Intelligence-Space/arabic-n_li-triplet

  • Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
  • Size: 557,850 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min 12, mean 23.93, max 155 tokens
    • positive: string; min 9, mean 29.62, max 117 tokens
    • negative: string; min 13, mean 33.95, max 149 tokens
  • Samples (English glosses added in parentheses):
    • anchor: شخص على حصان يقفز فوق طائرة معطلة ("A person on a horse jumps over a broken-down airplane")
      positive: شخص في الهواء الطلق، على حصان. ("A person is outdoors, on a horse")
      negative: شخص في مطعم، يطلب عجة. ("A person is at a restaurant, ordering an omelette")
    • anchor: أطفال يبتسمون و يلوحون للكاميرا ("Children smiling and waving at a camera")
      positive: هناك أطفال حاضرون ("There are children present")
      negative: الاطفال يتجهمون ("The children are frowning")
    • anchor: صبي يقفز على لوح التزلج في منتصف الجسر الأحمر. ("A boy jumps on a skateboard in the middle of a red bridge")
      positive: الفتى يقوم بخدعة التزلج ("The boy does a skateboarding trick")
      negative: الصبي يتزلج على الرصيف ("The boy skates on the sidewalk")
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
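
In sentence-transformers code, this configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss. A minimal sketch, assuming model is an already-loaded SentenceTransformer:

from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

loss = MatryoshkaLoss(
    model=model,
    loss=MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],  # supervise each truncation level
    matryoshka_weights=[1, 1, 1, 1, 1],        # weight all levels equally
    n_dims_per_step=-1,                        # -1: use every dimension at every step
)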
    

Evaluation Dataset

Omartificial-Intelligence-Space/arabic-n_li-triplet

  • Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
  • Size: 6,584 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min 12, mean 49.5, max 246 tokens
    • positive: string; min 8, mean 23.66, max 103 tokens
    • negative: string; min 9, mean 25.33, max 82 tokens
  • Samples (English glosses added in parentheses):
    • anchor: امرأتان يتعانقان بينما يحملان حزمة ("Two women embrace while holding a package")
      positive: إمرأتان يحملان حزمة ("Two women are holding a package")
      negative: الرجال يتشاجرون خارج مطعم ("The men are fighting outside a restaurant")
    • anchor: طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة. ("Two young children in blue shirts, one wearing the number 9 and the other the number 2, stand on wooden steps in a bathroom washing their hands in a sink")
      positive: طفلين يرتديان قميصاً مرقماً يغسلون أيديهم ("Two children in numbered shirts wash their hands")
      negative: طفلين يرتديان سترة يذهبان إلى المدرسة ("Two children in jackets walk to school")
    • anchor: رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس ("A man sells donuts to a customer during a world exhibition held in the city of Angeles")
      positive: رجل يبيع الدونات لعميل ("A man sells donuts to a customer")
      negative: امرأة تشرب قهوتها في مقهى صغير ("A woman drinks her coffee in a small cafe")
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
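
A minimal end-to-end training sketch wiring these values into the sentence-transformers v3 trainer API (the output directory and dataset split names are illustrative assumptions):

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("tomaarsen/mpnet-base-all-nli-triplet")
dataset = load_dataset("Omartificial-Intelligence-Space/arabic-n_li-triplet")
loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model), matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="models/arabic-mpnet-matryoshka",  # illustrative path
    num_train_epochs=3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    warmup_ratio=0.1,
    fp16=True,
    # no_duplicates keeps duplicate texts out of a batch, where they would
    # otherwise act as false negatives for MultipleNegativesRankingLoss
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["dev"],  # illustrative split name
    loss=loss,
)
trainer.train()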

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss sts-test-128_spearman_cosine sts-test-256_spearman_cosine sts-test-512_spearman_cosine sts-test-64_spearman_cosine sts-test-768_spearman_cosine
0.0229 200 21.5318 - - - - -
0.0459 400 17.2344 - - - - -
0.0688 600 15.393 - - - - -
0.0918 800 13.7897 - - - - -
0.1147 1000 13.534 - - - - -
0.1377 1200 12.2683 - - - - -
0.1606 1400 10.9271 - - - - -
0.1835 1600 11.071 - - - - -
0.2065 1800 10.0153 - - - - -
0.2294 2000 9.8463 - - - - -
0.2524 2200 10.0194 - - - - -
0.2753 2400 9.8371 - - - - -
0.2983 2600 9.6315 - - - - -
0.3212 2800 8.9858 - - - - -
0.3442 3000 9.1876 - - - - -
0.3671 3200 8.8028 - - - - -
0.3900 3400 8.6075 - - - - -
0.4130 3600 8.4285 - - - - -
0.4359 3800 8.1258 - - - - -
0.4589 4000 8.2508 - - - - -
0.4818 4200 7.8037 - - - - -
0.5048 4400 7.7133 - - - - -
0.5277 4600 7.5006 - - - - -
0.5506 4800 7.7025 - - - - -
0.5736 5000 7.7593 - - - - -
0.5965 5200 7.6305 - - - - -
0.6195 5400 7.7502 - - - - -
0.6424 5600 7.5624 - - - - -
0.6654 5800 7.5287 - - - - -
0.6883 6000 7.4261 - - - - -
0.7113 6200 7.239 - - - - -
0.7342 6400 7.1631 - - - - -
0.7571 6600 7.6865 - - - - -
0.7801 6800 7.6124 - - - - -
0.8030 7000 6.9936 - - - - -
0.8260 7200 6.7331 - - - - -
0.8489 7400 6.4542 - - - - -
0.8719 7600 6.1994 - - - - -
0.8948 7800 5.9798 - - - - -
0.9177 8000 5.7808 - - - - -
0.9407 8200 5.6952 - - - - -
0.9636 8400 5.5082 - - - - -
0.9866 8600 5.4421 - - - - -
1.0095 8800 3.0309 - - - - -
1.0026 9000 1.1835 - - - - -
1.0256 9200 8.1196 - - - - -
1.0485 9400 8.0326 - - - - -
1.0715 9600 8.5028 - - - - -
1.0944 9800 7.6923 - - - - -
1.1174 10000 8.029 - - - - -
1.1403 10200 7.5052 - - - - -
1.1632 10400 7.1177 - - - - -
1.1862 10600 6.9594 - - - - -
1.2091 10800 6.6662 - - - - -
1.2321 11000 6.6903 - - - - -
1.2550 11200 6.9523 - - - - -
1.2780 11400 6.676 - - - - -
1.3009 11600 6.7141 - - - - -
1.3238 11800 6.568 - - - - -
1.3468 12000 6.8938 - - - - -
1.3697 12200 6.3745 - - - - -
1.3927 12400 6.2513 - - - - -
1.4156 12600 6.2589 - - - - -
1.4386 12800 6.1388 - - - - -
1.4615 13000 6.1835 - - - - -
1.4845 13200 5.9004 - - - - -
1.5074 13400 5.7891 - - - - -
1.5303 13600 5.6184 - - - - -
1.5533 13800 5.9762 - - - - -
1.5762 14000 5.9737 - - - - -
1.5992 14200 5.8563 - - - - -
1.6221 14400 5.8904 - - - - -
1.6451 14600 5.8484 - - - - -
1.6680 14800 5.8906 - - - - -
1.6909 15000 5.7613 - - - - -
1.7139 15200 5.5744 - - - - -
1.7368 15400 5.6569 - - - - -
1.7598 15600 5.7439 - - - - -
1.7827 15800 5.5593 - - - - -
1.8057 16000 5.2935 - - - - -
1.8286 16200 5.088 - - - - -
1.8516 16400 5.0167 - - - - -
1.8745 16600 4.84 - - - - -
1.8974 16800 4.6731 - - - - -
1.9204 17000 4.6404 - - - - -
1.9433 17200 4.6413 - - - - -
1.9663 17400 4.4495 - - - - -
1.9892 17600 4.4262 - - - - -
2.0122 17800 2.01 - - - - -
2.0053 18000 1.8418 - - - - -
2.0282 18200 6.2714 - - - - -
2.0512 18400 6.1742 - - - - -
2.0741 18600 6.5996 - - - - -
2.0971 18800 6.0907 - - - - -
2.1200 19000 6.2418 - - - - -
2.1429 19200 5.7817 - - - - -
2.1659 19400 5.7073 - - - - -
2.1888 19600 5.2645 - - - - -
2.2118 19800 5.3451 - - - - -
2.2347 20000 5.2453 - - - - -
2.2577 20200 5.6161 - - - - -
2.2806 20400 5.2289 - - - - -
2.3035 20600 5.3888 - - - - -
2.3265 20800 5.2483 - - - - -
2.3494 21000 5.5791 - - - - -
2.3724 21200 5.1643 - - - - -
2.3953 21400 5.1231 - - - - -
2.4183 21600 5.1055 - - - - -
2.4412 21800 5.1778 - - - - -
2.4642 22000 5.0466 - - - - -
2.4871 22200 4.8321 - - - - -
2.5100 22400 4.7056 - - - - -
2.5330 22600 4.6858 - - - - -
2.5559 22800 4.9189 - - - - -
2.5789 23000 4.912 - - - - -
2.6018 23200 4.8289 - - - - -
2.6248 23400 4.8959 - - - - -
2.6477 23600 4.9441 - - - - -
2.6706 23800 4.9334 - - - - -
2.6936 24000 4.8328 - - - - -
2.7165 24200 4.601 - - - - -
2.7395 24400 4.834 - - - - -
2.7624 24600 5.152 - - - - -
2.7854 24800 4.9232 - - - - -
2.8083 25000 4.6556 - - - - -
2.8312 25200 4.6229 - - - - -
2.8542 25400 4.5768 - - - - -
2.8771 25600 4.3619 - - - - -
2.9001 25800 4.3608 - - - - -
2.9230 26000 4.2834 - - - - -
2.9403 26151 - 0.6355 0.6513 0.6703 0.6081 0.6757

Framework Versions

  • Python: 3.9.18
  • Sentence Transformers: 3.0.1
  • Transformers: 4.40.0
  • PyTorch: 2.2.2+cu121
  • Accelerate: 0.26.1
  • Datasets: 2.19.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Acknowledgments

The author would like to thank Prince Sultan University for their invaluable support in this project. Their contributions and resources have been instrumental in the development and fine-tuning of these models.

Citing This Model

If you use the Arabic Matryoshka Embeddings Model, please cite it as follows:

@misc{nacar2024enhancingsemanticsimilarityunderstanding,
      title={Enhancing Semantic Similarity Understanding in Arabic NLP with Nested Embedding Learning}, 
      author={Omer Nacar and Anis Koubaa},
      year={2024},
      eprint={2407.21139},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.21139}, 
}