SentenceTransformer based on intfloat/multilingual-e5-small

This is a sentence-transformers model finetuned from intfloat/multilingual-e5-small on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/multilingual-e5-small
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: ~118M parameters (F32)
  • Training Dataset:
    • Omartificial-Intelligence-Space/arabic-n_li-triplet

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
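
Since the final Normalize() module L2-normalizes every output embedding, the dot product of two embeddings equals their cosine similarity at the full 384 dimensions, which is why pearson_dot matches pearson_cosine in the full-dimension evaluation results below. A quick check, as a sketch:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Omartificial-Intelligence-Space/E5-all-nli-triplet-Matryoshka")
emb = model.encode(["جملة تجريبية"])  # "a test sentence"
print(np.linalg.norm(emb[0]))  # ~1.0: unit-length embeddings, so dot product == cosine similarity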

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/E5-all-nli-triplet-Matryoshka")
# Run inference
sentences = [
    # "A young man with blond hair sits on the wall reading a newspaper while a woman and a young girl pass by."
    'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.',
    # "A young male looks at a newspaper while two women walk past him."
    'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',
    # "The young man is asleep while the mother leads her daughter to the park."
    'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
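
Because the model was trained with MatryoshkaLoss at 384, 256, 128, and 64 dimensions, its embeddings can be truncated to any of those sizes with only a modest quality drop (compare the evaluation tables below). A sketch, assuming sentence-transformers 2.7 or newer, where the truncate_dim argument is available:

from sentence_transformers import SentenceTransformer

# Load with embeddings truncated to the smallest Matryoshka dimension
model = SentenceTransformer(
    "Omartificial-Intelligence-Space/E5-all-nli-triplet-Matryoshka",
    truncate_dim=64,
)
# "The weather is nice today" / "The weather is wonderful today" / "The train arrived late"
sentences = ["الطقس جميل اليوم", "الجو رائع اليوم", "القطار وصل متأخراً"]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 64)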

Evaluation

Metrics

Semantic similarity was evaluated at each of the four Matryoshka truncation sizes; the four tables below correspond to the sts-test-384, sts-test-256, sts-test-128, and sts-test-64 evaluators named in the training logs.

Semantic Similarity (sts-test-384)

Metric Value
pearson_cosine 0.7883
spearman_cosine 0.7972
pearson_manhattan 0.7846
spearman_manhattan 0.794
pearson_euclidean 0.7883
spearman_euclidean 0.7972
pearson_dot 0.7883
spearman_dot 0.7972
pearson_max 0.7883
spearman_max 0.7972

Semantic Similarity (sts-test-256)

Metric Value
pearson_cosine 0.7852
spearman_cosine 0.7968
pearson_manhattan 0.7853
spearman_manhattan 0.7936
pearson_euclidean 0.7882
spearman_euclidean 0.7963
pearson_dot 0.7786
spearman_dot 0.7868
pearson_max 0.7882
spearman_max 0.7968

Semantic Similarity (sts-test-128)

Metric Value
pearson_cosine 0.7755
spearman_cosine 0.7933
pearson_manhattan 0.7833
spearman_manhattan 0.7908
pearson_euclidean 0.7868
spearman_euclidean 0.7936
pearson_dot 0.7317
spearman_dot 0.7336
pearson_max 0.7868
spearman_max 0.7936

Semantic Similarity (sts-test-64)

Metric Value
pearson_cosine 0.7625
spearman_cosine 0.7837
pearson_manhattan 0.7753
spearman_manhattan 0.7791
pearson_euclidean 0.778
spearman_euclidean 0.7816
pearson_dot 0.6685
spearman_dot 0.6621
pearson_max 0.778
spearman_max 0.7837
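
These tables match the output format of sentence-transformers' EmbeddingSimilarityEvaluator, run once per truncation size. A minimal sketch of such an evaluation with toy placeholder data (the actual STS test set behind the numbers above is not stated on this card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer(
    "Omartificial-Intelligence-Space/E5-all-nli-triplet-Matryoshka",
    truncate_dim=256,  # evaluate one Matryoshka size at a time
)

# Toy STS-style pairs with gold similarity scores in [0, 1] (placeholders, not the real test set)
sentences1 = ["رجل يعزف على الجيتار", "امرأة تقطع البصل", "طائرة تقلع"]
sentences2 = ["رجل يعزف على آلة موسيقية", "امرأة تقطع الخضروات", "قطة تنام"]
scores = [0.9, 0.8, 0.1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, scores, name="sts-test-256")
print(evaluator(model))  # Pearson/Spearman correlations for cosine, Euclidean, Manhattan, and dot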

Training Details

Training Dataset

Omartificial-Intelligence-Space/arabic-n_li-triplet

  • Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
  • Size: 557,850 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min 5, mean 10.33, max 52 tokens
    • positive: string; min 5, mean 13.21, max 49 tokens
    • negative: string; min 5, mean 15.32, max 53 tokens
  • Samples (English glosses in parentheses):
    • anchor: شخص على حصان يقفز فوق طائرة معطلة (A person on a horse jumps over a broken-down airplane)
      positive: شخص في الهواء الطلق، على حصان. (A person is outdoors, on a horse)
      negative: شخص في مطعم، يطلب عجة. (A person is at a restaurant, ordering an omelette)
    • anchor: أطفال يبتسمون و يلوحون للكاميرا (Children smiling and waving at a camera)
      positive: هناك أطفال حاضرون (There are children present)
      negative: الاطفال يتجهمون (The children are frowning)
    • anchor: صبي يقفز على لوح التزلج في منتصف الجسر الأحمر. (A boy jumps on a skateboard in the middle of a red bridge)
      positive: الفتى يقوم بخدعة التزلج (The boy is doing a skateboard trick)
      negative: الصبي يتزلج على الرصيف (The boy is skating on the sidewalk)
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            384,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
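
In sentence-transformers code, this configuration amounts to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss. A minimal sketch (per-dimension weights of 1 and n_dims_per_step=-1 are the library defaults):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("intfloat/multilingual-e5-small")
inner_loss = MultipleNegativesRankingLoss(model)  # in-batch negatives ranking loss
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[384, 256, 128, 64])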
    

Evaluation Dataset

Omartificial-Intelligence-Space/arabic-n_li-triplet

  • Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
  • Size: 6,584 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min 5, mean 21.86, max 105 tokens
    • positive: string; min 4, mean 10.22, max 49 tokens
    • negative: string; min 4, mean 11.2, max 33 tokens
  • Samples (English glosses in parentheses):
    • anchor: امرأتان يتعانقان بينما يحملان حزمة (Two women embrace while holding a package)
      positive: إمرأتان يحملان حزمة (Two women are holding a package)
      negative: الرجال يتشاجرون خارج مطعم (The men are fighting outside a restaurant)
    • anchor: طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة. (Two young children in blue jerseys, one numbered 9 and the other numbered 2, stand on wooden steps in a bathroom washing their hands in a sink)
      positive: طفلين يرتديان قميصاً مرقماً يغسلون أيديهم (Two kids in numbered jerseys wash their hands)
      negative: طفلين يرتديان سترة يذهبان إلى المدرسة (Two kids in jackets go to school)
    • anchor: رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس (A man sells donuts to a customer during a world exhibition held in the city of Angeles)
      positive: رجل يبيع الدونات لعميل (A man sells donuts to a customer)
      negative: امرأة تشرب قهوتها في مقهى صغير (A woman drinks her coffee in a small café)
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            384,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
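
With Sentence Transformers 3.x (the version listed under Framework Versions), these settings map onto the SentenceTransformerTrainer API. A sketch of a comparable run; the output directory and the dataset split name are assumptions, not taken from this card:

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("intfloat/multilingual-e5-small")
dataset = load_dataset("Omartificial-Intelligence-Space/arabic-n_li-triplet")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[384, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="e5-arabic-matryoshka",          # hypothetical output path
    num_train_epochs=3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # the "no_duplicates" sampler from this card
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],             # split name is an assumption
    loss=loss,
)
trainer.train()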

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss sts-test-128_spearman_cosine sts-test-256_spearman_cosine sts-test-384_spearman_cosine sts-test-64_spearman_cosine
0.0344 200 13.1208 - - - -
0.0688 400 9.1894 - - - -
0.1033 600 8.0222 - - - -
0.1377 800 7.2405 - - - -
0.1721 1000 7.1622 - - - -
0.2065 1200 6.4282 - - - -
0.2409 1400 6.0936 - - - -
0.2753 1600 5.99 - - - -
0.3098 1800 5.6939 - - - -
0.3442 2000 5.694 - - - -
0.3786 2200 5.2366 - - - -
0.4130 2400 5.2994 - - - -
0.4474 2600 5.2079 - - - -
0.4818 2800 5.0532 - - - -
0.5163 3000 4.9978 - - - -
0.5507 3200 5.1764 - - - -
0.5851 3400 5.1315 - - - -
0.6195 3600 5.0198 - - - -
0.6539 3800 5.0308 - - - -
0.6883 4000 5.1631 - - - -
0.7228 4200 4.7916 - - - -
0.7572 4400 4.363 - - - -
0.7916 4600 3.2357 - - - -
0.8260 4800 2.9915 - - - -
0.8604 5000 2.8143 - - - -
0.8949 5200 2.6125 - - - -
0.9293 5400 2.5493 - - - -
0.9637 5600 2.4991 - - - -
0.9981 5800 2.163 - - - -
1.0325 6000 0.0 - - - -
1.0669 6200 0.0 - - - -
1.1014 6400 0.0 - - - -
1.1358 6600 0.0 - - - -
1.1702 6800 0.0 - - - -
1.2046 7000 0.0 - - - -
1.2390 7200 0.0 - - - -
1.2734 7400 0.0 - - - -
1.3079 7600 0.0 - - - -
1.3423 7800 0.0 - - - -
1.3767 8000 0.0 - - - -
1.4111 8200 0.0037 - - - -
1.4455 8400 0.0372 - - - -
1.4800 8600 0.0221 - - - -
1.0229 8800 4.3738 - - - -
1.0573 9000 6.338 - - - -
1.0917 9200 6.2223 - - - -
1.1261 9400 5.8673 - - - -
1.1606 9600 5.5907 - - - -
1.1950 9800 5.0307 - - - -
1.2294 10000 4.9193 - - - -
1.2638 10200 4.8798 - - - -
1.2982 10400 4.401 - - - -
1.3326 10600 4.2705 - - - -
1.3671 10800 4.3023 - - - -
1.4015 11000 4.1344 - - - -
1.4359 11200 4.0464 - - - -
1.4703 11400 4.0115 - - - -
1.5047 11600 3.9206 - - - -
1.5391 11800 4.0106 - - - -
1.5736 12000 4.1365 - - - -
1.6080 12200 4.0401 - - - -
1.6424 12400 4.0602 - - - -
1.6768 12600 4.076 - - - -
1.7112 12800 3.97 - - - -
1.7457 13000 3.7905 - - - -
1.7801 13200 2.414 - - - -
1.8145 13400 2.1811 - - - -
1.8489 13600 2.1183 - - - -
1.8833 13800 2.0578 - - - -
1.9177 14000 2.0173 - - - -
1.9522 14200 2.0093 - - - -
1.9866 14400 1.9467 - - - -
2.0210 14600 0.4674 - - - -
2.0554 14800 0.0 - - - -
2.0898 15000 0.0 - - - -
2.1242 15200 0.0 - - - -
2.1587 15400 0.0 - - - -
2.1931 15600 0.0 - - - -
2.2275 15800 0.0 - - - -
2.2619 16000 0.0 - - - -
2.2963 16200 0.0 - - - -
2.3308 16400 0.0 - - - -
2.3652 16600 0.0 - - - -
2.3996 16800 0.0 - - - -
2.4340 17000 0.0 - - - -
2.4684 17200 0.0256 - - - -
2.0114 17400 2.4155 - - - -
2.0170 17433 - 0.7933 0.7968 0.7972 0.7837

Framework Versions

  • Python: 3.9.18
  • Sentence Transformers: 3.0.1
  • Transformers: 4.40.0
  • PyTorch: 2.2.2+cu121
  • Accelerate: 0.26.1
  • Datasets: 2.19.0
  • Tokenizers: 0.19.1
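
To approximate this environment, the versions above can be pinned at install time. A sketch (the CUDA-specific PyTorch build may require the matching wheel index for your platform):

pip install "sentence-transformers==3.0.1" "transformers==4.40.0" "torch==2.2.2" "accelerate==0.26.1" "datasets==2.19.0" "tokenizers==0.19.1"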

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}