---
language: []
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dataset_size:100K<n<1M
- loss:CoSENTLoss
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
base_model: distilbert/distilbert-base-uncased
widget:
- source_sentence: T L 2 DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020 CX482 G-S
  sentences:
  - T L F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020.5 U625 G-S
  - T L F DUMMY HEAD CG LAT WIDEBAND Static Airbag OOP Test 2025 CX430 G-S
  - >-
    T R F DUMMY PELVIS LAT WIDEBAND 90 Deg Frontal Impact Simulation 2026
    P800 G-S
- source_sentence: T L F DUMMY CHEST LONG WIDEBAND 90 Deg Front 2022 U553 G-S
  sentences:
  - T R F TORSO BELT AT D RING LOAD WIDEBAND 90 Deg Front 2022 U553 LBF
  - T L F DUMMY L UP TIBIA MY LOAD WIDEBAND 90 Deg Front 2015 P552 IN-LBS
  - >-
    T R F DUMMY R UP TIBIA FX LOAD WIDEBAND 30 Deg Front Angular Left 2022
    U554 LBF
- source_sentence: T R F DUMMY PELVIS LAT WIDEBAND 90 Deg Front 2019 D544 G-S
  sentences:
  - T L F DUMMY PELVIS LAT WIDEBAND 90 Deg Front 2015 P552 G-S
  - T L LOWER CONTROL ARM VERT WIDEBAND Left Side Drop Test 2024.5 P702 G-S
  - F BARRIER PLATE 11030 SZ D FX LOAD WIDEBAND 90 Deg Front 2015 P552 LBF
- source_sentence: T ENGINE ENGINE TOP LAT WIDEBAND 90 Deg Front 2015 P552 G-S
  sentences:
  - T R ENGINE TRANS BOTTOM LAT WIDEBAND 90 Deg Front 2015 P552 G-S
  - F BARRIER PLATE 09030 SZ D FX LOAD WIDEBAND 90 Deg Front 2015 P552 LBF
  - T R F DUMMY NECK UPPER MX LOAD WIDEBAND 90 Deg Front 2022 U554 IN-LBS
- source_sentence: T L F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020 CX482 G-S
  sentences:
  - T R F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2025 V363N G-S
  - T R F DUMMY HEAD CG VERT WIDEBAND VIA Linear Impact Test 2021 C727 G-S
  - >-
    T L F DUMMY T1 VERT WIDEBAND 75 Deg Oblique Left Side 10 in. Pole 2026
    P800 G-S
pipeline_tag: sentence-similarity
model-index:
- name: SentenceTransformer based on distilbert/distilbert-base-uncased
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: sts dev
      type: sts-dev
    metrics:
    - type: pearson_cosine
      value: 0.27051173706186693
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.2798593637893599
      name: Spearman Cosine
    - type: pearson_manhattan
      value: 0.228702027931258
      name: Pearson Manhattan
    - type: spearman_manhattan
      value: 0.25353345676390787
      name: Spearman Manhattan
    - type: pearson_euclidean
      value: 0.23018017587211453
      name: Pearson Euclidean
    - type: spearman_euclidean
      value: 0.2550481010151111
      name: Spearman Euclidean
    - type: pearson_dot
      value: 0.2125353301405465
      name: Pearson Dot
    - type: spearman_dot
      value: 0.1902748420981738
      name: Spearman Dot
    - type: pearson_max
      value: 0.27051173706186693
      name: Pearson Max
    - type: spearman_max
      value: 0.2798593637893599
      name: Spearman Max
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: sts dev
      type: sts-dev
    metrics:
    - type: pearson_cosine
      value: 0.26319176781258086
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.2721909587247752
      name: Spearman Cosine
    - type: pearson_manhattan
      value: 0.21766215319708615
      name: Pearson Manhattan
    - type: spearman_manhattan
      value: 0.2439514548051345
      name: Spearman Manhattan
    - type: pearson_euclidean
      value: 0.2195389492634635
      name: Pearson Euclidean
    - type: spearman_euclidean
      value: 0.24629153092425862
      name: Spearman Euclidean
    - type: pearson_dot
      value: 0.21073878591545503
      name: Pearson Dot
    - type: spearman_dot
      value: 0.1864889259868287
      name: Spearman Dot
    - type: pearson_max
      value: 0.26319176781258086
      name: Pearson Max
    - type: spearman_max
      value: 0.2721909587247752
      name: Spearman Max
---
# SentenceTransformer based on distilbert/distilbert-base-uncased
This is a sentence-transformers model finetuned from distilbert/distilbert-base-uncased. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
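
The `Pooling` module above simply averages the token embeddings while masking out padding, so the same vectors can be reproduced with plain `transformers`. A minimal sketch, assuming the placeholder model id used in the Usage section below resolves to this checkpoint:

```python
import torch
from transformers import AutoModel, AutoTokenizer

def mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Average token embeddings, ignoring padding positions via the attention mask
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

tokenizer = AutoTokenizer.from_pretrained("sentence_transformers_model_id")  # placeholder id
model = AutoModel.from_pretrained("sentence_transformers_model_id")

encoded = tokenizer(
    ["T L F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020 CX482 G-S"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 768)
embeddings = mean_pooling(token_embeddings, encoded["attention_mask"])  # (batch, 768)
```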
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'T L F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020 CX482 G-S',
    'T R F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2025 V363N G-S',
    'T R F DUMMY HEAD CG VERT WIDEBAND VIA Linear Impact Test 2021 C727 G-S',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
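
Since embeddings like these are typically used to retrieve the closest channel descriptions for a query, a natural follow-up is ranking a corpus by cosine similarity. A minimal sketch reusing the `model` loaded above; the corpus here is a hypothetical stand-in for your own list of descriptions:

```python
from sentence_transformers import util

# Hypothetical corpus; in practice this would be your full set of channel descriptions
corpus = [
    'T R F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2025 V363N G-S',
    'T R F DUMMY HEAD CG VERT WIDEBAND VIA Linear Impact Test 2021 C727 G-S',
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(
    'T L F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020 CX482 G-S',
    convert_to_tensor=True,
)

# Returns, per query, a list of {'corpus_id': int, 'score': float} sorted by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit['corpus_id']], hit['score'])
```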
## Evaluation

### Metrics

#### Semantic Similarity

- Dataset: `sts-dev`
- Evaluated with `EmbeddingSimilarityEvaluator`
| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.2705 |
| spearman_cosine    | 0.2799 |
| pearson_manhattan  | 0.2287 |
| spearman_manhattan | 0.2535 |
| pearson_euclidean  | 0.2302 |
| spearman_euclidean | 0.2550 |
| pearson_dot        | 0.2125 |
| spearman_dot       | 0.1903 |
| pearson_max        | 0.2705 |
| spearman_max       | 0.2799 |
#### Semantic Similarity

- Dataset: `sts-dev`
- Evaluated with `EmbeddingSimilarityEvaluator`
| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.2632 |
| spearman_cosine    | 0.2722 |
| pearson_manhattan  | 0.2177 |
| spearman_manhattan | 0.2440 |
| pearson_euclidean  | 0.2195 |
| spearman_euclidean | 0.2463 |
| pearson_dot        | 0.2107 |
| spearman_dot       | 0.1865 |
| pearson_max        | 0.2632 |
| spearman_max       | 0.2722 |
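
Both metric sets come from `EmbeddingSimilarityEvaluator`, which encodes each sentence pair, scores it under several similarity functions (cosine, Manhattan, Euclidean, dot product), and correlates those scores with the gold labels via Pearson and Spearman. A minimal sketch of running it on your own pairs; the pairs and scores below are hypothetical, since the card's dev split is not published:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id

# Hypothetical pairs with gold similarity scores in [0, 1]
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=[
        'T L F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020 CX482 G-S',
        'T ENGINE ENGINE TOP LAT WIDEBAND 90 Deg Front 2015 P552 G-S',
    ],
    sentences2=[
        'T R F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2025 V363N G-S',
        'F BARRIER PLATE 09030 SZ D FX LOAD WIDEBAND 90 Deg Front 2015 P552 LBF',
    ],
    scores=[0.8, 0.3],
    name="sts-dev",
)
results = evaluator(model)  # in Sentence Transformers 3.x, a dict of pearson/spearman values
print(results)
```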
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 481,114 training samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

  |         | sentence1 | sentence2 | score |
  |:--------|:----------|:----------|:------|
  | type    | string    | string    | float |
  | details | min: 16 tokens<br>mean: 32.14 tokens<br>max: 57 tokens | min: 17 tokens<br>mean: 32.62 tokens<br>max: 58 tokens | min: 0.0<br>mean: 0.45<br>max: 1.0 |
- Samples:

  | sentence1 | sentence2 | score |
  |:----------|:----------|:------|
  | T L C PLR SM SCS L2 HY REF 053 LAT WIDEBAND 75 Deg Oblique Left Side 10 in. Pole 2018 P558 G-S | T PCM PWR POWER TO PCM VOLT 2 SEC WIDEBAND 75 Deg Oblique Left Side 10 in. Pole 2020 V363N VOLTS | 0.5198143220305642 |
  | T L F DUMMY L_FEMUR MX LOAD WIDEBAND 90 Deg Frontal Impact Simulation MY2025 U717 IN-LBS | B L FRAME AT No 1 X MEM LAT WIDEBAND Inline 25% Left Front Offset Vehicle to Vehicle 2021 P702 G-S | 0.5214072221695696 |
  | T R F DOOR REAR OF SEAT H PT LAT WIDEBAND 75 Deg Oblique Right Side 10 in. Pole 2015 P552 G-S | T SCS R2 HY BOS A12 008 TAP RIGHT C PILLAR VOLT WIDEBAND 30 Deg Front Angular Right 2021 CX727 VOLTS | 0.322173496575591 |
- Loss: `CoSENTLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
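For reference, `CoSENTLoss` optimizes a ranking objective over pairs of pairs: whenever the gold score of pair *i* exceeds that of pair *j*, the cosine similarity of pair *i* is pushed above that of pair *j*. With the scale \\(\lambda = 20.0\\) configured above, the batch loss is, following the CoSENT formulation cited at the end of this card (notation ours):

$$
\mathcal{L} = \log\left(1 + \sum_{s_i > s_j} \exp\Big(\lambda \big[\cos(u_j, v_j) - \cos(u_i, v_i)\big]\Big)\right)
$$

where \\((u_i, v_i)\\) are the embeddings of the *i*-th sentence pair and \\(s_i\\) its gold score.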
### Evaluation Dataset

#### Unnamed Dataset

- Size: 103,097 evaluation samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

  |         | sentence1 | sentence2 | score |
  |:--------|:----------|:----------|:------|
  | type    | string    | string    | float |
  | details | min: 17 tokens<br>mean: 31.98 tokens<br>max: 56 tokens | min: 15 tokens<br>mean: 31.96 tokens<br>max: 58 tokens | min: 0.0<br>mean: 0.45<br>max: 1.0 |
- Samples:

  | sentence1 | sentence2 | score |
  |:----------|:----------|:------|
  | T R F DUMMY NECK UPPER MZ LOAD WIDEBAND 90 Deg Frontal Impact Simulation 2026 GENERIC IN-LBS | T R ROCKER AT C PILLAR LAT WIDEBAND 90 Deg Front 2021 P702 G-S | 0.5234504780172093 |
  | T L ROCKER AT B_PILLAR VERT WIDEBAND 90 Deg Front 2024.5 P702 G-S | T RCM BTWN SEATS LOW G Z RCM C1 LZ ALV RC7 003 VOLT WIDEBAND 75 Deg Oblique Left Side 10 in. Pole 2018 P558 VOLTS | 0.36805699821563936 |
  | T R FRAME AT C_PILLAR LONG WIDEBAND 90 Deg Left Side IIHS MDB to Vehicle 2024.5 P702 G-S | T L F LAP BELT AT ANCHOR LOAD WIDEBAND 90 DEG / LEFT SIDE DECEL-3G 2021 P702 LBF | 0.5309750606095435 |
- Loss: `CoSENTLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
### Training Hyperparameters

#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 32
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 32
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 7
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: False
- `include_tokens_per_second`: False
- `neftune_noise_alpha`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
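
These hyperparameters map directly onto the Sentence Transformers 3.x training API. A minimal sketch of a comparable training run; the one-row dataset below is a hypothetical stand-in for the unpublished 481,114-sample training set:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("distilbert/distilbert-base-uncased")

# Hypothetical stand-in: real training used (sentence1, sentence2, score) rows
train_dataset = Dataset.from_dict({
    "sentence1": ["T L F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2020 CX482 G-S"],
    "sentence2": ["T R F DUMMY CHEST LAT WIDEBAND 90 Deg Front 2025 V363N G-S"],
    "score": [0.8],
})

# Only the card's non-default hyperparameters are set; learning_rate 5e-05 etc. are defaults
args = SentenceTransformerTrainingArguments(
    output_dir="output",
    num_train_epochs=32,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    warmup_ratio=0.1,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=CoSENTLoss(model, scale=20.0),
)
trainer.train()
```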
### Training Logs

| Epoch   | Step  | Training Loss | Validation Loss | sts-dev_spearman_cosine |
|:-------:|:-----:|:-------------:|:---------------:|:-----------------------:|
| 1.0650  | 1000  | 7.6111        | 7.5503          | 0.4087                  |
| 2.1299  | 2000  | 7.5359        | 7.5420          | 0.4448                  |
| 3.1949  | 3000  | 7.5232        | 7.5292          | 0.4622                  |
| 4.2599  | 4000  | 7.5146        | 7.5218          | 0.4779                  |
| 5.3248  | 5000  | 7.5045        | 7.5200          | 0.4880                  |
| 6.3898  | 6000  | 7.4956        | 7.5191          | 0.4934                  |
| 7.4547  | 7000  | 7.4873        | 7.5170          | 0.4967                  |
| 8.5197  | 8000  | 7.4781        | 7.5218          | 0.4931                  |
| 9.5847  | 9000  | 7.4686        | 7.5257          | 0.4961                  |
| 10.6496 | 10000 | 7.4596        | 7.5327          | 0.4884                  |
| 11.7146 | 11000 | 7.4498        | 7.5403          | 0.4860                  |
| 12.7796 | 12000 | 7.4386        | 7.5507          | 0.4735                  |
| 13.8445 | 13000 | 7.4253        | 7.5651          | 0.4660                  |
| 14.9095 | 14000 | 7.4124        | 7.5927          | 0.4467                  |
| 15.9744 | 15000 | 7.3989        | 7.6054          | 0.4314                  |
| 17.0394 | 16000 | 7.3833        | 7.6654          | 0.4163                  |
| 18.1044 | 17000 | 7.3669        | 7.7186          | 0.3967                  |
| 19.1693 | 18000 | 7.3519        | 7.7653          | 0.3779                  |
| 20.2343 | 19000 | 7.3349        | 7.8356          | 0.3651                  |
| 21.2993 | 20000 | 7.3191        | 7.8772          | 0.3495                  |
| 22.3642 | 21000 | 7.3032        | 7.9346          | 0.3412                  |
| 23.4292 | 22000 | 7.2873        | 7.9624          | 0.3231                  |
| 24.4941 | 23000 | 7.2718        | 8.0169          | 0.3161                  |
| 25.5591 | 24000 | 7.2556        | 8.0633          | 0.3050                  |
| 26.6241 | 25000 | 7.2425        | 8.1021          | 0.2958                  |
| 27.6890 | 26000 | 7.2278        | 8.1563          | 0.2954                  |
| 28.7540 | 27000 | 7.2124        | 8.1955          | 0.2882                  |
| 29.8190 | 28000 | 7.2014        | 8.2234          | 0.2821                  |
| 30.8839 | 29000 | 7.1938        | 8.2447          | 0.2792                  |
| 31.9489 | 30000 | 7.1811        | 8.2609          | 0.2799                  |
| 32.0    | 30048 | -             | -               | 0.2722                  |
### Framework Versions
- Python: 3.10.6
- Sentence Transformers: 3.0.0
- Transformers: 4.35.0
- PyTorch: 2.1.0a0+4136153
- Accelerate: 0.30.1
- Datasets: 2.14.1
- Tokenizers: 0.14.1
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss

```bibtex
@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}
```