SentenceTransformer based on snunlp/KR-SBERT-V40K-klueNLI-augSTS

This is a sentence-transformers model finetuned from snunlp/KR-SBERT-V40K-klueNLI-augSTS. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: snunlp/KR-SBERT-V40K-klueNLI-augSTS
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 117M parameters (F32, safetensors)

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
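
The Pooling module above corresponds to mean pooling over token embeddings. As a minimal sketch only, assuming the checkpoint's BERT weights also load with plain transformers (typical for Sentence Transformers repositories), the equivalent computation looks like this:

import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: the repository exposes standard BERT weights loadable via AutoModel.
tokenizer = AutoTokenizer.from_pretrained("SungJoo/sbert-ft-slide-textbook-0914")
bert = AutoModel.from_pretrained("SungJoo/sbert-ft-slide-textbook-0914")

inputs = tokenizer(["예시 문장입니다."], padding=True, truncation=True,
                   max_length=128, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**inputs).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average the token embeddings, ignoring padding positions
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])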

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("SungJoo/sbert-ft-slide-textbook-0914")
# Run inference
sentences = [
    # "What is the improvement plan for the Quarter Strike?"
    '쿼터스트라이크의 개선 계획은 무엇인가요?',
    # "The plan is to replace the existing 1.0mm-thick grade-440 steel with grade-590 steel and increase the thickness to 1.8mm."
    '기존 440 등급의 강철로 1.0mm 두께를 사용하던 것을 590 등급의 강철로 변경하고 두께를 1.8mm로 증가시키는 계획입니다.',
    # "The force on the back plate measured 1.18 kN along the Y axis, resulting in a -0.120 point deduction."
    '등판(Back Plate)에 가해진 힘을 측정한 결과, Y축 방향으로 1.18 kN의 힘이 측정되었고, 이로 인해 -0.120점의 감점이 있었습니다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
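
Beyond pairwise similarity, the same embeddings support semantic search. A minimal sketch using sentence_transformers.util.semantic_search (the two-document corpus below is a hypothetical stand-in for your own collection):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("SungJoo/sbert-ft-slide-textbook-0914")

# Hypothetical corpus; in practice this would be your own documents
corpus = [
    '기존 440 등급의 강철로 1.0mm 두께를 사용하던 것을 590 등급의 강철로 변경하고 두께를 1.8mm로 증가시키는 계획입니다.',
    '등판(Back Plate)에 가해진 힘을 측정한 결과, Y축 방향으로 1.18 kN의 힘이 측정되었고, 이로 인해 -0.120점의 감점이 있었습니다.',
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode('쿼터스트라이크의 개선 계획은 무엇인가요?', convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit['corpus_id']], hit['score'])

For the paraphrase-mining use case mentioned in the introduction, util.paraphrase_mining(model, sentences) follows the same pattern.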

Training Details

Training Dataset

Unnamed Dataset

  • Size: 213,769 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min: 8 tokens, mean: 19.05 tokens, max: 106 tokens
    • sentence_1: string; min: 8 tokens, mean: 29.02 tokens, max: 101 tokens
  • Samples:
    • sentence_0: 이 테스트 결과가 자동차 제조사들에게 미치는 영향은 무엇인가요? ("What impact do these test results have on car manufacturers?")
      sentence_1: 이 테스트 결과는 자동차 제조사들이 지속적으로 차량의 안전성을 개선하도록 유도하는 역할을 합니다. ("These test results serve to push car manufacturers to continuously improve vehicle safety.")
    • 3. 정보의 일관성: 50kph 프로토콜을 참조하도록 함으로써, 다양한 차량 모델이나 테스트 간의 머리 보호 평가 결과를 일관성 있게 비교할 수 있게 됩니다. 이는 안전성 평가의 신뢰도와 객관성을 높이는 데 기여합니다. ("3. Consistency of information: referencing the 50 kph protocol lets head-protection results be compared consistently across vehicle models and tests, which raises the reliability and objectivity of the safety assessment.")
    • sentence_0: 복부의 상부와 하부 측면 압축 값은 각각 얼마인가요? ("What are the upper and lower lateral abdominal compression values?")
      sentence_1: 복부의 상부와 하부 측면 압축은 각각 11.0mm와 21.3mm입니다. ("The upper and lower lateral abdominal compressions are 11.0 mm and 21.3 mm, respectively.")
  • Loss: MultipleNegativesRankingLoss with these parameters (see the training sketch below):
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }

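As a rough illustration of how such a dataset and loss could be constructed with sentence-transformers 3.x (a sketch only; the single row below is a stand-in for the real 213,769 samples):

from datasets import Dataset
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Stand-in row mirroring the sentence_0 / sentence_1 columns described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["복부의 상부와 하부 측면 압축 값은 각각 얼마인가요?"],
    "sentence_1": ["복부의 상부와 하부 측면 압축은 각각 11.0mm와 21.3mm입니다."],
})

model = SentenceTransformer("snunlp/KR-SBERT-V40K-klueNLI-augSTS")
# scale=20.0 and cosine similarity match the loss parameters listed above
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
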
Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
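
A minimal sketch of how these hyperparameters plug into the sentence-transformers 3.x trainer, reusing model, train_dataset, and loss from the sketch above (the output directory name is hypothetical):

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="sbert-ft-output",  # hypothetical path
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=10,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()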

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.1497 500 1.5924
0.2993 1000 0.8394
0.4490 1500 0.6154
0.5986 2000 0.5072
0.7483 2500 0.4423
0.8979 3000 0.3944
1.0476 3500 0.3535
1.1972 4000 0.3231
1.3469 4500 0.2963
1.4966 5000 0.2661
1.6462 5500 0.2425
1.7959 6000 0.2181
1.9455 6500 0.188
2.0952 7000 0.1697
2.2448 7500 0.1568
2.3945 8000 0.1472
2.5441 8500 0.1388
2.6938 9000 0.1268
2.8435 9500 0.1193
2.9931 10000 0.1002
3.1428 10500 0.097
3.2924 11000 0.0907
3.4421 11500 0.0855
3.5917 12000 0.0801
3.7414 12500 0.0748
3.8911 13000 0.0673
4.0407 13500 0.0603
4.1904 14000 0.0587
4.3400 14500 0.0557
4.4897 15000 0.0534
4.6393 15500 0.0505
4.7890 16000 0.0465
4.9386 16500 0.0424
5.0883 17000 0.0402
5.2380 17500 0.0378
5.3876 18000 0.0353
5.5373 18500 0.0356
5.6869 19000 0.0321
5.8366 19500 0.032
5.9862 20000 0.0279
6.1359 20500 0.0274
6.2855 21000 0.0271
6.4352 21500 0.025
6.5849 22000 0.025
6.7345 22500 0.0234
6.8842 23000 0.0212
7.0338 23500 0.0215
7.1835 24000 0.0198
7.3331 24500 0.0191
7.4828 25000 0.0187
7.6324 25500 0.0183
7.7821 26000 0.0173
7.9318 26500 0.0162
8.0814 27000 0.0159
8.2311 27500 0.0151
8.3807 28000 0.0146
8.5304 28500 0.015
8.6800 29000 0.0138
8.8297 29500 0.0143
8.9793 30000 0.0134
9.1290 30500 0.0127
9.2787 31000 0.0133
9.4283 31500 0.012
9.5780 32000 0.0124
9.7276 32500 0.0117
9.8773 33000 0.0116

Framework Versions

  • Python: 3.8.10
  • Sentence Transformers: 3.0.1
  • Transformers: 4.44.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.34.0
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1
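
To approximate this environment, a pinned install along these lines should work (a sketch; the exact PyTorch build, cu121 above, depends on your platform):

pip install sentence-transformers==3.0.1 transformers==4.44.2 accelerate==0.34.0 datasets==2.21.0 tokenizers==0.19.1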

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}