---
base_model: sentence-transformers/all-MiniLM-L6-v2
datasets:
- momo22/eng2nep
language:
- en
- ne
library_name: sentence-transformers
metrics:
- negative_mse
- src2trg_accuracy
- trg2src_accuracy
- mean_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1000
- loss:MSELoss
- dataset_size:5000
- dataset_size:8000
widget:
- source_sentence: 'The aggressive semi-employed religion workshop of Razzak, (EFP). '
  sentences:
  - 'मा ग्रिटर भेट्टाउन सकेन वा GDM प्रयोगकर्ताले कार्यान्वयन गर्न सकेन '
  - 'रज्जाकको आक्रामक अर्द्धशतक धर्मशाला, (एएफपी)। '
  - 'त्यसैले मेरो विजयपछि म त्यस्तो अवस्था आउन दिनेछैन। '
- source_sentence: 'The authority is being a constitutional body, it was also empowered by passing the bill from Parliament. '
  sentences:
  - 'अख्तियार संवैधानिक निकाय त हुँदै हो, त्यसमा पनि संसदबाटै विधेयक पास गरेर अख्तियारलाई अधिकारसम्पन्न पनि गराइएको थियो। '
  - 'म यहूदाका राजा सिदकियाहलाई र उसका मानिसहरूलाई तिनीहरूका शत्रुहरूकहाँ सुम्पिनेछु जसले तिनीहरूलाई मार्न चाहन्छन्। ती सेनाहरू यरूशलेमबाट गइसकेका भएता पनि म तिनीहरूलाई बाबेलका राजाको सेनाहरूकहाँ सुम्पिनेछु। '
  - '– संकटकालको असर न्यायिक क्षेत्रमा मात्रै पर्दैन, समग्र मुलुकमै पर्छ। '
- source_sentence: 'The two-day conference will participate in investors from China, India, Japan, the US, European countries, Britain and other countries, the Federation said. '
  sentences:
  - 'उनीहरूको जनजीविकाको आधार प्राकृतिक स्रोत रहेको छ। '
  - 'दुई दिनसम्म हुने सम्मेलनमा चीन, भारत, जापान, अमेरिका, युरोपियन देशहरू, बेलायत लगायत देशबाट लगानीकर्ताको सहभागिता गराउने महासंघले जानकारी दिएको छ '
  - 'नयाँ स्न्यापसट लिनका लागि यो बटन क्लिक गर्नुहोस् । '
- source_sentence: 'Mr Sankey issued a "confession" through his solicitor after Shields had been convicted but then withdrew it. '
  sentences:
  - 'श्री सान्कीले ढालहरू दोषी भएपछि आफ्नो समाधानकर्तामार्फत "स्वीकृति" जारी गर्नुभयो तर त्यसपछि यसलाई फिर्ता लिनुभयो। '
  - 'कृत्रिम रुपमा पेट्रोलियम पदार्थको मूल्य स्थिर राख्न अनुदान दिदै जाने हो भने नेपाली अर्थतन्त्र एकदिन धराशायी हुनेछ। '
  - 'ओली सरकारले "राष्ट्रियता-राष्ट्रवाद र" आर्थिक सम्ब्रिद्धि "-आर्थिक विकासलाई यसको प्राथमिकताको रूपमा घोषणा गरेको छ। '
- source_sentence: 'We want to use this time to appeal to the American government to see if they can finally close this chapter. '
  sentences:
  - 'धेरैले घाउ पाए र ओछ्यानमा थिए। '
  - 'नाम यसको अन्तरराष्ट्रिय हलको अद्वितिय डिजाइनबाट स्पष्ट रूपमा प्राप्त हुन्छ, जुन शीर्षकनियम स्पेसबाट बनेको छ, जुन ठूलो गहिराइमा उच्च दबाब बुझ्न सक्षम छ। '
  - 'हामी अमेरिकी सरकारलाई अपील गर्न यसपटक प्रयोग गर्न चाहन्छौं कि उनीहरूले अन्त्यमा यो अध्याय बन्द गर्न सक्छन्। '
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
  results:
  - task:
      type: knowledge-distillation
      name: Knowledge Distillation
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: negative_mse
      value: -0.37439612206071615
      name: Negative Mse
  - task:
      type: translation
      name: Translation
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: src2trg_accuracy
      value: 0.0186
      name: Src2Trg Accuracy
    - type: trg2src_accuracy
      value: 0.00835
      name: Trg2Src Accuracy
    - type: mean_accuracy
      value: 0.013474999999999999
      name: Mean Accuracy
---

# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the [momo22/eng2nep](https://huggingface.co/datasets/momo22/eng2nep) dataset. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [momo22/eng2nep](https://huggingface.co/datasets/momo22/eng2nep)
- **Languages:** en, ne

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("jangedoo/all-MiniLM-L6-v2-nepali")
# Run inference
sentences = [
    'We want to use this time to appeal to the American government to see if they can finally close this chapter.\n',
    'हामी अमेरिकी सरकारलाई अपील गर्न यसपटक प्रयोग गर्न चाहन्छौं कि उनीहरूले अन्त्यमा यो अध्याय बन्द गर्न सक्छन्।\n',
    'नाम यसको अन्तरराष्ट्रिय हलको अद्वितिय डिजाइनबाट स्पष्ट रूपमा प्राप्त हुन्छ, जुन शीर्षकनियम स्पेसबाट बनेको छ, जुन ठूलो गहिराइमा उच्च दबाब बुझ्न सक्षम छ।\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
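Because the student was trained to align English and Nepali translations in the same vector space, the model can also be used for cross-lingual retrieval: encode an English query and a Nepali corpus, then rank by cosine similarity. Below is a minimal sketch reusing sentences from this card's sample tables; note the low translation accuracies reported under Evaluation, so ranking quality should be validated before relying on it.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jangedoo/all-MiniLM-L6-v2-nepali")

# English query against a small Nepali corpus (strings taken from the
# training/evaluation sample tables in this card)
query = "A Pandit was found on the way to Pokhara from Baglung."
corpus = [
    "परिच्छेद–३",
    "वाग्लुङ्गबाट पोखरा आउँदा बाटोमा एकजना पण्डित भेटिए।",
    "उनीहरूको जनजीविकाको आधार प्राकृतिक स्रोत रहेको छ।",
]

query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)

# Embeddings are L2-normalized by the Normalize module, so
# model.similarity returns cosine similarities, shape [1, 3]
scores = model.similarity(query_embedding, corpus_embeddings)
best = scores.argmax().item()
print(corpus[best], scores[0, best].item())
```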
## Evaluation

### Metrics

#### Knowledge Distillation

* Evaluated with [MSEEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.MSEEvaluator)

| Metric           | Value       |
|:-----------------|:------------|
| **negative_mse** | **-0.3744** |

#### Translation

* Evaluated with [TranslationEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TranslationEvaluator)

| Metric            | Value      |
|:------------------|:-----------|
| src2trg_accuracy  | 0.0186     |
| trg2src_accuracy  | 0.0083     |
| **mean_accuracy** | **0.0135** |
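Both evaluators can be re-run locally. A minimal sketch, assuming the teacher is the base sentence-transformers/all-MiniLM-L6-v2 model (the card does not name the teacher) and substituting two pairs from the sample tables for the real 500-pair evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import MSEEvaluator, TranslationEvaluator

student = SentenceTransformer("jangedoo/all-MiniLM-L6-v2-nepali")
teacher = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # assumed teacher

english = ["Chapter 3", "A Pandit was found on the way to Pokhara from Baglung."]
nepali = ["परिच्छेद–३", "वाग्लुङ्गबाट पोखरा आउँदा बाटोमा एकजना पण्डित भेटिए।"]

# negative_mse: how closely the student's embeddings of both languages
# match the teacher's embeddings of the English side
mse = MSEEvaluator(source_sentences=english, target_sentences=nepali, teacher_model=teacher)
print(mse(student))

# src2trg/trg2src accuracy: fraction of sentences whose translation is
# their nearest neighbour among all candidates
translation = TranslationEvaluator(source_sentences=english, target_sentences=nepali)
print(translation(student))
```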
## Training Details

### Training Dataset

#### momo22/eng2nep

* Dataset: [momo22/eng2nep](https://huggingface.co/datasets/momo22/eng2nep) at [57da8d4](https://huggingface.co/datasets/momo22/eng2nep/tree/57da8d44266896e334c1d8f2528cbbf666fbd0ca)
* Size: 8,000 training samples
* Columns: English, Nepali, and label
* Approximate statistics based on the first 1000 samples:

  |         | English | Nepali | label |
  |:--------|:--------|:-------|:------|
  | type    | string  | string | list  |
  | details |         |        |       |
* Samples:

  | English | Nepali | label |
  |:--------|:-------|:------|
  | But with the origin of feudal practices in the Middle Ages, the practice of untouchability began, as well as discrimination against women. | तर मध्ययुगमा सामन्ती प्रथाको उद्भव भएसँगै जसरी छुवाछुत प्रथाको शुरुवात भयो, त्यसैगरी नारी प्रति पनि विभेद गरिन थालियो | [-0.05432726442813873, 0.029996933415532112, -0.008532932959496975, -0.035200122743844986, 0.008856767788529396, ...] |
  | A Pandit was found on the way to Pokhara from Baglung. | वाग्लुङ्गबाट पोखरा आउँदा बाटोमा एकजना पण्डित भेटिए। | [-0.023763148114085197, 0.0959007516503334, -0.11197677254676819, 0.10978179425001144, -0.028137238696217537, ...] |
  | He went on: "She ate a perfectly normal and healthy diet. | उनी गए: "उनले पूर्ण सामान्य र स्वस्थ आहार खाइन्। | [0.028130479156970978, 0.030386686325073242, -0.012276170775294304, 0.1316223442554474, -0.01928202621638775, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
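The label column stores a 384-dimensional teacher embedding for each pair, and MSELoss regresses the student's embeddings of both the English and Nepali text onto that vector. A minimal sketch of how such a training set could be built and used, assuming the teacher is the base model and that the raw dataset exposes `English` and `Nepali` columns in a `train` split, as the tables above suggest:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MSELoss

teacher = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # assumed teacher
student = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# 8,000 training pairs, matching the size reported above
train = load_dataset("momo22/eng2nep", split="train").select(range(8000))

# The teacher's embedding of the English sentence becomes the regression target
def add_teacher_label(batch):
    return {"label": teacher.encode(batch["English"]).tolist()}

train = train.map(add_teacher_label, batched=True, batch_size=64)

# MSELoss minimizes ||student(text) - label||^2 for every text column,
# pulling both languages toward the teacher's English embedding
loss = MSELoss(student)
trainer = SentenceTransformerTrainer(model=student, train_dataset=train, loss=loss)
trainer.train()
```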
### Evaluation Dataset

#### momo22/eng2nep

* Dataset: [momo22/eng2nep](https://huggingface.co/datasets/momo22/eng2nep) at [57da8d4](https://huggingface.co/datasets/momo22/eng2nep/tree/57da8d44266896e334c1d8f2528cbbf666fbd0ca)
* Size: 500 evaluation samples
* Columns: English, Nepali, and label
* Approximate statistics based on the first 1000 samples:

  |         | English | Nepali | label |
  |:--------|:--------|:-------|:------|
  | type    | string  | string | list  |
  | details |         |        |       |
* Samples:

  | English | Nepali | label |
  |:--------|:-------|:------|
  | Chapter 3 | परिच्छेद–३ | [-0.049459926784038544, 0.048675183206796646, 0.016583453863859177, 0.04876156523823738, -0.020754676312208176, ...] |
  | The capability of MOF would be strengthened to enable it to efficiently play the lead role in donor coordination, and to secure support from all stakeholders in aid coordination activities. | दाताहरूको समन्वयमा नेतृत्वदायीको भूमिका निर्वाह प्रभावकारी ढंगले गर्न अर्थ मन्त्रालयको क्षमता सुदृढ गरिनेछ यसको लागि सबै सरोकारवालाबाट समर्थन प्राप्त गरिनेछ । | [-0.06200315058231354, -0.016507938504219055, -0.029924314469099045, -0.052509162575006485, 0.07746178656816483, ...] |
  | Polimatrix, Inc. is a system integrator and total solutions provider delivering radiation and nuclear protection and detection. | पोलिमाट्रिक्स, इन्कर्पोरेटिड प्रणाली इन्टिजर र कुल समाधान प्रदायक रेडियो र आणविक संरक्षण र पत्ता लगाउने प्रणाली इन्टिजर र कुल समाधान प्रदायक हो। | [-0.0446796678006649, 0.026428330689668655, -0.09837698936462402, -0.07765442878007889, -0.020364686846733093, ...] |
* Loss: [MSELoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `bf16`: True
- `push_to_hub`: True
- `hub_model_id`: jangedoo/all-MiniLM-L6-v2-nepali
- `push_to_hub_model_id`: all-MiniLM-L6-v2-nepali

See the sketch after this list for how these values map onto `SentenceTransformerTrainingArguments`.
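A minimal sketch of reproducing these non-default settings with the v3 trainer API; the `output_dir` is a made-up local path, and the resulting `args` object would be passed to the `SentenceTransformerTrainer` shown earlier via its `args=` parameter:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="all-MiniLM-L6-v2-nepali",  # hypothetical local path
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    bf16=True,
    push_to_hub=True,
    hub_model_id="jangedoo/all-MiniLM-L6-v2-nepali",
)
```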
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: jangedoo/all-MiniLM-L6-v2-nepali
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: all-MiniLM-L6-v2-nepali
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
### Training Logs

| Epoch | Step | Training Loss | loss   | mean_accuracy | negative_mse |
|:-----:|:----:|:-------------:|:------:|:-------------:|:------------:|
| 0.4   | 50   | 0.0021        | 0.0019 | 0.0111        | -0.3837      |
| 0.8   | 100  | 0.002         | 0.0019 | 0.0123        | -0.3794      |
| 0.4   | 50   | 0.002         | 0.0019 | 0.0130        | -0.3773      |
| 0.8   | 100  | 0.002         | 0.0019 | 0.0135        | -0.3744      |

### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MSELoss
```bibtex
@inproceedings{reimers-2020-multilingual-sentence-bert,
    title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2020",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2004.09813",
}
```