---
pipeline_tag: sentence-similarity
license: cc-by-4.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- multilingual
- en
- hi
- mr
- kn
- ta
- te
- ml
- gu
- or
- pa
- bn
widget:
- source_sentence: दिवाळी आपण मोठ्या उत्साहाने साजरी करतो
  sentences:
  - दिवाळी आपण आनंदाने साजरी करतो
  - दिवाळी हा दिव्यांचा सण आहे
  example_title: Monolingual - Marathi
- source_sentence: हम दीपावली उत्साह के साथ मनाते हैं
  sentences:
  - हम दीपावली खुशियों से मनाते हैं
  - दिवाली रोशनी का त्योहार है
  example_title: Monolingual - Hindi
- source_sentence: અમે ઉત્સાહથી દિવાળી ઉજવીએ છીએ
  sentences:
  - દિવાળી આપણે ખુશીઓથી ઉજવીએ છીએ
  - દિવાળી એ રોશનીનો તહેવાર છે
  example_title: Monolingual - Gujarati
- source_sentence: आम्हाला भारतीय असल्याचा अभिमान आहे
  sentences:
  - हमें भारतीय होने पर गर्व है
  - భారతీయులమైనందుకు గర్విస్తున్నాం
  - અમને ભારતીય હોવાનો ગર્વ છે
  example_title: Cross-lingual 1
- source_sentence: ਬਾਰਿਸ਼ ਤੋਂ ਬਾਅਦ ਬਗੀਚਾ ਸੁੰਦਰ ਦਿਖਾਈ ਦਿੰਦਾ ਹੈ
  sentences:
  - മഴയ്ക്ക് ശേഷം പൂന്തോട്ടം മനോഹരമായി കാണപ്പെടുന്നു
  - ବର୍ଷା ପରେ ବଗିଚା ସୁନ୍ଦର ଦେଖାଯାଏ |
  - बारिश के बाद बगीचा सुंदर दिखता है
  example_title: Cross-lingual 2
---

# IndicSBERT

This is a MuRIL model ([google/muril-base-cased](https://huggingface.co/google/muril-base-cased)) trained on the NLI dataset of ten major Indian languages.
This single model works for English, Hindi, Marathi, Kannada, Tamil, Telugu, Gujarati, Oriya, Punjabi, Malayalam, and Bengali. It also has cross-lingual capabilities: semantically equivalent sentences in different languages receive nearby embeddings (a quick check is sketched below).
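As a quick illustration of the cross-lingual behavior, the sketch below scores the same sentence ("We are proud to be Indian") written in three scripts against each other; it assumes this model is published on the Hub as `l3cube-pune/indic-sentence-bert-nli`:

```python
from sentence_transformers import SentenceTransformer, util

# Assumed Hub id for this model card
model = SentenceTransformer('l3cube-pune/indic-sentence-bert-nli')

# The same sentence ("We are proud to be Indian") in Marathi, Hindi, and Gujarati
sentences = [
    'आम्हाला भारतीय असल्याचा अभिमान आहे',
    'हमें भारतीय होने पर गर्व है',
    'અમને ભારતીય હોવાનો ગર્વ છે',
]

embeddings = model.encode(sentences)
# Pairwise cosine similarities; high off-diagonal values indicate
# that translations land close together in the embedding space.
print(util.cos_sim(embeddings, embeddings))
```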
Released as a part of project MahaNLP: https://github.com/l3cube-pune/MarathiNLP
A better sentence similarity model (fine-tuned version of this model) is shared here: https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11187).

```
@article{joshi2022l3cubemahasbert,
  title={L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi},
  author={Joshi, Ananya and Kajale, Aditi and Gadre, Janhavi and Deode, Samruddhi and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2211.11187},
  year={2022}
}
```

## Usage (Sentence-Transformers)

Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('l3cube-pune/indic-sentence-bert-nli')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('l3cube-pune/indic-sentence-bert-nli')
model = AutoModel.from_pretrained('l3cube-pune/indic-sentence-bert-nli')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
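Since this card's task is sentence similarity, one short follow-up sketch: given the pooled embeddings from the snippet above, cosine similarity between them yields the similarity scores.

```python
import torch.nn.functional as F

# Continues from the snippet above: normalize the pooled embeddings to
# unit length so their dot products equal cosine similarities.
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
similarity_matrix = normalized @ normalized.T  # entry [i][j] = similarity of sentences i and j
print(similarity_matrix)
```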