krsnaman committed
Commit ed535e9
1 Parent(s): b712415

Update README.md

Files changed (1)
  1. README.md +48 -94

README.md CHANGED
@@ -1,148 +1,102 @@
---
languages:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
tags:
- multilingual
- nlp
- indicnlp
---

IndicBART is a multilingual, sequence-to-sequence pre-trained model focusing on Indic languages and English. It currently supports 11 Indian languages and is based on the mBART architecture. You can use the IndicBART model to build natural language generation applications for Indian languages by fine-tuning it with supervised training data for tasks like machine translation, summarization, and question generation. Some salient features of IndicBART are:

<ul>
<li>Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil, Telugu and English. Not all of these languages are supported by mBART50 and mT5.</li>
<li>The model is much smaller than the mBART and mT5(-base) models, so it is less computationally expensive for fine-tuning and decoding.</li>
<li>Trained on large Indic language corpora (452 million sentences and 9 billion tokens), which also include Indian English content.</li>
<li>All languages, except English, have been represented in the Devanagari script to encourage transfer learning among the related languages.</li>
</ul>

You can read more about IndicBART in this <a href="https://arxiv.org/abs/2109.02903">paper</a>.

For detailed documentation, look here: https://github.com/AI4Bharat/indic-bart/ and https://indicnlp.ai4bharat.org/indic-bart/

# Pre-training corpus

We used the <a href="https://indicnlp.ai4bharat.org/corpora/">IndicCorp</a> data spanning 12 languages with 452 million sentences (9 billion tokens). The model was trained using the text-infilling objective used in mBART.
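
As a toy illustration of what text infilling does (a sketch only; the actual pipeline samples span lengths from a Poisson distribution and operates on subword tokens, not whitespace-split words):

```
import random

# Replace one contiguous span of tokens with a single mask token; the model
# is then trained to reconstruct the original sentence from the corrupted one.
def infill(tokens, mask_token="[MASK]", span_len=2):
    start = random.randrange(0, len(tokens) - span_len + 1)
    return tokens[:start] + [mask_token] + tokens[start + span_len:]

original = "I am a boy".split()
corrupted = infill(original)
print(corrupted)  # e.g. ['I', '[MASK]', 'boy']
# Training pair: encoder input is the corrupted text plus "</s> <2en>",
# decoder target is "<2en> I am a boy </s>".
```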
 
# Usage:

```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBART", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/IndicBART", do_lower_case=False, use_fast=False, keep_accents=True)

model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBART")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/IndicBART")

# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']

# First tokenize the input and outputs. The format below is how IndicBART was trained, so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[ 466, 1981, 80, 25573, 64001, 64004]])
out = tokenizer("<2hi> मैं एक लड़का हूँ </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[64006, 942, 43, 32720, 8384, 64001]])
# Note that if you use any language other than Hindi or Marathi, you should convert its script to Devanagari using the Indic NLP Library.

model_outputs = model(input_ids=inp, decoder_input_ids=out[:, 0:-1], labels=out[:, 1:])

# For loss
model_outputs.loss  # This is not label smoothed.
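
# (Sketch) A label-smoothed alternative, using PyTorch's built-in option
# (available in torch >= 1.10). The smoothing value of 0.1 is illustrative,
# not taken from the IndicBART paper.
import torch
loss_fct = torch.nn.CrossEntropyLoss(label_smoothing=0.1, ignore_index=pad_id)
smoothed_loss = loss_fct(model_outputs.logits.view(-1, model_outputs.logits.size(-1)), out[:, 1:].reshape(-1))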

# For logits
model_outputs.logits

# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval() # Set dropouts to zero
model_output = model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))

# Decode to get output strings
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # I am a boy
# Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the Indic NLP Library.

# What if we mask?
inp = tokenizer("I am [MASK] </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output = model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # I am happy

inp = tokenizer("मैं [MASK] हूँ </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output = model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # मैं जानता हूँ

inp = tokenizer("मला [MASK] पाहिजे </s> <2mr>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output = model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2mr>"))
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # मला ओळखलं पाहिजे
```

# Notes:
1. This is compatible with the latest version of transformers, but it was developed with version 4.3.2, so consider using 4.3.2 if possible.
2. While I have only shown how to get logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do, as described at https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration
3. Note that the tokenizer I have used is based on SentencePiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class.
4. If you wish to use any language written in a non-Devanagari script (except English), then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>; a minimal sketch follows this list. After you get the output, you should convert it back into the original script.
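
A minimal sketch of that round trip, assuming the Indic NLP Library is installed (`pip install indic-nlp-library`); the Tamil sentence and language codes are illustrative:

```
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator

ta_sentence = "நான் ஒரு பையன்"  # "I am a boy" in Tamil script
# Tamil script -> Devanagari, before tokenizing and generating as shown above.
deva_in = UnicodeIndicTransliterator.transliterate(ta_sentence, "ta", "hi")
# ... run the model on deva_in ...
# Devanagari output -> back to Tamil script.
ta_out = UnicodeIndicTransliterator.transliterate(deva_in, "hi", "ta")
```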

# Fine-tuning on a downstream task

1. If you wish to fine-tune this model, then you can do so using the <a href="https://github.com/prajdabre/yanmtt">YANMTT</a> toolkit, following the instructions <a href="https://github.com/AI4Bharat/indic-bart">here</a>.
2. (Untested) Alternatively, you may use the official Hugging Face scripts for <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation">translation</a> and <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization">summarization</a>.
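
For intuition, a single supervised update in plain PyTorch looks like the sketch below. It reuses the loss computation from the usage section above; the learning rate is illustrative, and this is not a substitute for the toolkits listed here:

```
import torch

# One manual fine-tuning step, mirroring the "Sentence </s> <2xx>" /
# "<2yy> Sentence </s>" format from the usage section.
model.train()  # re-enable dropout for training
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # lr is illustrative
inp = tokenizer("I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt").input_ids
out = tokenizer("<2hi> मैं एक लड़का हूँ </s>", add_special_tokens=False, return_tensors="pt").input_ids
loss = model(input_ids=inp, decoder_input_ids=out[:, :-1], labels=out[:, 1:]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```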

# Contributors
<ul>
<li>Raj Dabre</li>
<li>Himani Shrotriya</li>
<li>Anoop Kunchukuttan</li>
<li>Ratish Puduppully</li>
<li>Mitesh M. Khapra</li>
<li>Pratyush Kumar</li>
</ul>
 
# Paper
If you use IndicBART, please cite the following paper:
```
@misc{dabre2021indicbart,
  title={IndicBART: A Pre-trained Model for Natural Language Generation of Indic Languages},
  author={Raj Dabre and Himani Shrotriya and Anoop Kunchukuttan and Ratish Puduppully and Mitesh M. Khapra and Pratyush Kumar},
  year={2021},
  eprint={2109.02903},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

# License
The model is available under the MIT License.
 
---
tags:
- wikibio
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicWikiBio
language:
- as
- bn
- hi
- kn
- ml
- or
- pa
- ta
- te
licenses:
- cc-by-nc-4.0
---

# MultiIndicWikiBioUnified

This repository contains the [IndicBART](https://huggingface.co/ai4bharat/IndicBART) checkpoint fine-tuned on the 9 languages of the [IndicWikiBio](https://huggingface.co/datasets/ai4bharat/IndicWikiBio) dataset. For fine-tuning details, see the [paper](https://arxiv.org/abs/2203.05437).
 
## Using this model in `transformers`

```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioUnified", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioUnified", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicWikiBioUnified")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicWikiBioUnified")

# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']

# First tokenize the input and outputs. The format below is how IndicBART was trained, so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[ 466, 1981, 80, 25573, 64001, 64004]])

# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval() # Set dropouts to zero
model_output = model.generate(inp, use_cache=True, no_repeat_ngram_size=3, encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))

# Decode to get output strings
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # I am a boy
# Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the Indic NLP Library.

# What if we mask?
inp = tokenizer("I am [MASK] </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output = model.generate(inp, use_cache=True, no_repeat_ngram_size=3, encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # I am happy

inp = tokenizer("मैं [MASK] हूँ </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output = model.generate(inp, use_cache=True, no_repeat_ngram_size=3, encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # मैं जानता हूँ

inp = tokenizer("मला [MASK] पाहिजे </s> <2mr>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output = model.generate(inp, use_cache=True, no_repeat_ngram_size=3, encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2mr>"))
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # मला ओळखलं पाहिजे
```
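
Generation also works on a padded batch of inputs; a small sketch following the same conventions as above (the sentences are illustrative):

```
# Tokenize a list of sentences with padding, generate, and decode each output.
sentences = ["I am a boy </s> <2en>", "I am a girl </s> <2en>"]
inp = tokenizer(sentences, add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output = model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
for o in model_output:
    print(tokenizer.decode(o, skip_special_tokens=True, clean_up_tokenization_spaces=False))
```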

# Note:
If you wish to use any language written in a non-Devanagari script, then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, you should convert it back into the original script.

## Benchmarks

Scores on the `IndicWikiBio` test sets are as follows:

Language | RougeL
---------|-------
as | 56.28
bn | 57.42
hi | 67.48
kn | 40.01
ml | 38.84
or | 67.13
pa | 52.88
ta | 51.82
te | 51.43
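
For reference, a hedged sketch of how a ROUGE-L F1 score can be computed with the `rouge-score` package (`pip install rouge-score`); the paper's exact evaluation setup, including any language-specific tokenization, may differ:

```
from rouge_score import rouge_scorer

# Placeholder reference/prediction strings; real evaluation runs over the test set.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)
scores = scorer.score("reference biography text", "generated biography text")
print(scores["rougeL"].fmeasure)
```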

## Citation

If you use this model, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
  title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
  author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
  year={2022},
  url={https://arxiv.org/abs/2203.05437}
}
```
 
# License
The model is available under the MIT License.