krsnaman committed
Commit 6b27e19
1 parent: 496846e

Update README.md

Files changed (1)
  1. README.md +72 -93
README.md CHANGED
@@ -1,125 +1,104 @@
- MultiIndicQuestionGenerationSS is a multilingual, sequence-to-sequence pre-trained model focusing on Indic languages. It currently supports 11 Indian languages and is based on the mBART architecture. You can use the MultiIndicQuestionGenerationSS model to build question generation applications for Indian languages by finetuning the model with supervised training data. Some salient features of MultiIndicQuestionGenerationSS are:
-
- <ul>
- <li> Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil, and Telugu. </li>
- <li> The model is much smaller than the mBART and mT5(-base) models, so it is less computationally expensive for finetuning and decoding. </li>
- <li> Trained on large Indic language corpora (452 million sentences and 9 billion tokens) which also include Indian English content. </li>
- <li> Unlike ai4bharat/IndicBART, each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari. </li>
- </ul>
-
- You can read more about IndicBARTSS in this <a href="https://arxiv.org/abs/2109.02903">paper</a>.
-
- For detailed documentation, look here: https://github.com/AI4Bharat/indic-bart/ and https://indicnlp.ai4bharat.org/indic-bart/
-
- # Pre-training corpus
-
- We used the <a href="https://indicnlp.ai4bharat.org/corpora/">IndicCorp</a> data spanning 12 languages with 452 million sentences (9 billion tokens). The model was trained using the text-infilling objective used in mBART.
-
- # Usage:

  ```
  from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
  from transformers import AlbertTokenizer, AutoTokenizer
-
- tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBARTSS", do_lower_case=False, use_fast=False, keep_accents=True)
-
- # Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/IndicBARTSS", do_lower_case=False, use_fast=False, keep_accents=True)
-
- model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBARTSS")
-
- # Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/IndicBARTSS")
-
  # Some initial mapping
  bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
  eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
  pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
  # To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
-
- # First tokenize the input and outputs. The format below is how IndicBARTSS was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
  inp = tokenizer("I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[ 466, 1981, 80, 25573, 64001, 64004]])

- out = tokenizer("<2hi> मैं एक लड़का हूँ </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[64006, 942, 43, 32720, 8384, 64001]])
-
- model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
-
- # For loss
- model_outputs.loss ## This is not label smoothed.
-
- # For logits
- model_outputs.logits
-
  # For generation. Pardon the messiness. Note the decoder_start_token_id.
-
  model.eval() # Set dropouts to zero
-
- model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
-
  # Decode to get output strings
-
  decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
-
  print(decoded_output) # I am a boy
-
  # What if we mask?
-
  inp = tokenizer("I am [MASK] </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
-
- model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
-
  decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
-
  print(decoded_output) # I am happy
-
- inp = tokenizer("मैं [MASK] हूँ </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
-
- model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
-
  decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
-
- print(decoded_output) # मैं जानता हूँ
-
- inp = tokenizer("मला [MASK] पाहिजे </s> <2mr>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
-
- model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
-
  decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

- print(decoded_output) # मला ओळखलं पाहिजे

- ```
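The `decoder_input_ids`/`labels` slicing shown above implements teacher forcing with a one-token shift. A minimal, library-free sketch of that shift, using plain Python lists with the token ids from the Hindi example (illustrative only):

```python
# Target token ids for "<2hi> मैं एक लड़का हूँ </s>" from the example above.
out = [64006, 942, 43, 32720, 8384, 64001]

# The decoder sees the target shifted right by one position...
decoder_input_ids = out[:-1]   # corresponds to out[:, 0:-1] on a tensor
# ...and must predict the next token at each step.
labels = out[1:]               # corresponds to out[:, 1:] on a tensor

print(decoder_input_ids)  # [64006, 942, 43, 32720, 8384]
print(labels)             # [942, 43, 32720, 8384, 64001]
```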
-
- # Notes:
- 1. This is compatible with the latest version of transformers, but it was developed with version 4.3.2, so consider using 4.3.2 if possible.
- 2. While I have only shown how to get logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do, as in https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration
- 3. Note that the tokenizer I have used is based on sentencepiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class.
- # Fine-tuning on a downstream task
-
- 1. If you wish to fine-tune this model, then you can do so using the <a href="https://github.com/prajdabre/yanmtt">YANMTT</a> toolkit, following the instructions <a href="https://github.com/AI4Bharat/indic-bart">here</a>.
- 2. (Untested) Alternatively, you may use the official huggingface scripts for <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation">translation</a> and <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization">summarization</a>.
-
- # Contributors
- <ul>
- <li> Raj Dabre </li>
- <li> Himani Shrotriya </li>
- <li> Anoop Kunchukuttan </li>
- <li> Ratish Puduppully </li>
- <li> Mitesh M. Khapra </li>
- <li> Pratyush Kumar </li>
- </ul>
-
- # Paper
- If you use IndicBARTSS, please cite the following paper:
  ```
- @misc{dabre2021indicbart,
- title={IndicBART: A Pre-trained Model for Natural Language Generation of Indic Languages},
- author={Raj Dabre and Himani Shrotriya and Anoop Kunchukuttan and Ratish Puduppully and Mitesh M. Khapra and Pratyush Kumar},
- year={2021},
- eprint={2109.02903},
- archivePrefix={arXiv},
- primaryClass={cs.CL}
- }
  ```
-
  # License
  The model is available under the MIT License.
 
+ ---
+ tags:
+ - question-generation
+ - multilingual
+ - nlp
+ - indicnlp
+ datasets:
+ - ai4bharat/IndicQuestionGeneration
+ language:
+ - as
+ - bn
+ - gu
+ - hi
+ - kn
+ - ml
+ - mr
+ - or
+ - pa
+ - ta
+ - te
+ license:
+ - cc-by-nc-4.0
+ ---
+
+ # MultiIndicParaphraseGenerationSS
+
+ This repository contains the [IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS) checkpoint finetuned on the 11 languages of the [IndicQuestionGeneration](https://huggingface.co/datasets/ai4bharat/IndicQuestionGeneration) dataset. For finetuning details, see the [paper](https://arxiv.org/abs/2203.05437).
+
+ ## Using this model in `transformers`

  ```
  from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
  from transformers import AlbertTokenizer, AutoTokenizer
+ tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)
+ # Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)
+ model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicParaphraseGenerationSS")
+ # Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicParaphraseGenerationSS")
  # Some initial mapping
  bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
  eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
  pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
  # To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
+ # First tokenize the input and outputs. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
  inp = tokenizer("I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[ 466, 1981, 80, 25573, 64001, 64004]])
  # For generation. Pardon the messiness. Note the decoder_start_token_id.
  model.eval() # Set dropouts to zero
+ model_output=model.generate(inp, use_cache=True, no_repeat_ngram_size=3, encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
  # Decode to get output strings
  decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
  print(decoded_output) # I am a boy
+ # Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the Indic NLP Library.
  # What if we mask?
  inp = tokenizer("I am [MASK] </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
+ model_output=model.generate(inp, use_cache=True, no_repeat_ngram_size=3, encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
  decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
  print(decoded_output) # I am happy
+ inp = tokenizer("मैं [MASK] हूँ </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
+ model_output=model.generate(inp, use_cache=True, no_repeat_ngram_size=3, encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
  decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
+ print(decoded_output) # मैं जानता हूँ
+ inp = tokenizer("मला [MASK] पाहिजे </s> <2mr>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
+ model_output=model.generate(inp, use_cache=True, no_repeat_ngram_size=3, encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
  decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
+ print(decoded_output) # मला ओळखलं पाहिजे
+ ```
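The `"Sentence </s> <2xx>"` / `"<2yy> Sentence </s>"` tagging convention used above can be captured in a small helper. This is a sketch only; `make_example` is a hypothetical name and not part of `transformers`:

```python
# Hypothetical helper (not part of transformers): builds the source and
# target strings in the format the model expects.
# Input format:  "Sentence </s> <2xx>"  (xx = source language code)
# Target format: "<2yy> Sentence </s>"  (yy = target language code)
def make_example(src_text, src_lang, tgt_text=None, tgt_lang=None):
    source = f"{src_text} </s> <2{src_lang}>"
    target = f"<2{tgt_lang}> {tgt_text} </s>" if tgt_text is not None else None
    return source, target

src, tgt = make_example("I am a boy", "en", "मैं एक लड़का हूँ", "hi")
print(src)  # I am a boy </s> <2en>
print(tgt)  # <2hi> मैं एक लड़का हूँ </s>
```

The resulting strings can be passed to the tokenizer exactly as in the block above, with `add_special_tokens=False` so the tags are kept verbatim.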

+ ## Benchmarks
+
+ Scores on the `IndicParaphrase` test sets are as follows:
+
+ Language | ROUGE-L
+ ---------|--------
+ as | 20.73
+ bn | 30.38
+ gu | 28.13
+ hi | 34.42
+ kn | 23.77
+ ml | 22.24
+ mr | 23.62
+ or | 27.53
+ pa | 32.53
+ ta | 23.49
+ te | 25.81
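ROUGE-L, the metric reported above, scores the longest common subsequence (LCS) between a generated sentence and a reference. A minimal sketch of the LCS-based F-score follows; it is illustrative only, since published numbers are typically computed with a dedicated ROUGE library:

```python
# Minimal ROUGE-L sketch (whitespace tokens, single reference).
def lcs_len(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

print(round(rouge_l_f1("I am a boy", "I am a tall boy"), 2))  # 0.89
```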

+ ## Citation
+
+ If you use this model, please cite the following paper:
  ```
+ @inproceedings{Kumar2022IndicNLGSM,
+ title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
+ author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
+ year={2022},
+ url={https://arxiv.org/abs/2203.05437}
+ }
  ```
  # License
  The model is available under the MIT License.