julien-c HF staff committed on
Commit
4bb6651
1 Parent(s): a374198

Migrate model card from transformers-repo

Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/bert-base-multilingual-uncased-README.md

Files changed (1)
  1. README.md +209 -0
README.md ADDED
@@ -0,0 +1,209 @@
---
language: en
license: apache-2.0
datasets:
- wikipedia
---

# BERT multilingual base model (uncased)

Pretrained model on the top 102 languages with the largest Wikipedias using a masked language modeling (MLM) objective.
It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.

Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by
the Hugging Face team.

## Model description

BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
  the entire masked sentence through the model and has to predict the masked words. This is different from traditional
  recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
  GPT, which internally masks the future tokens. It allows the model to learn a bidirectional representation of the
  sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
  they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
  predict if the two sentences were following each other or not.

This way, the model learns an inner representation of the languages in the training set that can then be used to
extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a
standard classifier using the features produced by the BERT model as inputs.

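As a rough illustration of that last point (an addition to this card, not part of the original), the sketch below mean-pools the hidden states produced by this checkpoint for two toy labelled sentences and fits a scikit-learn `LogisticRegression` on top. The example texts, labels and choice of classifier are assumptions made purely for the example.

```python
# Hypothetical sketch: use pooled BERT features as inputs to a standard classifier.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')
model = BertModel.from_pretrained('bert-base-multilingual-uncased')
model.eval()

# Toy labelled data, for illustration only.
texts = ["I loved this film.", "This film was terrible."]
labels = [1, 0]

with torch.no_grad():
    encoded = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    # Mean-pool the last hidden state to get one fixed-size vector per sentence.
    features = model(**encoded).last_hidden_state.mean(dim=1).numpy()

classifier = LogisticRegression().fit(features, labels)
print(classifier.predict(features))
```

In practice you would usually fine-tune the whole model instead, but this shows how frozen BERT features alone can feed a downstream classifier.
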
## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-multilingual-uncased')
>>> unmasker("Hello I'm a [MASK] model.")

[{'sequence': "[CLS] hello i'm a top model. [SEP]",
  'score': 0.1507750153541565,
  'token': 11397,
  'token_str': 'top'},
 {'sequence': "[CLS] hello i'm a fashion model. [SEP]",
  'score': 0.13075384497642517,
  'token': 23589,
  'token_str': 'fashion'},
 {'sequence': "[CLS] hello i'm a good model. [SEP]",
  'score': 0.036272723227739334,
  'token': 12050,
  'token_str': 'good'},
 {'sequence': "[CLS] hello i'm a new model. [SEP]",
  'score': 0.035954564809799194,
  'token': 10246,
  'token_str': 'new'},
 {'sequence': "[CLS] hello i'm a great model. [SEP]",
  'score': 0.028643041849136353,
  'token': 11838,
  'token_str': 'great'}]
```

79
+ Here is how to use this model to get the features of a given text in PyTorch:
80
+
81
+ ```python
82
+ from transformers import BertTokenizer, BertModel
83
+ tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')
84
+ model = BertModel.from_pretrained("bert-base-multilingual-uncased")
85
+ text = "Replace me by any text you'd like."
86
+ encoded_input = tokenizer(text, return_tensors='pt')
87
+ output = model(**encoded_input)
88
+ ```
89
+
and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')
model = TFBertModel.from_pretrained("bert-base-multilingual-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-multilingual-uncased')
>>> unmasker("The man worked as a [MASK].")

[{'sequence': '[CLS] the man worked as a teacher. [SEP]',
  'score': 0.07943806052207947,
  'token': 21733,
  'token_str': 'teacher'},
 {'sequence': '[CLS] the man worked as a lawyer. [SEP]',
  'score': 0.0629938617348671,
  'token': 34249,
  'token_str': 'lawyer'},
 {'sequence': '[CLS] the man worked as a farmer. [SEP]',
  'score': 0.03367974981665611,
  'token': 36799,
  'token_str': 'farmer'},
 {'sequence': '[CLS] the man worked as a journalist. [SEP]',
  'score': 0.03172805905342102,
  'token': 19477,
  'token_str': 'journalist'},
 {'sequence': '[CLS] the man worked as a carpenter. [SEP]',
  'score': 0.031021825969219208,
  'token': 33241,
  'token_str': 'carpenter'}]

>>> unmasker("The Black woman worked as a [MASK].")

[{'sequence': '[CLS] the black woman worked as a nurse. [SEP]',
  'score': 0.07045423984527588,
  'token': 52428,
  'token_str': 'nurse'},
 {'sequence': '[CLS] the black woman worked as a teacher. [SEP]',
  'score': 0.05178029090166092,
  'token': 21733,
  'token_str': 'teacher'},
 {'sequence': '[CLS] the black woman worked as a lawyer. [SEP]',
  'score': 0.032601192593574524,
  'token': 34249,
  'token_str': 'lawyer'},
 {'sequence': '[CLS] the black woman worked as a slave. [SEP]',
  'score': 0.030507225543260574,
  'token': 31173,
  'token_str': 'slave'},
 {'sequence': '[CLS] the black woman worked as a woman. [SEP]',
  'score': 0.027691684663295746,
  'token': 14050,
  'token_str': 'woman'}]
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The BERT model was pretrained on the 102 languages with the largest Wikipedias. You can find the complete list
[here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a shared vocabulary size of 110,000. The languages with a
larger Wikipedia are under-sampled and the ones with fewer resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that don't use whitespace, spaces are added around every character in the CJK Unicode
range before tokenization.

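As a quick, non-authoritative illustration of that preprocessing (added here, not in the original card), you can inspect the tokenizer that ships with this checkpoint; the example sentence is an arbitrary choice:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')

# Size of the shared multilingual WordPiece vocabulary.
print(tokenizer.vocab_size)

# Input is lowercased and split into WordPiece subwords.
print(tokenizer.tokenize("Transformers are GREAT for multilingual NLP!"))
```
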
The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the
other cases, sentence B is a random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.

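For concreteness (an addition to the original card, not part of the pretraining code), this is roughly how a sentence pair gets packed into that format when you pass two texts to the tokenizer; the two example sentences are arbitrary:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')

# Encoding a pair adds [CLS] at the start and [SEP] after each "sentence".
encoded = tokenizer("The cat sat on the mat.", "It fell asleep there.")
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))

# token_type_ids distinguish sentence A (0) from sentence B (1).
print(encoded['token_type_ids'])
```
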
The details of the masking procedure for each sentence are the following (a minimal sketch of this rule is shown after
the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

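The sketch below is a hedged, minimal illustration of that 15% / 80-10-10 rule, in the spirit of the `DataCollatorForLanguageModeling` helper in `transformers`; it is not the original pretraining code, and the example batch is an arbitrary choice.

```python
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')

def mask_tokens(input_ids, mlm_probability=0.15):
    """Apply the 15% selection and 80/10/10 replacement rule to a batch of token ids."""
    labels = input_ids.clone()

    # Select 15% of the tokens as prediction targets, never touching special tokens.
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens = [
        tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
        for ids in labels.tolist()
    ]
    probability_matrix.masked_fill_(torch.tensor(special_tokens, dtype=torch.bool), value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # the MLM loss is only computed on the selected tokens

    # 80% of the selected tokens become [MASK].
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[replaced] = tokenizer.mask_token_id

    # Half of the remaining 20% (i.e. 10%) become a random token; the last 10% are left as is.
    random_indices = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~replaced
    input_ids[random_indices] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[random_indices]

    return input_ids, labels

batch = tokenizer(["Hello I'm a model."], return_tensors='pt')
masked_ids, labels = mask_tokens(batch['input_ids'])
```

In actual pretraining this is applied on the fly over whole batches; here it is only meant to make the percentages above concrete.
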
### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
  author    = {Jacob Devlin and
               Ming{-}Wei Chang and
               Kenton Lee and
               Kristina Toutanova},
  title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
               Understanding},
  journal   = {CoRR},
  volume    = {abs/1810.04805},
  year      = {2018},
  url       = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint    = {1810.04805},
  timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```