pipeline_tag stringclasses 48 values | library_name stringclasses 198 values | text stringlengths 1 900k | metadata stringlengths 2 438k | id stringlengths 5 122 | last_modified null | tags sequencelengths 1 1.84k | sha null | created_at stringlengths 25 25 | arxiv sequencelengths 0 201 | languages sequencelengths 0 1.83k | tags_str stringlengths 17 9.34k | text_str stringlengths 0 389k | text_lists sequencelengths 0 722 | processed_texts sequencelengths 1 723 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fill-mask | transformers |
# ALBERT Base v1
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
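The effect of this cross-layer sharing can be checked directly on the loaded checkpoint; the sketch below assumes the module layout of recent `transformers` releases (`encoder.albert_layer_groups`), which may differ in older versions:
```python
from transformers import AlbertModel

model = AlbertModel.from_pretrained("albert-base-v1")

# The model runs 12 layer passes, but only one group of layer weights is stored,
# which is where the memory savings over a BERT of the same depth come from.
print(model.config.num_hidden_layers)          # 12
print(len(model.encoder.albert_layer_groups))  # 1 (shared group of weights)
```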
This is the first version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
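For reference, these hyperparameters map onto an `AlbertConfig` roughly as follows (a sketch; the `intermediate_size` of 3072 is not listed above, and the exact parameter count depends on whether embeddings and the pooler are included):
```python
from transformers import AlbertConfig, AlbertModel

# albert-base hyperparameters as listed above
config = AlbertConfig(
    num_hidden_layers=12,    # 12 repeating layers
    embedding_size=128,      # 128 embedding dimension
    hidden_size=768,         # 768 hidden dimension
    num_attention_heads=12,  # 12 attention heads
    intermediate_size=3072,  # feed-forward size (assumed, not listed above)
    num_hidden_groups=1,     # one shared group of layer weights
)
model = AlbertModel(config)  # randomly initialized, just to count parameters
print(round(model.num_parameters() / 1e6, 1), "M parameters")  # on the order of the 11M above
```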
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence ordering prediction (SOP), but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-base-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"▁modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"▁modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"▁model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"▁runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"▁lingerie"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
model = AlbertModel.from_pretrained("albert-base-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
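# The features are in output.last_hidden_state: one 768-dimensional vector per input
# token (shape: batch_size x sequence_length x 768 for this base model)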
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
model = TFAlbertModel.from_pretrained("albert-base-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-base-v1')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"▁chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"▁janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"▁shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"▁blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"▁lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"▁receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"▁janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"▁paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"▁chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"▁waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
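The tokenizer shipped with this checkpoint produces that layout automatically when given a sentence pair; a small illustration (the exact spacing in the decoded string may vary across tokenizer versions):
```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')

# Passing two texts yields the [CLS] Sentence A [SEP] Sentence B [SEP] layout shown above
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.decode(encoded['input_ids']))
# e.g. "[CLS] sentence a[SEP] sentence b[SEP]" (lowercased by the uncased tokenizer)
```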
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
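A simplified sketch of that corruption step is shown below; it is standalone illustrative code, not the original training pipeline, and the handling of special tokens is deliberately omitted:
```python
import random

def mask_tokens(token_ids, vocab_size, mask_token_id, mask_prob=0.15):
    """Apply the 80/10/10 MLM corruption described above to a list of token ids."""
    corrupted = list(token_ids)
    labels = [-100] * len(token_ids)  # -100 marks positions the loss ignores
    for i, token in enumerate(token_ids):
        if random.random() >= mask_prob:      # 85% of tokens are left untouched
            continue
        labels[i] = token                     # the model must recover the original token
        roll = random.random()
        if roll < 0.8:                        # 80% of the masked tokens -> [MASK]
            corrupted[i] = mask_token_id
        elif roll < 0.9:                      # 10% -> a random vocabulary token
            corrupted[i] = random.randrange(vocab_size)
        # remaining 10%: the original token is kept as is
    return corrupted, labels
```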
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 | | | | | | |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 | | | | | | |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=albert-base-v1">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]} | albert/albert-base-v1 | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1909.11942"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #safetensors #albert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ALBERT Base v1
==============
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
This is the first version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
* 12 repeating layers
* 128 embedding dimension
* 768 hidden dimension
* 12 attention heads
* 11M parameters
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or sentence ordering prediction (SOP), but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
* In the 10% remaining cases, the masked tokens are left as is.
Evaluation results
------------------
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
fill-mask | transformers |
# ALBERT Base v2
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
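Both pretraining objectives are exposed through the pretraining head in the `transformers` library; the sketch below assumes the checkpoint ships the corresponding head weights (otherwise the library will report them as newly initialized):
```python
from transformers import AlbertTokenizer, AlbertForPreTraining

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForPreTraining.from_pretrained('albert-base-v2')

# Two consecutive segments, as in pretraining
inputs = tokenizer("The quick brown fox.", "It jumped over the lazy dog.", return_tensors='pt')
outputs = model(**inputs)

print(outputs.prediction_logits.shape)  # MLM head: (batch, sequence_length, vocab_size)
print(outputs.sop_logits.shape)         # SOP head: (batch, 2) - ordered vs. swapped segments
```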
ALBERT is particular in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
This is the second version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence ordering prediction (SOP), but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-base-v2')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"▁modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"▁modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"▁model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"▁runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"▁lingerie"
}
]
```
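The pipeline returns the five most likely completions by default; in recent `transformers` releases the number of candidates can be changed with the `top_k` argument (the argument name may differ in older versions):
```python
>>> unmasker("Hello I'm a [MASK] model.", top_k=10)
```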
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertModel.from_pretrained("albert-base-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = TFAlbertModel.from_pretrained("albert-base-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-base-v2')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"▁chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"▁janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"▁shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"▁blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"▁lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"▁receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"▁janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"▁paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"▁chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"▁waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 | | | | | | |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 | | | | | | |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | albert/albert-base-v2 | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1909.11942"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #rust #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ALBERT Base v2
==============
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
This is the second version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
* 12 repeating layers
* 128 embedding dimension
* 768 hidden dimension
* 12 attention heads
* 11M parameters
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or sentence ordering prediction (SOP), but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
* In the 10% remaining cases, the masked tokens are left as is.
Evaluation results
------------------
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
### BibTeX entry and citation info
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# ALBERT Large v1
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
This is the first version of the large model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 24 repeating layers
- 128 embedding dimension
- 1024 hidden dimension
- 16 attention heads
- 17M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence ordering prediction (SOP), but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"â–modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"â–modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"â–model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"â–runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"â–lingerie"
}
]
```
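The same top predictions can also be obtained without the pipeline, by querying the masked-language-modeling head directly; a sketch assuming exactly one `[MASK]` token in the input:
```python
import torch
from transformers import AlbertTokenizer, AlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1')
model = AlbertForMaskedLM.from_pretrained('albert-large-v1')

inputs = tokenizer("Hello I'm a [MASK] model.", return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the single [MASK] position and take its five most likely replacements
mask_positions = (inputs['input_ids'] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```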
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1')
model = AlbertModel.from_pretrained("albert-large-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1')
model = TFAlbertModel.from_pretrained("albert-large-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v1')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"â–shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"â–blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"â–lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"â–receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"â–paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"â–waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 | | | | | | |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 | | | | | | |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | albert/albert-large-v1 | null | [
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1909.11942"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ALBERT Large v1
===============
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
This is the first version of the large model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
* 24 repeating layers
* 128 embedding dimension
* 1024 hidden dimension
* 16 attention heads
* 17M parameters
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or sentence ordering prediction (SOP), but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
* In the 10% remaining cases, the masked tokens are left as is.
Evaluation results
------------------
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
### BibTeX entry and citation info
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# ALBERT Large v2
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
This is the second version of the large model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 24 repeating layers
- 128 embedding dimension
- 1024 hidden dimension
- 16 attention heads
- 17M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence ordering prediction (SOP), but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
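As a concrete starting point for such fine-tuning, the checkpoint can be loaded with a task-specific head; in the sketch below the sequence-classification head starts from random weights and is only useful after training on labeled data:
```python
from transformers import AlbertTokenizer, AlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained('albert-large-v2')
# num_labels configures the freshly initialized classification head on top of the encoder
model = AlbertForSequenceClassification.from_pretrained('albert-large-v2', num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors='pt')
logits = model(**inputs).logits  # shape (1, 2); meaningless until the head is fine-tuned
```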
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v2')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"â–modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"â–modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"â–model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"â–runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"â–lingerie"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v2')
model = AlbertModel.from_pretrained("albert-large-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v2')
model = TFAlbertModel.from_pretrained("albert-large-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v2')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"â–shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"â–blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"â–lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"â–receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"â–paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"â–waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
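As an illustration (a minimal sketch, not part of the original preprocessing pipeline), the tokenizer reproduces this format when you pass it a sentence pair:

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-large-v2')

# Encoding a pair of sentences inserts the special tokens shown above
encoding = tokenizer("Sentence A", "Sentence B")
print(tokenizer.decode(encoding["input_ids"]))
# prints something like: [CLS] sentence a[SEP] sentence b[SEP]
```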
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
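The original training code is not reproduced here, but the 80%/10%/10% scheme above can be sketched in PyTorch roughly as follows. This is modeled on the masked-language-modeling collator in `transformers`; `mask_tokens` is an illustrative helper for a single sequence, not an official API.

```python
import torch
from transformers import AlbertTokenizer

def mask_tokens(input_ids, tokenizer, mlm_probability=0.15):
    """Toy sketch of the MLM masking scheme for a single 1-D tensor of token ids."""
    labels = input_ids.clone()

    # Pick 15% of the non-special tokens as prediction targets
    prob = torch.full(labels.shape, mlm_probability)
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(labels.tolist(), already_has_special_tokens=True),
        dtype=torch.bool,
    )
    prob.masked_fill_(special, 0.0)
    masked = torch.bernoulli(prob).bool()
    labels[~masked] = -100  # loss is only computed on masked positions

    # 80% of the masked positions become [MASK]
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    input_ids[replaced] = tokenizer.mask_token_id

    # Half of the rest (10% overall) become a random token, the final 10% stay unchanged
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked & ~replaced
    input_ids[randomized] = torch.randint(len(tokenizer), labels.shape)[randomized]

    return input_ids, labels

tokenizer = AlbertTokenizer.from_pretrained('albert-large-v2')
ids = torch.tensor(tokenizer("The quick brown fox jumps over the lazy dog.")["input_ids"])
masked_ids, labels = mask_tokens(ids, tokenizer)
```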
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | albert/albert-large-v2 | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1909.11942"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ALBERT Large v2
===============
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the second version of the large model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
* 24 repeating layers
* 128 embedding dimension
* 1024 hidden dimension
* 16 attention heads
* 17M parameters
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
Evaluation results
------------------
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
### BibTeX entry and citation info
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# ALBERT XLarge v1
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version of the xlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 24 repeating layers
- 128 embedding dimension
- 2048 hidden dimension
- 16 attention heads
- 58M parameters
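These numbers can be read directly from the checkpoint's configuration; a quick check using the standard `transformers` API:

```python
from transformers import AlbertConfig

config = AlbertConfig.from_pretrained("albert-xlarge-v1")
print(config.num_hidden_layers)    # 24
print(config.embedding_size)       # 128
print(config.hidden_size)          # 2048
print(config.num_attention_heads)  # 16
```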
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
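As a minimal, hedged sketch of such fine-tuning (the texts and labels below are placeholders for a hypothetical two-class task; in practice you would wrap this in a proper training loop or the `Trainer` API):

```python
import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained("albert-xlarge-v1")
model = AlbertForSequenceClassification.from_pretrained("albert-xlarge-v1", num_labels=2)

batch = tokenizer(["a great movie", "a terrible movie"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

outputs = model(**batch, labels=labels)  # recent transformers versions return a loss here
outputs.loss.backward()                  # plug into your optimizer of choice
```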
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"â–modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"â–modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"â–model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"â–runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"â–lingerie"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
model = AlbertModel.from_pretrained("albert-xlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
model = TFAlbertModel.from_pretrained("albert-xlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v1')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"â–shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"â–blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"â–lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"â–receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"â–paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"â–waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | albert/albert-xlarge-v1 | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1909.11942"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ALBERT XLarge v1
================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version of the xlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
* 24 repeating layers
* 128 embedding dimension
* 2048 hidden dimension
* 16 attention heads
* 58M parameters
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
Evaluation results
------------------
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
### BibTeX entry and citation info
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# ALBERT XLarge v2
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the second version of the xlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 24 repeating layers
- 128 embedding dimension
- 2048 hidden dimension
- 16 attention heads
- 58M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v2')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"â–modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"â–modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"â–model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"â–runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"â–lingerie"
}
]
```
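If you prefer to query the masked-language-modeling head without the pipeline, a sketch like the following (assuming a recent `transformers` version with `return_dict`-style outputs) retrieves the top predictions for the masked position:

```python
import torch
from transformers import AlbertTokenizer, AlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained("albert-xlarge-v2")
model = AlbertForMaskedLM.from_pretrained("albert-xlarge-v2")

inputs = tokenizer("Hello I'm a [MASK] model.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the 5 highest-scoring tokens there
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top5.tolist()))
```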
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v2')
model = AlbertModel.from_pretrained("albert-xlarge-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v2')
model = TFAlbertModel.from_pretrained("albert-xlarge-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v2')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"â–shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"â–blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"â–lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"â–receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"â–paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"â–waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | albert/albert-xlarge-v2 | null | [
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1909.11942"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ALBERT XLarge v2
================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the second version of the xlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
* 24 repeating layers
* 128 embedding dimension
* 2048 hidden dimension
* 16 attention heads
* 58M parameters
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
Evaluation results
------------------
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
### BibTeX entry and citation info
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# ALBERT XXLarge v1
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version of the xxlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 4096 hidden dimension
- 64 attention heads
- 223M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"â–modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"â–modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"â–model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"â–runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"â–lingerie"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v1')
model = AlbertModel.from_pretrained("albert-xxlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v1')
model = TFAlbertModel.from_pretrained("albert-xxlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
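Once you have the model outputs, one simple (though by no means the only) way to turn them into fixed-size sentence features for a downstream classifier is mean pooling over the last hidden state. A PyTorch sketch, assuming a recent `transformers` version:

```python
import torch
from transformers import AlbertTokenizer, AlbertModel

tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v1")
model = AlbertModel.from_pretrained("albert-xxlarge-v1")

encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    output = model(**encoded_input)

# last_hidden_state: (batch, sequence_length, 4096); average over tokens for one vector per text
features = output.last_hidden_state.mean(dim=1)
# output.pooler_output (the [CLS]-based pooled vector) is another common choice
print(features.shape)  # torch.Size([1, 4096])
```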
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v1')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"â–shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"â–blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"â–lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"â–receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"â–paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"â–waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | albert/albert-xxlarge-v1 | null | [
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1909.11942"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ALBERT XXLarge v1
=================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version of the xxlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
* 12 repeating layers
* 128 embedding dimension
* 4096 hidden dimension
* 64 attention heads
* 223M parameters
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
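(The code blocks were stripped from this processed copy of the card. A minimal sketch of the usage described above, assuming the public `albert-xxlarge-v1` checkpoint on the Hub, would be:)
```python
from transformers import pipeline, AlbertTokenizer, AlbertModel

# Masked language modeling with the fill-mask pipeline
unmasker = pipeline("fill-mask", model="albert-xxlarge-v1")
print(unmasker("Hello I'm a [MASK] model."))

# Extracting the features of a given text in PyTorch
tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v1")
model = AlbertModel.from_pretrained("albert-xxlarge-v1")
encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
output = model(**encoded_input)
# For TensorFlow, TFAlbertModel.from_pretrained("albert-xxlarge-v1") works the same way
# with return_tensors="tf".
```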
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
Evaluation results
------------------
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
### BibTeX entry and citation info
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# ALBERT XXLarge v2
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the second version of the xxlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 4096 hidden dimension
- 64 attention heads
- 223M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v2')
>>> unmasker("Hello I'm a [MASK] model.")
[
   {
      "sequence":"[CLS] hello i'm a modeling model.[SEP]",
      "score":0.05816134437918663,
      "token":12807,
      "token_str":"▁modeling"
   },
   {
      "sequence":"[CLS] hello i'm a modelling model.[SEP]",
      "score":0.03748830780386925,
      "token":23089,
      "token_str":"▁modelling"
   },
   {
      "sequence":"[CLS] hello i'm a model model.[SEP]",
      "score":0.033725276589393616,
      "token":1061,
      "token_str":"▁model"
   },
   {
      "sequence":"[CLS] hello i'm a runway model.[SEP]",
      "score":0.017313428223133087,
      "token":8014,
      "token_str":"▁runway"
   },
   {
      "sequence":"[CLS] hello i'm a lingerie model.[SEP]",
      "score":0.014405295252799988,
      "token":29104,
      "token_str":"▁lingerie"
   }
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
model = AlbertModel.from_pretrained("albert-xxlarge-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
model = TFAlbertModel.from_pretrained("albert-xxlarge-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v2')
>>> unmasker("The man worked as a [MASK].")
[
   {
      "sequence":"[CLS] the man worked as a chauffeur.[SEP]",
      "score":0.029577180743217468,
      "token":28744,
      "token_str":"▁chauffeur"
   },
   {
      "sequence":"[CLS] the man worked as a janitor.[SEP]",
      "score":0.028865724802017212,
      "token":29477,
      "token_str":"▁janitor"
   },
   {
      "sequence":"[CLS] the man worked as a shoemaker.[SEP]",
      "score":0.02581118606030941,
      "token":29024,
      "token_str":"▁shoemaker"
   },
   {
      "sequence":"[CLS] the man worked as a blacksmith.[SEP]",
      "score":0.01849772222340107,
      "token":21238,
      "token_str":"▁blacksmith"
   },
   {
      "sequence":"[CLS] the man worked as a lawyer.[SEP]",
      "score":0.01820771023631096,
      "token":3672,
      "token_str":"▁lawyer"
   }
]
>>> unmasker("The woman worked as a [MASK].")
[
   {
      "sequence":"[CLS] the woman worked as a receptionist.[SEP]",
      "score":0.04604868218302727,
      "token":25331,
      "token_str":"▁receptionist"
   },
   {
      "sequence":"[CLS] the woman worked as a janitor.[SEP]",
      "score":0.028220869600772858,
      "token":29477,
      "token_str":"▁janitor"
   },
   {
      "sequence":"[CLS] the woman worked as a paramedic.[SEP]",
      "score":0.0261906236410141,
      "token":23386,
      "token_str":"▁paramedic"
   },
   {
      "sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
      "score":0.024797942489385605,
      "token":28744,
      "token_str":"▁chauffeur"
   },
   {
      "sequence":"[CLS] the woman worked as a waitress.[SEP]",
      "score":0.024124596267938614,
      "token":13678,
      "token_str":"▁waitress"
   }
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
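As an illustration (not part of the original card), the "▁" prefix that appears in the pipeline outputs above is SentencePiece's word-boundary marker; you can inspect the lowercased word pieces directly:
```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v2")
# The tokenizer lowercases the text; word-initial pieces carry the "▁" marker.
print(tokenizer.tokenize("Hello I'm a fashion model."))
# Output looks roughly like: ['▁hello', '▁i', "'", 'm', '▁a', '▁fashion', '▁model', '.']
```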
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
|                | Average | SQuAD1.1  | SQuAD2.0  | MNLI | SST-2 | RACE |
|----------------|---------|-----------|-----------|------|-------|------|
| V2             |         |           |           |      |       |      |
| ALBERT-base    | 82.3    | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9  | 66.8 |
| ALBERT-large   | 85.7    | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9  | 75.2 |
| ALBERT-xlarge  | 87.9    | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4  | 80.7 |
| ALBERT-xxlarge | 90.9    | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8  | 86.8 |
| V1             |         |           |           |      |       |      |
| ALBERT-base    | 80.1    | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3  | 64.0 |
| ALBERT-large   | 82.4    | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7  | 68.5 |
| ALBERT-xlarge  | 85.5    | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4  | 74.8 |
| ALBERT-xxlarge | 91.0    | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9  | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=albert-xxlarge-v2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a> | {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]} | albert/albert-xxlarge-v2 | null | [
"transformers",
"pytorch",
"tf",
"rust",
"safetensors",
"albert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1909.11942"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #rust #safetensors #albert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ALBERT XXLarge v2
=================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the second version of the xxlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
* 12 repeating layers
* 128 embedding dimension
* 4096 hidden dimension
* 64 attention heads
* 223M parameters
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
Evaluation results
------------------
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] | [
"TAGS\n#transformers #pytorch #tf #rust #safetensors #albert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
fill-mask | transformers |
# BERT base model (cased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference between
english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-cased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] Hello I'm a fashion model. [SEP]",
'score': 0.09019174426794052,
'token': 4633,
'token_str': 'fashion'},
{'sequence': "[CLS] Hello I'm a new model. [SEP]",
'score': 0.06349995732307434,
'token': 1207,
'token_str': 'new'},
{'sequence': "[CLS] Hello I'm a male model. [SEP]",
'score': 0.06228214129805565,
'token': 2581,
'token_str': 'male'},
{'sequence': "[CLS] Hello I'm a professional model. [SEP]",
'score': 0.0441727414727211,
'token': 1848,
'token_str': 'professional'},
{'sequence': "[CLS] Hello I'm a super model. [SEP]",
'score': 0.03326151892542839,
'token': 7688,
'token_str': 'super'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = BertModel.from_pretrained("bert-base-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertModel.from_pretrained("bert-base-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-cased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] The man worked as a lawyer. [SEP]',
'score': 0.04804691672325134,
'token': 4545,
'token_str': 'lawyer'},
{'sequence': '[CLS] The man worked as a waiter. [SEP]',
'score': 0.037494491785764694,
'token': 17989,
'token_str': 'waiter'},
{'sequence': '[CLS] The man worked as a cop. [SEP]',
'score': 0.035512614995241165,
'token': 9947,
'token_str': 'cop'},
{'sequence': '[CLS] The man worked as a detective. [SEP]',
'score': 0.031271643936634064,
'token': 9140,
'token_str': 'detective'},
{'sequence': '[CLS] The man worked as a doctor. [SEP]',
'score': 0.027423162013292313,
'token': 3995,
'token_str': 'doctor'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] The woman worked as a nurse. [SEP]',
'score': 0.16927455365657806,
'token': 7439,
'token_str': 'nurse'},
{'sequence': '[CLS] The woman worked as a waitress. [SEP]',
'score': 0.1501094549894333,
'token': 15098,
'token_str': 'waitress'},
{'sequence': '[CLS] The woman worked as a maid. [SEP]',
'score': 0.05600163713097572,
'token': 13487,
'token_str': 'maid'},
{'sequence': '[CLS] The woman worked as a housekeeper. [SEP]',
'score': 0.04838843643665314,
'token': 26458,
'token_str': 'housekeeper'},
{'sequence': '[CLS] The woman worked as a cook. [SEP]',
'score': 0.029980547726154327,
'token': 9834,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (see the sketch after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
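A minimal sketch of this 80/10/10 scheme (illustrative Python only, not the original preprocessing code; the real pipeline operates on WordPiece ids and respects the 512-token limit):
```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    """Illustrative 80/10/10 masking; not the original BERT preprocessing code."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mlm_prob:
            labels.append(tok)                       # this position is predicted
            r = random.random()
            if r < 0.8:
                masked.append(mask_token)            # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: replace with a random token
            else:
                masked.append(tok)                   # 10%: keep the token as is
        else:
            masked.append(tok)
            labels.append(None)                      # this position is not predicted
    return masked, labels
```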
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
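Not from the original card: a rough PyTorch equivalent of this optimizer and schedule (the actual pretraining used Google's TensorFlow code on TPUs), assuming the `transformers` scheduler helper:
```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

# Rough equivalent of the setup described above (illustrative only).
model = BertForPreTraining.from_pretrained("bert-base-cased")
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)
# In the training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```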
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-cased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]} | google-bert/bert-base-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| BERT base model (cased)
=======================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is case-sensitive: it makes a difference between
english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
fill-mask | transformers |
# Bert-base-chinese
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
### Model Description
This model has been pre-trained for Chinese. During training, random input masking was applied independently to word pieces (as in the original BERT paper).
- **Developed by:** HuggingFace team
- **Model Type:** Fill-Mask
- **Language(s):** Chinese
- **License:** [More Information needed]
- **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model.
### Model Sources
- **Paper:** [BERT](https://arxiv.org/abs/1810.04805)
## Uses
#### Direct Use
This model can be used for masked language modeling
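For example (not part of the original card), masked language modeling works directly through the fill-mask pipeline:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-chinese")
# "巴黎是[MASK]国的首都。" means "Paris is the capital of [MASK] country."
print(unmasker("巴黎是[MASK]国的首都。"))
```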
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Training
#### Training Procedure
* **type_vocab_size:** 2
* **vocab_size:** 21128
* **num_hidden_layers:** 12
#### Training Data
[More Information Needed]
## Evaluation
#### Results
[More Information Needed]
## How to Get Started With the Model
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")
```
| {"language": "zh"} | google-bert/bert-base-chinese | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:1810.04805",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [
"zh"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #zh #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Bert-base-chinese
## Table of Contents
- Model Details
- Uses
- Risks, Limitations and Biases
- Training
- Evaluation
- How to Get Started With the Model
## Model Details
### Model Description
This model has been pre-trained for Chinese. During training, random input masking was applied independently to word pieces (as in the original BERT paper).
- Developed by: HuggingFace team
- Model Type: Fill-Mask
- Language(s): Chinese
- License: [More Information needed]
- Parent Model: See the BERT base uncased model for more information about the BERT base model.
### Model Sources
- Paper: BERT
## Uses
#### Direct Use
This model can be used for masked language modeling
## Risks, Limitations and Biases
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
## Training
#### Training Procedure
* type_vocab_size: 2
* vocab_size: 21128
* num_hidden_layers: 12
#### Training Data
## Evaluation
#### Results
## How to Get Started With the Model
| [
"# Bert-base-chinese",
"## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- How to Get Started With the Model",
"## Model Details",
"### Model Description\n\nThis model has been pre-trained for Chinese, training and random input masking has been applied independently to word pieces (as in the original BERT paper).\n\n- Developed by: HuggingFace team\n- Model Type: Fill-Mask\n- Language(s): Chinese\n- License: [More Information needed]\n- Parent Model: See the BERT base uncased model for more information about the BERT base model.",
"### Model Sources\n- Paper: BERT",
"## Uses",
"#### Direct Use\n\nThis model can be used for masked language modeling",
"## Risks, Limitations and Biases\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Training",
"#### Training Procedure\n* type_vocab_size: 2\n* vocab_size: 21128\n* num_hidden_layers: 12",
"#### Training Data",
"## Evaluation",
"#### Results",
"## How to Get Started With the Model"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #zh #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Bert-base-chinese",
"## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- How to Get Started With the Model",
"## Model Details",
"### Model Description\n\nThis model has been pre-trained for Chinese, training and random input masking has been applied independently to word pieces (as in the original BERT paper).\n\n- Developed by: HuggingFace team\n- Model Type: Fill-Mask\n- Language(s): Chinese\n- License: [More Information needed]\n- Parent Model: See the BERT base uncased model for more information about the BERT base model.",
"### Model Sources\n- Paper: BERT",
"## Uses",
"#### Direct Use\n\nThis model can be used for masked language modeling",
"## Risks, Limitations and Biases\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Training",
"#### Training Procedure\n* type_vocab_size: 2\n* vocab_size: 21128\n* num_hidden_layers: 12",
"#### Training Data",
"## Evaluation",
"#### Results",
"## How to Get Started With the Model"
] |
fill-mask | transformers |
<a href="https://huggingface.co/exbert/?model=bert-base-german-cased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
# German BERT
![bert_image](https://static.tildacdn.com/tild6438-3730-4164-b266-613634323466/german_bert.png)
## Overview
**Language model:** bert-base-cased
**Language:** German
**Training data:** Wiki, OpenLegalData, News (~ 12GB)
**Eval data:** Conll03 (NER), GermEval14 (NER), GermEval18 (Classification), GNAD (Classification)
**Infrastructure**: 1x TPU v2
**Published**: Jun 14th, 2019
**Update April 3rd, 2020**: we updated the vocabulary file on deepset's s3 to conform with the default tokenization of punctuation tokens.
For details see the related [FARM issue](https://github.com/deepset-ai/FARM/issues/60). If you want to use the old vocab we have also uploaded a ["deepset/bert-base-german-cased-oldvocab"](https://huggingface.co/deepset/bert-base-german-cased-oldvocab) model.
## Details
- We trained using Google's Tensorflow code on a single cloud TPU v2 with standard settings.
- We trained 810k steps with a batch size of 1024 for sequence length 128 and 30k steps with sequence length 512. Training took about 9 days.
- As training data we used the latest German Wikipedia dump (6GB of raw txt files), the OpenLegalData dump (2.4 GB) and news articles (3.6 GB).
- We cleaned the data dumps with tailored scripts and segmented sentences with spacy v2.1. To create tensorflow records we used the recommended sentencepiece library for creating the word piece vocabulary and tensorflow scripts to convert the text to data usable by BERT.
See https://deepset.ai/german-bert for more details
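The original card does not include a usage snippet; a minimal sketch with the Transformers library, assuming the `bert-base-german-cased` checkpoint on the Hugging Face Hub, would be:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-german-cased")
# "Berlin ist die [MASK] von Deutschland." means "Berlin is the [MASK] of Germany."
print(unmasker("Berlin ist die [MASK] von Deutschland."))
```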
## Hyperparameters
```
batch_size = 1024
n_steps = 810_000
max_seq_len = 128 (and 512 later)
learning_rate = 1e-4
lr_schedule = LinearWarmup
num_warmup_steps = 10_000
```
## Performance
During training we monitored the loss and evaluated different model checkpoints on the following German datasets:
- germEval18Fine: Macro f1 score for multiclass sentiment classification
- germEval18coarse: Macro f1 score for binary sentiment classification
- germEval14: Seq f1 score for NER (file names deuutf.\*)
- CONLL03: Seq f1 score for NER
- 10kGNAD: Accuracy for document classification
Even without thorough hyperparameter tuning, we observed quite stable learning especially for our German model. Multiple restarts with different seeds produced quite similar results.
![performancetable](https://thumb.tildacdn.com/tild3162-6462-4566-b663-376630376138/-/format/webp/Screenshot_from_2020.png)
We further evaluated different points during the 9 days of pre-training and were astonished at how fast the model converges to its maximum reachable performance. We ran all 5 downstream tasks on 7 different model checkpoints - taken at 0 up to 840k training steps (x-axis in the figure below). Most checkpoints are taken from early training, where we expected the most performance changes. Surprisingly, even a randomly initialized BERT can be trained only on labeled downstream datasets and reach good performance (blue line, GermEval 2018 Coarse task, 795 kB trainset size).
![checkpointseval](https://thumb.tildacdn.com/tild6335-3531-4137-b533-313365663435/-/format/webp/deepset_checkpoints.png)
## Authors
- Branden Chan: `branden.chan [at] deepset.ai`
- Timo Möller: `timo.moeller [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Tanay Soni: `tanay.soni [at] deepset.ai`
## About us
![deepset logo](https://raw.githubusercontent.com/deepset-ai/FARM/master/docs/img/deepset_logo.png)
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)
| {"language": "de", "license": "mit", "tags": ["exbert"], "thumbnail": "https://static.tildacdn.com/tild6438-3730-4164-b266-613634323466/german_bert.png"} | google-bert/bert-base-german-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"bert",
"fill-mask",
"exbert",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"de"
] | TAGS
#transformers #pytorch #tf #jax #onnx #safetensors #bert #fill-mask #exbert #de #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
<a href="URL
<img width="300px" src="URL
</a>
# German BERT
!bert_image
## Overview
Language model: bert-base-cased
Language: German
Training data: Wiki, OpenLegalData, News (~ 12GB)
Eval data: Conll03 (NER), GermEval14 (NER), GermEval18 (Classification), GNAD (Classification)
Infrastructure: 1x TPU v2
Published: Jun 14th, 2019
Update April 3rd, 2020: we updated the vocabulary file on deepset's s3 to conform with the default tokenization of punctuation tokens.
For details see the related FARM issue. If you want to use the old vocab we have also uploaded a "deepset/bert-base-german-cased-oldvocab" model.
## Details
- We trained using Google's Tensorflow code on a single cloud TPU v2 with standard settings.
- We trained 810k steps with a batch size of 1024 for sequence length 128 and 30k steps with sequence length 512. Training took about 9 days.
- As training data we used the latest German Wikipedia dump (6GB of raw txt files), the OpenLegalData dump (2.4 GB) and news articles (3.6 GB).
- We cleaned the data dumps with tailored scripts and segmented sentences with spacy v2.1. To create tensorflow records we used the recommended sentencepiece library for creating the word piece vocabulary and tensorflow scripts to convert the text to data usable by BERT.
See URL for more details
## Hyperparameters
## Performance
During training we monitored the loss and evaluated different model checkpoints on the following German datasets:
- germEval18Fine: Macro f1 score for multiclass sentiment classification
- germEval18coarse: Macro f1 score for binary sentiment classification
- germEval14: Seq f1 score for NER (file names deuutf.\*)
- CONLL03: Seq f1 score for NER
- 10kGNAD: Accuracy for document classification
Even without thorough hyperparameter tuning, we observed quite stable learning especially for our German model. Multiple restarts with different seeds produced quite similar results.
!performancetable
We further evaluated different points during the 9 days of pre-training and were astonished how fast the model converges to the maximally reachable performance. We ran all 5 downstream tasks on 7 different model checkpoints - taken at 0 up to 840k training steps (x-axis in figure below). Most checkpoints are taken from early training where we expected most performance changes. Surprisingly, even a randomly initialized BERT can be trained only on labeled downstream datasets and reach good performance (blue line, GermEval 2018 Coarse task, 795 kB trainset size).
!checkpointseval
## Authors
- Branden Chan: 'URL [at] URL'
- Timo Möller: 'timo.moeller [at] URL'
- Malte Pietsch: 'malte.pietsch [at] URL'
- Tanay Soni: 'URL [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Website
| [
"# German BERT\n!bert_image",
"## Overview\nLanguage model: bert-base-cased \nLanguage: German \nTraining data: Wiki, OpenLegalData, News (~ 12GB) \nEval data: Conll03 (NER), GermEval14 (NER), GermEval18 (Classification), GNAD (Classification) \nInfrastructure: 1x TPU v2 \nPublished: Jun 14th, 2019\n\nUpdate April 3rd, 2020: we updated the vocabulary file on deepset's s3 to conform with the default tokenization of punctuation tokens. \nFor details see the related FARM issue. If you want to use the old vocab we have also uploaded a \"deepset/bert-base-german-cased-oldvocab\" model.",
"## Details\n- We trained using Google's Tensorflow code on a single cloud TPU v2 with standard settings.\n- We trained 810k steps with a batch size of 1024 for sequence length 128 and 30k steps with sequence length 512. Training took about 9 days.\n- As training data we used the latest German Wikipedia dump (6GB of raw txt files), the OpenLegalData dump (2.4 GB) and news articles (3.6 GB).\n- We cleaned the data dumps with tailored scripts and segmented sentences with spacy v2.1. To create tensorflow records we used the recommended sentencepiece library for creating the word piece vocabulary and tensorflow scripts to convert the text to data usable by BERT.\n\n\nSee URL for more details",
"## Hyperparameters",
"## Performance\n\nDuring training we monitored the loss and evaluated different model checkpoints on the following German datasets:\n\n- germEval18Fine: Macro f1 score for multiclass sentiment classification\n- germEval18coarse: Macro f1 score for binary sentiment classification\n- germEval14: Seq f1 score for NER (file names deuutf.\\*)\n- CONLL03: Seq f1 score for NER\n- 10kGNAD: Accuracy for document classification\n\nEven without thorough hyperparameter tuning, we observed quite stable learning especially for our German model. Multiple restarts with different seeds produced quite similar results.\n \n!performancetable \n\nWe further evaluated different points during the 9 days of pre-training and were astonished how fast the model converges to the maximally reachable performance. We ran all 5 downstream tasks on 7 different model checkpoints - taken at 0 up to 840k training steps (x-axis in figure below). Most checkpoints are taken from early training where we expected most performance changes. Surprisingly, even a randomly initialized BERT can be trained only on labeled downstream datasets and reach good performance (blue line, GermEval 2018 Coarse task, 795 kB trainset size).\n\n!checkpointseval",
"## Authors\n- Branden Chan: 'URL [at] URL'\n- Timo Möller: 'timo.moeller [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'\n- Tanay Soni: 'URL [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Website"
] | [
"TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #bert #fill-mask #exbert #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# German BERT\n!bert_image",
"## Overview\nLanguage model: bert-base-cased \nLanguage: German \nTraining data: Wiki, OpenLegalData, News (~ 12GB) \nEval data: Conll03 (NER), GermEval14 (NER), GermEval18 (Classification), GNAD (Classification) \nInfrastructure: 1x TPU v2 \nPublished: Jun 14th, 2019\n\nUpdate April 3rd, 2020: we updated the vocabulary file on deepset's s3 to conform with the default tokenization of punctuation tokens. \nFor details see the related FARM issue. If you want to use the old vocab we have also uploaded a \"deepset/bert-base-german-cased-oldvocab\" model.",
"## Details\n- We trained using Google's Tensorflow code on a single cloud TPU v2 with standard settings.\n- We trained 810k steps with a batch size of 1024 for sequence length 128 and 30k steps with sequence length 512. Training took about 9 days.\n- As training data we used the latest German Wikipedia dump (6GB of raw txt files), the OpenLegalData dump (2.4 GB) and news articles (3.6 GB).\n- We cleaned the data dumps with tailored scripts and segmented sentences with spacy v2.1. To create tensorflow records we used the recommended sentencepiece library for creating the word piece vocabulary and tensorflow scripts to convert the text to data usable by BERT.\n\n\nSee URL for more details",
"## Hyperparameters",
"## Performance\n\nDuring training we monitored the loss and evaluated different model checkpoints on the following German datasets:\n\n- germEval18Fine: Macro f1 score for multiclass sentiment classification\n- germEval18coarse: Macro f1 score for binary sentiment classification\n- germEval14: Seq f1 score for NER (file names deuutf.\\*)\n- CONLL03: Seq f1 score for NER\n- 10kGNAD: Accuracy for document classification\n\nEven without thorough hyperparameter tuning, we observed quite stable learning especially for our German model. Multiple restarts with different seeds produced quite similar results.\n \n!performancetable \n\nWe further evaluated different points during the 9 days of pre-training and were astonished how fast the model converges to the maximally reachable performance. We ran all 5 downstream tasks on 7 different model checkpoints - taken at 0 up to 840k training steps (x-axis in figure below). Most checkpoints are taken from early training where we expected most performance changes. Surprisingly, even a randomly initialized BERT can be trained only on labeled downstream datasets and reach good performance (blue line, GermEval 2018 Coarse task, 795 kB trainset size).\n\n!checkpointseval",
"## Authors\n- Branden Chan: 'URL [at] URL'\n- Timo Möller: 'timo.moeller [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'\n- Tanay Soni: 'URL [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Website"
] |
fill-mask | transformers |
This model is the same as [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased). See the [dbmdz/bert-base-german-cased model card](https://huggingface.co/dbmdz/bert-base-german-cased) for details on the model. | {"language": "de", "license": "mit"} | google-bert/bert-base-german-dbmdz-cased | null | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"de"
] | TAGS
#transformers #pytorch #jax #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
This model is the same as dbmdz/bert-base-german-cased. See the dbmdz/bert-base-german-cased model card for details on the model. | [] | [
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers |
This model is the same as [dbmdz/bert-base-german-uncased](https://huggingface.co/dbmdz/bert-base-german-uncased). See the [dbmdz/bert-base-german-uncased model card](https://huggingface.co/dbmdz/bert-base-german-uncased) for details on the model.
| {"language": "de", "license": "mit"} | google-bert/bert-base-german-dbmdz-uncased | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"de"
] | TAGS
#transformers #pytorch #jax #safetensors #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
This model is the same as dbmdz/bert-base-german-uncased. See the dbmdz/bert-base-german-uncased model card for details on the model.
| [] | [
"TAGS\n#transformers #pytorch #jax #safetensors #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers |
# BERT multilingual base model (cased)
Pretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective.
It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is case sensitive: it makes a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the languages in the training set that can then be used to
extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a
standard classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-multilingual-cased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] Hello I'm a model model. [SEP]",
'score': 0.10182085633277893,
'token': 13192,
'token_str': 'model'},
{'sequence': "[CLS] Hello I'm a world model. [SEP]",
'score': 0.052126359194517136,
'token': 11356,
'token_str': 'world'},
{'sequence': "[CLS] Hello I'm a data model. [SEP]",
'score': 0.048930276185274124,
'token': 11165,
'token_str': 'data'},
{'sequence': "[CLS] Hello I'm a flight model. [SEP]",
'score': 0.02036019042134285,
'token': 23578,
'token_str': 'flight'},
{'sequence': "[CLS] Hello I'm a business model. [SEP]",
'score': 0.020079681649804115,
'token': 14155,
'token_str': 'business'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertModel.from_pretrained("bert-base-multilingual-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = TFBertModel.from_pretrained("bert-base-multilingual-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list
[here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece with a shared vocabulary size of 110,000. The languages with a
larger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that are not written with spaces, spaces are added around every character in the CJK Unicode block.
The inputs of the model are then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
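As a hedged illustration (the example sentences are arbitrary and not part of the original card), encoding a sentence pair with the tokenizer reproduces this layout, and the per-character handling of CJK text can be seen the same way:

```python
# Hedged illustration of the input layout and of the per-character CJK handling;
# the example sentences are arbitrary.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

encoded = tokenizer("Sentence A goes here.", "Sentence B goes here.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# starts with [CLS], has [SEP] between the two sentences and a final [SEP]

print(tokenizer.tokenize("你好世界"))
# Chinese text is split into individual characters, e.g. ['你', '好', '世', '界']
```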
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a minimal sketch is shown after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
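Below is a minimal, hedged re-implementation of this 80/10/10 rule for a single tokenized sentence; it is only a sketch, not the original pretraining code, and the `-100` label convention is an assumption borrowed from the usual Hugging Face MLM setup.

```python
# Minimal sketch of the 80/10/10 masking rule described above; not the original pretraining code.
import random
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

def mask_tokens(token_ids, mlm_probability=0.15):
    masked = list(token_ids)
    labels = [-100] * len(token_ids)                  # -100 = position ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if tok in tokenizer.all_special_ids or random.random() >= mlm_probability:
            continue                                  # keep special tokens and the other 85%
        labels[i] = tok                               # the model must predict the original token
        r = random.random()
        if r < 0.8:                                   # 80%: replace with [MASK]
            masked[i] = tokenizer.mask_token_id
        elif r < 0.9:                                 # 10%: replace with a random token
            masked[i] = random.randrange(tokenizer.vocab_size)
        # remaining 10%: leave the token unchanged
    return masked, labels

ids = tokenizer("Hello I'm a multilingual model.")["input_ids"]
print(mask_tokens(ids))
```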
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| {"language": ["multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "hr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo"], "license": "apache-2.0", "datasets": ["wikipedia"]} | google-bert/bert-base-multilingual-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"hr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk #mg #ms #ml #mr #mn #min #ne #new #nb #nn #oc #fa #pms #pl #pt #pa #ro #ru #sco #sr #scn #sk #sl #aze #es #su #sw #sv #tl #tg #th #ta #tt #te #tr #uk #ud #uz #vi #vo #war #cy #fry #pnb #yo #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# BERT multilingual base model (cased)
Pretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective.
It was introduced in this paper and first released in
this repository. This model is case sensitive: it makes a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the languages in the training set that can then be used to
extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a
standard classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
## Training data
The BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list
here.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece with a shared vocabulary size of 110,000. The languages with a
larger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that are not written with spaces, spaces are added around every character in the CJK Unicode block.
The inputs of the model are then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### BibTeX entry and citation info
| [
"# BERT multilingual base model (cased)\n\nPretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective.\nIt was introduced in this paper and first released in\nthis repository. This model is case sensitive: it makes a difference\nbetween english and English.\n\nDisclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by\nthe Hugging Face team.",
"## Model description\n\nBERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of the languages in the training set that can then be used to\nextract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a\nstandard classifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list\nhere.",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a shared vocabulary size of 110,000. The languages with a\nlarger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,\nJapanese Kanji and Korean Hanja that don't have space, a CJK Unicode block is added around every character. \n\nThe inputs of the model are then of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk #mg #ms #ml #mr #mn #min #ne #new #nb #nn #oc #fa #pms #pl #pt #pa #ro #ru #sco #sr #scn #sk #sl #aze #es #su #sw #sv #tl #tg #th #ta #tt #te #tr #uk #ud #uz #vi #vo #war #cy #fry #pnb #yo #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# BERT multilingual base model (cased)\n\nPretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective.\nIt was introduced in this paper and first released in\nthis repository. This model is case sensitive: it makes a difference\nbetween english and English.\n\nDisclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by\nthe Hugging Face team.",
"## Model description\n\nBERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of the languages in the training set that can then be used to\nextract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a\nstandard classifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list\nhere.",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a shared vocabulary size of 110,000. The languages with a\nlarger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,\nJapanese Kanji and Korean Hanja that don't have space, a CJK Unicode block is added around every character. \n\nThe inputs of the model are then of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# BERT multilingual base model (uncased)
Pretrained model on the top 102 languages with the largest Wikipedia using a masked language modeling (MLM) objective.
It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
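A quick, hedged way to see this uncased behaviour (the example strings are arbitrary):

```python
# Hedged check of the uncased behaviour; example strings are arbitrary.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-uncased")
print(tokenizer.tokenize("english") == tokenizer.tokenize("English"))  # True: case is removed
print(tokenizer.tokenize("Héllo"))  # accents are stripped by the uncased preprocessing
```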
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the languages in the training set that can then be used to
extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a
standard classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-multilingual-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a top model. [SEP]",
'score': 0.1507750153541565,
'token': 11397,
'token_str': 'top'},
{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.13075384497642517,
'token': 23589,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a good model. [SEP]",
'score': 0.036272723227739334,
'token': 12050,
'token_str': 'good'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.035954564809799194,
'token': 10246,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a great model. [SEP]",
'score': 0.028643041849136353,
'token': 11838,
'token_str': 'great'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')
model = BertModel.from_pretrained("bert-base-multilingual-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')
model = TFBertModel.from_pretrained("bert-base-multilingual-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-multilingual-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a teacher. [SEP]',
'score': 0.07943806052207947,
'token': 21733,
'token_str': 'teacher'},
{'sequence': '[CLS] the man worked as a lawyer. [SEP]',
'score': 0.0629938617348671,
'token': 34249,
'token_str': 'lawyer'},
{'sequence': '[CLS] the man worked as a farmer. [SEP]',
'score': 0.03367974981665611,
'token': 36799,
'token_str': 'farmer'},
{'sequence': '[CLS] the man worked as a journalist. [SEP]',
'score': 0.03172805905342102,
'token': 19477,
'token_str': 'journalist'},
{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.031021825969219208,
'token': 33241,
'token_str': 'carpenter'}]
>>> unmasker("The Black woman worked as a [MASK].")
[{'sequence': '[CLS] the black woman worked as a nurse. [SEP]',
'score': 0.07045423984527588,
'token': 52428,
'token_str': 'nurse'},
{'sequence': '[CLS] the black woman worked as a teacher. [SEP]',
'score': 0.05178029090166092,
'token': 21733,
'token_str': 'teacher'},
{'sequence': '[CLS] the black woman worked as a lawyer. [SEP]',
'score': 0.032601192593574524,
'token': 34249,
'token_str': 'lawyer'},
{'sequence': '[CLS] the black woman worked as a slave. [SEP]',
'score': 0.030507225543260574,
'token': 31173,
'token_str': 'slave'},
{'sequence': '[CLS] the black woman worked as a woman. [SEP]',
'score': 0.027691684663295746,
'token': 14050,
'token_str': 'woman'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on the 102 languages with the largest Wikipedias. You can find the complete list
[here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a shared vocabulary size of 110,000. The languages with a
larger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that are not written with spaces, spaces are added around every character in the CJK Unicode block.
The inputs of the model are then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| {"language": ["multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "hr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo"], "license": "apache-2.0", "datasets": ["wikipedia"]} | google-bert/bert-base-multilingual-uncased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"hr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk #mg #ms #ml #mr #min #ne #new #nb #nn #oc #fa #pms #pl #pt #pa #ro #ru #sco #sr #scn #sk #sl #aze #es #su #sw #sv #tl #tg #ta #tt #te #tr #uk #ud #uz #vi #vo #war #cy #fry #pnb #yo #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT multilingual base model (uncased)
Pretrained model on the top 102 languages with the largest Wikipedia using a masked language modeling (MLM) objective.
It was introduced in this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the languages in the training set that can then be used to
extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a
standard classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on the 102 languages with the largest Wikipedias. You can find the complete list
here.
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a shared vocabulary size of 110,000. The languages with a
larger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that are not written with spaces, spaces are added around every character in the CJK Unicode block.
The inputs of the model are then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### BibTeX entry and citation info
| [
"# BERT multilingual base model (uncased)\n\nPretrained model on the top 102 languages with the largest Wikipedia using a masked language modeling (MLM) objective.\nIt was introduced in this paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by\nthe Hugging Face team.",
"## Model description\n\nBERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of the languages in the training set that can then be used to\nextract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a\nstandard classifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\n\nThis bias will also affect all fine-tuned versions of this model.",
"## Training data\n\nThe BERT model was pretrained on the 102 languages with the largest Wikipedias. You can find the complete list\nhere.",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a shared vocabulary size of 110,000. The languages with a\nlarger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,\nJapanese Kanji and Korean Hanja that don't have space, a CJK Unicode block is added around every character. \n\nThe inputs of the model are then of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk #mg #ms #ml #mr #min #ne #new #nb #nn #oc #fa #pms #pl #pt #pa #ro #ru #sco #sr #scn #sk #sl #aze #es #su #sw #sv #tl #tg #ta #tt #te #tr #uk #ud #uz #vi #vo #war #cy #fry #pnb #yo #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT multilingual base model (uncased)\n\nPretrained model on the top 102 languages with the largest Wikipedia using a masked language modeling (MLM) objective.\nIt was introduced in this paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by\nthe Hugging Face team.",
"## Model description\n\nBERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of the languages in the training set that can then be used to\nextract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a\nstandard classifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\n\nThis bias will also affect all fine-tuned versions of this model.",
"## Training data\n\nThe BERT model was pretrained on the 102 languages with the largest Wikipedias. You can find the complete list\nhere.",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a shared vocabulary size of 110,000. The languages with a\nlarger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,\nJapanese Kanji and Korean Hanja that don't have space, a CJK Unicode block is added around every character. \n\nThe inputs of the model are then of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model variations
BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
In follow-up work, modified preprocessing with whole word masking replaced subpiece masking, with the release of two further models.
24 smaller models were released afterward. (A short sketch for checking the parameter counts is shown after the table below.)
The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on github.
| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English |
| [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English |
| [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English |
| [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English |
| [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese |
| [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple |
| [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English |
| [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English |
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
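Beyond feature extraction, the masked language modeling head can also be queried directly without the pipeline. The sketch below is illustrative only; the example sentence is a placeholder, and it assumes the standard `BertForMaskedLM` class:

```python
# Sketch: querying the MLM head directly instead of via the fill-mask pipeline.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')

inputs = tokenizer("Hello I'm a [MASK] model.", return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and read off the five most likely replacements.
mask_pos = (inputs['input_ids'] == tokenizer.mask_token_id).nonzero()[0, 1]
top5 = torch.topk(logits[0, mask_pos], k=5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top5))
```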
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
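A minimal, illustrative sketch of this 80%/10%/10% rule is shown below. It is not the original preprocessing code; the function name, signature and the use of `-100` as the ignored-label value are assumptions made for the example:

```python
# Illustrative sketch of the masking rule described above, applied to token ids.
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    input_ids = list(token_ids)
    labels = [-100] * len(token_ids)           # positions the loss should ignore
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_probability:  # 15% of the tokens are selected
            labels[i] = tok                    # the model must predict the original token
            r = random.random()
            if r < 0.8:                        # 80%: replace with [MASK]
                input_ids[i] = mask_token_id
            elif r < 0.9:                      # 10%: replace with a random token
                input_ids[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token unchanged
    return input_ids, labels
```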
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
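For reference, a roughly equivalent optimizer and schedule can be sketched in PyTorch with the `transformers` helpers below. The original training used the TensorFlow implementation on TPUs, so this is only an approximation of the stated hyper-parameters, not the actual training code:

```python
# Sketch: Adam with weight decay, 10k warmup steps and linear decay over 1M steps.
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=1_000_000)
# During training, call optimizer.step() followed by scheduler.step() each step.
```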
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
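The exact GLUE fine-tuning recipe is not reproduced in this card. As a starting point, one might fine-tune this checkpoint on a single GLUE task (MRPC here) with the `Trainer` API as sketched below; the hyper-parameters (2e-5 learning rate, 3 epochs, batch size 32) are common defaults, not necessarily those behind the table above:

```python
# Sketch: fine-tuning bert-base-uncased on GLUE/MRPC with the Trainer API.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

raw = load_dataset("glue", "mrpc")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess(batch):
    return tokenizer(batch["sentence1"], batch["sentence2"],
                     truncation=True, max_length=128)

encoded = raw.map(preprocess, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=2)
args = TrainingArguments(output_dir="bert-base-uncased-mrpc",
                         learning_rate=2e-5, num_train_epochs=3,
                         per_device_train_batch_size=32)
Trainer(model=model, args=args,
        train_dataset=encoded["train"],
        eval_dataset=encoded["validation"],
        tokenizer=tokenizer).train()
```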
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]} | google-bert/bert-base-uncased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"coreml",
"onnx",
"safetensors",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #rust #coreml #onnx #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| BERT base model (uncased)
=========================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
Model variations
----------------
BERT has originally been released in base and large variations, for cased and uncased input text. The uncased models also strips out an accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole word masking has replaced subpiece masking in a following work, with the release of two models.
Other 24 smaller models are released afterward.
The detailed release history can be found on the google-research/bert readme on github.
Model: 'bert-base-uncased', #params: 110M, Language: English
Model: 'bert-large-uncased', #params: 340M, Language: English
Model: 'bert-base-cased', #params: 110M, Language: English
Model: 'bert-large-cased', #params: 340M, Language: English
Model: 'bert-base-chinese', #params: 110M, Language: Chinese
Model: 'bert-base-multilingual-cased', #params: 110M, Language: Multiple
Model: 'bert-large-uncased-whole-word-masking', #params: 340M, Language: English
Model: 'bert-large-cased-whole-word-masking', #params: 340M, Language: English
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #coreml #onnx #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
question-answering | transformers |
# BERT large model (cased) whole word masking finetuned on SQuAD
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it makes a difference between english and English.
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
This model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the [task summary](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) of the transformers documentation.
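For example, a minimal pipeline invocation could look like the sketch below; the question and context strings are placeholders:

```python
# Sketch: extractive question answering with this checkpoint via the pipeline.
from transformers import pipeline

qa = pipeline("question-answering",
              model="bert-large-cased-whole-word-masking-finetuned-squad")
result = qa(question="Who wrote BERT?",
            context="BERT was introduced by researchers at Google AI Language in 2018.")
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```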
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### Fine-tuning
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:
```
python -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_qa.py \
--model_name_or_path bert-large-cased-whole-word-masking \
--dataset_name squad \
--do_train \
--do_eval \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./examples/models/wwm_cased_finetuned_squad/ \
--per_device_eval_batch_size=3 \
  --per_device_train_batch_size=3
```
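For the raw-results use case mentioned above, the start and end logits can be read directly from the model; the sketch below is illustrative, and the question and context strings are placeholders:

```python
# Sketch: decoding an answer span from the raw start/end logits.
import torch
from transformers import BertTokenizer, BertForQuestionAnswering

name = "bert-large-cased-whole-word-masking-finetuned-squad"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForQuestionAnswering.from_pretrained(name)

inputs = tokenizer("Who wrote BERT?",
                   "BERT was introduced by researchers at Google AI Language.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```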
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | google-bert/bert-large-cased-whole-word-masking-finetuned-squad | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #rust #safetensors #bert #question-answering #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #endpoints_compatible #region-us
|
# BERT large model (cased) whole word masking finetuned on SQuAD
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is cased: it makes a difference between english and English.
Differently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
This model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the task summary of the transformers documentation.## Training data
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### Fine-tuning
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:
### BibTeX entry and citation info
| [
"# BERT large model (cased) whole word masking finetuned on SQuAD\n\nPretrained model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is cased: it makes a difference between english and English.\n\nDifferently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.\n\nThe training is identical -- each masked WordPiece token is predicted independently. \n\nAfter pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.\n\nDisclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by\nthe Hugging Face team.",
"## Model description\n\nBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nThis model has the following configuration:\n\n- 24-layer\n- 1024 hidden dimension\n- 16 attention heads\n- 336M parameters.",
"## Intended uses & limitations\nThis model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the task summary of the transformers documentation.## Training data\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### Fine-tuning\n\nAfter pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #bert #question-answering #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# BERT large model (cased) whole word masking finetuned on SQuAD\n\nPretrained model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is cased: it makes a difference between english and English.\n\nDifferently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.\n\nThe training is identical -- each masked WordPiece token is predicted independently. \n\nAfter pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.\n\nDisclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by\nthe Hugging Face team.",
"## Model description\n\nBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nThis model has the following configuration:\n\n- 24-layer\n- 1024 hidden dimension\n- 16 attention heads\n- 336M parameters.",
"## Intended uses & limitations\nThis model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the task summary of the transformers documentation.## Training data\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### Fine-tuning\n\nAfter pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# BERT large model (cased) whole word masking
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it makes a difference between english and English.
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
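To make the idea concrete, the sketch below groups WordPiece sub-tokens back into words and masks every piece of one randomly chosen word. It is an illustration only, not the original training code; the example sentence and grouping logic are ours:

```python
# Illustrative sketch of whole word masking at the WordPiece level.
import random
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
tokens = tokenizer.tokenize("Transformers are surprisingly effective.")

# Group indices of sub-tokens belonging to the same word ('##' marks continuations).
word_groups = []
for i, tok in enumerate(tokens):
    if tok.startswith("##") and word_groups:
        word_groups[-1].append(i)
    else:
        word_groups.append([i])

# Mask every piece of one randomly chosen word at once.
chosen = random.choice(word_groups)
masked = [tokenizer.mask_token if i in chosen else tok for i, tok in enumerate(tokens)]
print(masked)
```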
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-cased-whole-word-masking')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] Hello I'm a fashion model. [SEP]",
"score":0.1474294513463974,
"token":4633,
"token_str":"fashion"
},
{
"sequence":"[CLS] Hello I'm a magazine model. [SEP]",
"score":0.05430116504430771,
"token":2435,
"token_str":"magazine"
},
{
"sequence":"[CLS] Hello I'm a male model. [SEP]",
"score":0.039395421743392944,
"token":2581,
"token_str":"male"
},
{
"sequence":"[CLS] Hello I'm a former model. [SEP]",
"score":0.036936815828084946,
"token":1393,
"token_str":"former"
},
{
"sequence":"[CLS] Hello I'm a professional model. [SEP]",
"score":0.03663451969623566,
"token":1848,
"token_str":"professional"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
model = BertModel.from_pretrained("bert-large-cased-whole-word-masking")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
model = TFBertModel.from_pretrained("bert-large-cased-whole-word-masking")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-cased-whole-word-masking')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] The man worked as a carpenter. [SEP]",
"score":0.09021259099245071,
"token":25169,
"token_str":"carpenter"
},
{
"sequence":"[CLS] The man worked as a cook. [SEP]",
"score":0.08125395327806473,
"token":9834,
"token_str":"cook"
},
{
"sequence":"[CLS] The man worked as a mechanic. [SEP]",
"score":0.07524766772985458,
"token":19459,
"token_str":"mechanic"
},
{
"sequence":"[CLS] The man worked as a waiter. [SEP]",
"score":0.07397029548883438,
"token":17989,
"token_str":"waiter"
},
{
"sequence":"[CLS] The man worked as a guard. [SEP]",
"score":0.05848982185125351,
"token":3542,
"token_str":"guard"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] The woman worked as a maid. [SEP]",
"score":0.19436432421207428,
"token":13487,
"token_str":"maid"
},
{
"sequence":"[CLS] The woman worked as a waitress. [SEP]",
"score":0.16161060333251953,
"token":15098,
"token_str":"waitress"
},
{
"sequence":"[CLS] The woman worked as a nurse. [SEP]",
"score":0.14942803978919983,
"token":7439,
"token_str":"nurse"
},
{
"sequence":"[CLS] The woman worked as a secretary. [SEP]",
"score":0.10373266786336899,
"token":4848,
"token_str":"secretary"
},
{
"sequence":"[CLS] The woman worked as a cook. [SEP]",
"score":0.06384387612342834,
"token":9834,
"token_str":"cook"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
| Model | SQuAD 1.1 F1/EM | Multi NLI Accuracy |
|:---------------------------------------|:---------------:|:------------------:|
| BERT-Large, Cased (Whole Word Masking) | 92.9/86.7 | 86.46 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | google-bert/bert-large-cased-whole-word-masking | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| BERT large model (cased) whole word masking
===========================================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is cased: it makes a difference between english and English.
Differently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
* 24-layer
* 1024 hidden dimension
* 16 attention heads
* 336M parameters.
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# BERT large model (cased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it makes a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-cased')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] Hello I'm a male model. [SEP]",
"score":0.22748498618602753,
"token":2581,
"token_str":"male"
},
{
"sequence":"[CLS] Hello I'm a fashion model. [SEP]",
"score":0.09146175533533096,
"token":4633,
"token_str":"fashion"
},
{
"sequence":"[CLS] Hello I'm a new model. [SEP]",
"score":0.05823173746466637,
"token":1207,
"token_str":"new"
},
{
"sequence":"[CLS] Hello I'm a super model. [SEP]",
"score":0.04488750174641609,
"token":7688,
"token_str":"super"
},
{
"sequence":"[CLS] Hello I'm a famous model. [SEP]",
"score":0.03271442651748657,
"token":2505,
"token_str":"famous"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
model = BertModel.from_pretrained("bert-large-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
model = TFBertModel.from_pretrained("bert-large-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-cased')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] The man worked as a doctor. [SEP]",
"score":0.0645911768078804,
"token":3995,
"token_str":"doctor"
},
{
"sequence":"[CLS] The man worked as a cop. [SEP]",
"score":0.057450827211141586,
"token":9947,
"token_str":"cop"
},
{
"sequence":"[CLS] The man worked as a mechanic. [SEP]",
"score":0.04392256215214729,
"token":19459,
"token_str":"mechanic"
},
{
"sequence":"[CLS] The man worked as a waiter. [SEP]",
"score":0.03755280375480652,
"token":17989,
"token_str":"waiter"
},
{
"sequence":"[CLS] The man worked as a teacher. [SEP]",
"score":0.03458863124251366,
"token":3218,
"token_str":"teacher"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] The woman worked as a nurse. [SEP]",
"score":0.2572779953479767,
"token":7439,
"token_str":"nurse"
},
{
"sequence":"[CLS] The woman worked as a waitress. [SEP]",
"score":0.16706500947475433,
"token":15098,
"token_str":"waitress"
},
{
"sequence":"[CLS] The woman worked as a teacher. [SEP]",
"score":0.04587847739458084,
"token":3218,
"token_str":"teacher"
},
{
"sequence":"[CLS] The woman worked as a secretary. [SEP]",
"score":0.03577028587460518,
"token":4848,
"token_str":"secretary"
},
{
"sequence":"[CLS] The woman worked as a maid. [SEP]",
"score":0.03298963978886604,
"token":13487,
"token_str":"maid"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,000 (no lowercasing, since this model is cased). The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
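As a hedged illustration of this 80% / 10% / 10% replacement rule (a re-implementation sketch, not the original training code), the selection logic looks roughly like this:
```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_probability=0.15):
    """BERT-style masking: ~15% of tokens are selected as prediction targets;
    of those, 80% become [MASK], 10% become a random token, 10% stay unchanged."""
    masked = list(tokens)
    labels = [None] * len(tokens)          # None = not a prediction target
    for i, token in enumerate(tokens):
        if random.random() < mlm_probability:
            labels[i] = token              # the model must predict the original token
            roll = random.random()
            if roll < 0.8:
                masked[i] = mask_token     # 80%: replace with [MASK]
            elif roll < 0.9:
                masked[i] = random.choice(vocab)  # 10%: replace with a random token
            # remaining 10%: keep the original token as is
    return masked, labels
```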
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
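Expressed with today's PyTorch/transformers API (an illustrative sketch of the stated hyperparameters, not the original TPU training code; `AdamW` is used here as a modern stand-in for Adam with weight decay):
```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

# BERT-large shape: 24 layers, 1024 hidden dimension, 16 attention heads
config = BertConfig(num_hidden_layers=24, hidden_size=1024,
                    num_attention_heads=16, intermediate_size=4096)
model = BertForPreTraining(config)

# lr 1e-4, beta1 = 0.9, beta2 = 0.999, weight decay 0.01
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps, then linear decay over the one million total steps
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000)
```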
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy
---------------------------------------- | :-------------: | :----------------:
BERT-Large, Cased (Original) | 91.5/84.8 | 86.09
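For reference, the MNLI number comes from fine-tuning; a command along the lines of the one below would produce a comparable run with the transformers `run_glue.py` example script (the hyperparameters are illustrative rather than reported in this card, and the script path follows the examples layout used elsewhere in this document):
```
python ./examples/text-classification/run_glue.py \
  --model_name_or_path bert-large-cased \
  --task_name mnli \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 3e-5 \
  --num_train_epochs 3 \
  --output_dir ./examples/models/mnli_bert_large_cased/
```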
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | google-bert/bert-large-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| BERT large model (cased)
========================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is cased: it makes a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
* 24-layer
* 1024 hidden dimension
* 16 attention heads
* 336M parameters.
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,000 (no lowercasing, since this model is cased). The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:",
"### BibTeX entry and citation info"
] |
question-answering | transformers |
# BERT large model (uncased) whole word masking finetuned on SQuAD
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
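To make the difference concrete, here is a small sketch (not the original data pipeline; the per-token 80/10/10 replacement rule is omitted for brevity) of how whole word masking groups WordPiece pieces before masking: continuation pieces start with '##' and belong to the preceding word.
```python
import random

def whole_word_mask(tokens, mask_token="[MASK]", mlm_probability=0.15):
    """Group WordPiece tokens into words, then mask every piece of a selected word."""
    words = []                      # each entry: indices of the pieces of one word
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)     # continuation piece of the previous word
        else:
            words.append([i])       # start of a new word

    masked = list(tokens)
    for piece_indices in words:
        if random.random() < mlm_probability:
            for i in piece_indices:
                masked[i] = mask_token   # all pieces of the word are masked together
    return masked

# e.g. ["the", "phil", "##am", "##mon", "sang"]: the three pieces of "philammon"
# are either all kept or all replaced by [MASK]
print(whole_word_mask(["the", "phil", "##am", "##mon", "sang"]))
```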
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
This model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the [task summary](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) of the transformers documentation.
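As a minimal usage sketch (not part of the original card; the question and context strings are placeholders), the checkpoint can be loaded into the standard `question-answering` pipeline:
```python
from transformers import pipeline

# Extractive QA with the SQuAD-finetuned checkpoint
qa_pipeline = pipeline(
    'question-answering',
    model='bert-large-uncased-whole-word-masking-finetuned-squad')

# Placeholder inputs; replace with your own question and context
result = qa_pipeline(
    question="Which dataset was the model fine-tuned on?",
    context="After pre-training, the model was fine-tuned on the SQuAD dataset.")
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'SQuAD'}
```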
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### Fine-tuning
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:
```
python -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_qa.py \
--model_name_or_path bert-large-uncased-whole-word-masking \
--dataset_name squad \
--do_train \
--do_eval \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./examples/models/wwm_uncased_finetuned_squad/ \
--per_device_eval_batch_size=3 \
  --per_device_train_batch_size=3
```
## Evaluation results
The results obtained are the following:
```
f1 = 93.15
exact_match = 86.91
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | google-bert/bert-large-uncased-whole-word-masking-finetuned-squad | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #question-answering #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# BERT large model (uncased) whole word masking finetuned on SQuAD
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
This model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the task summary of the transformers documentation.
## Training data
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### Fine-tuning
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:
## Evaluation results
The results obtained are the following:
### BibTeX entry and citation info
| [
"# BERT large model (uncased) whole word masking finetuned on SQuAD\n\nPretrained model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDifferently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.\n\nThe training is identical -- each masked WordPiece token is predicted independently. \n\nAfter pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.\n\nDisclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by\nthe Hugging Face team.",
"## Model description\n\nBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nThis model has the following configuration:\n\n- 24-layer\n- 1024 hidden dimension\n- 16 attention heads\n- 336M parameters.",
"## Intended uses & limitations\nThis model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the task summary of the transformers documentation.## Training data\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### Fine-tuning\n\nAfter pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:",
"## Evaluation results\n\nThe results obtained are the following:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #question-answering #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# BERT large model (uncased) whole word masking finetuned on SQuAD\n\nPretrained model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDifferently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.\n\nThe training is identical -- each masked WordPiece token is predicted independently. \n\nAfter pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.\n\nDisclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by\nthe Hugging Face team.",
"## Model description\n\nBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.\n\nThis model has the following configuration:\n\n- 24-layer\n- 1024 hidden dimension\n- 16 attention heads\n- 336M parameters.",
"## Intended uses & limitations\nThis model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the task summary of the transformers documentation.## Training data\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### Fine-tuning\n\nAfter pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:",
"## Evaluation results\n\nThe results obtained are the following:",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# BERT large model (uncased) whole word masking
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
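If you want to reproduce this masking scheme with the library, transformers ships a whole-word-masking data collator; the snippet below is an illustrative sketch, and the exact collator behavior may vary across transformers versions:
```python
from transformers import BertTokenizer, DataCollatorForWholeWordMask

tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')

# Masks all WordPiece pieces of a selected word together, at a 15% rate
collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15)

examples = [{"input_ids": tokenizer("Here is an unbreakable example sentence.")["input_ids"]}]
batch = collator(examples)
print(batch["input_ids"])  # some words have every piece replaced by [MASK]
print(batch["labels"])     # original ids at masked positions, -100 elsewhere
```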
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-uncased-whole-word-masking')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.15813860297203064,
'token': 4827,
'token_str': 'fashion'
}, {
'sequence': "[CLS] hello i'm a cover model. [SEP]",
'score': 0.10551052540540695,
'token': 3104,
'token_str': 'cover'
}, {
'sequence': "[CLS] hello i'm a male model. [SEP]",
'score': 0.08340442180633545,
'token': 3287,
'token_str': 'male'
}, {
'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.036381796002388,
'token': 3565,
'token_str': 'super'
}, {
'sequence': "[CLS] hello i'm a top model. [SEP]",
'score': 0.03609578311443329,
'token': 2327,
'token_str': 'top'
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')
model = BertModel.from_pretrained("bert-large-uncased-whole-word-masking")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')
model = TFBertModel.from_pretrained("bert-large-uncased-whole-word-masking")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-uncased-whole-word-masking')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a waiter. [SEP]",
"score":0.09823174774646759,
"token":15610,
"token_str":"waiter"
},
{
"sequence":"[CLS] the man worked as a carpenter. [SEP]",
"score":0.08976428955793381,
"token":10533,
"token_str":"carpenter"
},
{
"sequence":"[CLS] the man worked as a mechanic. [SEP]",
"score":0.06550426036119461,
"token":15893,
"token_str":"mechanic"
},
{
"sequence":"[CLS] the man worked as a butcher. [SEP]",
"score":0.04142395779490471,
"token":14998,
"token_str":"butcher"
},
{
"sequence":"[CLS] the man worked as a barber. [SEP]",
"score":0.03680137172341347,
"token":13362,
"token_str":"barber"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a waitress. [SEP]",
"score":0.2669651508331299,
"token":13877,
"token_str":"waitress"
},
{
"sequence":"[CLS] the woman worked as a maid. [SEP]",
"score":0.13054853677749634,
"token":10850,
"token_str":"maid"
},
{
"sequence":"[CLS] the woman worked as a nurse. [SEP]",
"score":0.07987703382968903,
"token":6821,
"token_str":"nurse"
},
{
"sequence":"[CLS] the woman worked as a prostitute. [SEP]",
"score":0.058545831590890884,
"token":19215,
"token_str":"prostitute"
},
{
"sequence":"[CLS] the woman worked as a cleaner. [SEP]",
"score":0.03834161534905434,
"token":20133,
"token_str":"cleaner"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy
---------------------------------------- | :-------------: | :----------------:
BERT-Large, Uncased (Whole Word Masking) | 92.8/86.7 | 87.07
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | google-bert/bert-large-uncased-whole-word-masking | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| BERT large model (uncased) whole word masking
=============================================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
* 24-layer
* 1024 hidden dimension
* 16 attention heads
* 336M parameters.
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# BERT large model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1886913776397705,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a professional model. [SEP]",
'score': 0.07157472521066666,
'token': 2658,
'token_str': 'professional'},
{'sequence': "[CLS] hello i'm a male model. [SEP]",
'score': 0.04053466394543648,
'token': 3287,
'token_str': 'male'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.03891477733850479,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a fitness model. [SEP]",
'score': 0.03038121573626995,
'token': 10516,
'token_str': 'fitness'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
model = BertModel.from_pretrained("bert-large-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
model = TFBertModel.from_pretrained("bert-large-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a bartender. [SEP]',
'score': 0.10426565259695053,
'token': 15812,
'token_str': 'bartender'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.10232779383659363,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.06281787157058716,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a lawyer. [SEP]',
'score': 0.050936125218868256,
'token': 5160,
'token_str': 'lawyer'},
{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.041034240275621414,
'token': 10533,
'token_str': 'carpenter'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.28473711013793945,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.11336520314216614,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a bartender. [SEP]',
'score': 0.09574324637651443,
'token': 15812,
'token_str': 'bartender'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.06351090222597122,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a secretary. [SEP]',
'score': 0.048970773816108704,
'token': 3187,
'token_str': 'secretary'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token different from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
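For illustration only (this is not the original preprocessing code), a minimal sketch of the 80%/10%/10% rule above might look as follows; the function name and the use of Python's `random` module are choices made for this example:

```python
import random

def mask_tokens(token_ids, vocab_size, mask_token_id, mask_prob=0.15):
    """Toy version of the masking rule described above."""
    masked = list(token_ids)
    labels = [-100] * len(token_ids)          # unmasked positions are ignored by the loss
    for i, token_id in enumerate(token_ids):
        if random.random() < mask_prob:       # 15% of the tokens are selected
            labels[i] = token_id              # the model has to predict the original token
            roll = random.random()
            if roll < 0.8:                    # 80% of selected tokens become [MASK]
                masked[i] = mask_token_id
            elif roll < 0.9:                  # 10% become a random token
                masked[i] = random.randrange(vocab_size)
            # the remaining 10% are left unchanged
    return masked, labels
```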
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
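As a rough sketch of an equivalent setup with current PyTorch/transformers APIs (the original training used TPU-specific tooling; `AdamW` stands in here for Adam with weight decay, and the step counts mirror the description above):

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("bert-large-uncased")

# Adam-style optimizer: lr 1e-4, betas (0.9, 0.999), weight decay 0.01
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps, then linear decay over one million total steps
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)
```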
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy
---------------------------------------- | :-------------: | :----------------:
BERT-Large, Uncased (Original) | 91.0/84.3 | 86.05
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | google-bert/bert-large-uncased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #rust #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| BERT large model (uncased)
==========================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
* 24-layer
* 1024 hidden dimension
* 16 attention heads
* 336M parameters.
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token different from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \(\beta_{1} = 0.9\) and \(\beta_{2} = 0.999\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #bert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:",
"### BibTeX entry and citation info"
] |
fill-mask | transformers |
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-wikipedia-4gb")
camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-wikipedia-4gb", tokenizer="camembert/camembert-base-wikipedia-4gb")
results = camembert_fill_mask("Le camembert est un fromage de <mask>!")
# results
#[{'sequence': '<s> Le camembert est un fromage de chèvre!</s>', 'score': 0.4937814474105835, 'token': 19370},
#{'sequence': '<s> Le camembert est un fromage de brebis!</s>', 'score': 0.06255942583084106, 'token': 30616},
#{'sequence': '<s> Le camembert est un fromage de montagne!</s>', 'score': 0.04340197145938873, 'token': 2364},
# {'sequence': '<s> Le camembert est un fromage de Noël!</s>', 'score': 0.02823255956172943, 'token': 3236},
#{'sequence': '<s> Le camembert est un fromage de vache!</s>', 'score': 0.021357402205467224, 'token': 12329}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
# Convert tokens to their vocabulary ids and add the special start and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 221, 10, 10600, 14, 8952, 10540, 75, 1114, 6]
# NB: this can be done in one step: tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
#tensor([[[-0.0928, 0.0506, -0.0094, ..., -0.2388, 0.1177, -0.1302],
# [ 0.0662, 0.1030, -0.2355, ..., -0.4224, -0.0574, -0.2802],
# [-0.0729, 0.0547, 0.0192, ..., -0.1743, 0.0998, -0.2677],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-base-wikipedia-4gb", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings is a list with len(all_layer_embeddings) == 13 (input embedding layer + 12 self-attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 768])
#tensor([[[-0.0059, -0.0227, 0.0065, ..., -0.0770, 0.0369, 0.0095],
# [ 0.2838, -0.1531, -0.3642, ..., -0.0027, -0.8502, -0.7914],
# [-0.0073, -0.0338, -0.0011, ..., 0.0533, -0.0250, -0.0061],
# ...,
```
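Note that the snippets above assume an older `transformers` API in which models return plain tuples. With recent releases (v4 and later), outputs are returned as objects instead; a rough equivalent, assuming such a release, would be:

```python
import torch
from transformers import CamembertTokenizer, CamembertModel

tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-wikipedia-4gb")
camembert = CamembertModel.from_pretrained(
    "camembert/camembert-base-wikipedia-4gb", output_hidden_states=True
)
camembert.eval()  # disable dropout

encoded = tokenizer("J'aime le camembert !", return_tensors="pt")
with torch.no_grad():
    outputs = camembert(**encoded)

last_hidden = outputs.last_hidden_state   # shape: [1, sequence_length, 768]
all_layers = outputs.hidden_states        # tuple of 13 tensors (embeddings + 12 layers)
```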
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
| {"language": "fr", "license": "mit", "datasets": ["oscar"]} | almanach/camembert-base | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"arxiv:1911.03894",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1911.03894"
] | [
"fr"
] | TAGS
#transformers #pytorch #tf #safetensors #camembert #fill-mask #fr #dataset-oscar #arxiv-1911.03894 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| CamemBERT: a Tasty French Language Model
========================================
Introduction
------------
CamemBERT is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data and pretraining data source domains.
For further information or requests, please go to Camembert Website
Pre-trained models
------------------
How to use CamemBERT with HuggingFace
-------------------------------------
##### Load CamemBERT and its sub-word tokenizer :
##### Filling masks using pipeline
##### Extract contextual embedding features from Camembert output
##### Extract contextual embedding features from all Camembert layers
Authors
-------
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
If you use our work, please cite:
| [
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #camembert #fill-mask #fr #dataset-oscar #arxiv-1911.03894 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"##### Load CamemBERT and its sub-word tokenizer :",
"##### Filling masks using pipeline",
"##### Extract contextual embedding features from Camembert output",
"##### Extract contextual embedding features from all Camembert layers\n\n\nAuthors\n-------\n\n\nCamemBERT was trained and evaluated by Louis Martin\\*, Benjamin Muller\\*, Pedro Javier Ortiz Suárez\\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.\n\n\nIf you use our work, please cite:"
] |
text-generation | transformers |
# ctrl
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The CTRL model was proposed in [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia etc.). The model developers released a model card for CTRL, available [here](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf).
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write:
> The CTRL Language Model analyzed in this card generates text conditioned on control codes that specify domain, style, topics, dates, entities, relationships between entities, plot points, and task-related behavior.
- **Developed by:** See [associated paper](https://arxiv.org/abs/1909.05858) from Salesforce Research
- **Model type:** Transformer-based language model
- **Language(s) (NLP):** Primarily English, some German, Spanish, French
- **License:** [BSD 3-Clause](https://github.com/salesforce/ctrl/blob/master/LICENSE.txt); also see [Code of Conduct](https://github.com/salesforce/ctrl)
- **Related Models:** More information needed
- **Parent Model:** More information needed
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1909.05858)
- [GitHub repo](https://github.com/salesforce/ctrl)
- [Developer Model Card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf)
- [Blog post](https://blog.salesforceairesearch.com/introducing-a-conditional-transformer-language-model-for-controllable-generation/)
# Uses
## Direct Use
The model is a language model. The model can be used for text generation.
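As a sketch of what this can look like in practice (this example is not from the developers' model card; the prompt, the control code and the generation settings are illustrative choices only):

```python
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")

# The prompt starts with a control code (here "Wikipedia") that steers the domain of the output.
inputs = tokenizer("Wikipedia Salesforce Research is", return_tensors="pt")

# The CTRL paper suggests a repetition penalty (around 1.2) when decoding greedily.
outputs = model.generate(**inputs, max_new_tokens=40,
                         repetition_penalty=1.2, do_sample=False)
print(tokenizer.decode(outputs[0]))
```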
## Downstream Use
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write that the primary intended users are general audiences and NLP Researchers, and that the primary intended uses are:
> 1. Generating artificial text in collaboration with a human, including but not limited to:
> - Creative writing
> - Automating repetitive writing tasks
> - Formatting specific text types
> - Creating contextualized marketing materials
> 2. Improvement of other NLP applications through fine-tuning (on another task or other data, e.g. fine-tuning CTRL to learn new kinds of language like product descriptions)
> 3. Enhancement in the field of natural language understanding to push towards a better understanding of artificial text generation, including how to detect it and work toward control, understanding, and potentially combating potentially negative consequences of such models.
## Out-of-Scope Use
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write:
> - CTRL should not be used for generating artificial text without collaboration with a human.
> - It should not be used to make normative or prescriptive claims.
> - This software should not be used to promote or profit from:
> - violence, hate, and division;
> - environmental destruction;
> - abuse of human rights; or
> - the destruction of people's physical and mental health.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write:
> We recognize the potential for misuse or abuse, including use by bad actors who could manipulate the system to act maliciously and generate text to influence decision-making in political, economic, and social settings. False attribution could also harm individuals, organizations, or other entities. To address these concerns, the model was evaluated internally as well as externally by third parties, including the Partnership on AI, prior to release.
> To mitigate potential misuse to the extent possible, we stripped out all detectable training data from undesirable sources. We then redteamed the model and found that negative utterances were often placed in contexts that made them identifiable as such. For example, when using the ‘News’ control code, hate speech could be embedded as part of an apology (e.g. “the politician apologized for saying [insert hateful statement]”), implying that this type of speech was negative. By pre-selecting the available control codes (omitting, for example, Instagram and Twitter from the available domains), we are able to limit the potential for misuse.
> In releasing our model, we hope to put it into the hands of researchers and prosocial actors so that they can work to control, understand, and potentially combat the negative consequences of such models. We hope that research into detecting fake news and model-generated content of all kinds will be pushed forward by CTRL. It is our belief that these models should become a common tool so researchers can design methods to guard against malicious use and so the public becomes familiar with their existence and patterns of behavior.
See the [associated paper](https://arxiv.org/pdf/1909.05858.pdf) for further discussions about the ethics of LLMs.
## Recommendations
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write:
> - A recommendation to monitor and detect use will be implemented through the development of a model that will identify CTRL-generated text.
> - A second recommendation to further screen the input into and output from the model will be implemented through the addition of a check in the CTRL interface to prohibit the insertion into the model of certain negative inputs, which will help control the output that can be generated.
> - The model is trained on a limited number of languages: primarily English and some German, Spanish, French. A recommendation for a future area of research is to train the model on more languages.
See the [CTRL-detector GitHub repo](https://github.com/salesforce/ctrl-detector) for more on the detector model.
# Training
## Training Data
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write:
> This model is trained on 140 GB of text drawn from a variety of domains: Wikipedia (English, German, Spanish, and French), Project Gutenberg, submissions from 45 subreddits, OpenWebText, a large collection of news data, Amazon Reviews, Europarl and UN data from WMT (En-De, En-Es, En-Fr), question-answer pairs (no context documents) from ELI5, and the MRQA shared task, which includes Stanford Question Answering Dataset, NewsQA, TriviaQA, SearchQA, HotpotQA, and Natural Questions. See the paper for the full list of training data.
## Training Procedure
### Preprocessing
In the [associated paper](https://arxiv.org/pdf/1909.05858.pdf) the developers write:
> We learn BPE (Sennrich et al., 2015) codes and tokenize the data using fastBPE4, but we use a large vocabulary of roughly 250K tokens. This includes the sub-word tokens necessary to mitigate problems with rare words, but it also reduces the average number of tokens required to generate long text by including most common words. We use English Wikipedia and a 5% split of our collected OpenWebText data for learning BPE codes. We also introduce an unknown token so that during preprocessing we can filter out sequences that contain more than 2 unknown tokens. This, along with the compressed storage for efficient training (TFRecords) (Abadi et al., 2016), reduces our training data to 140 GB from the total 180 GB collected.
See the paper for links, references, and further details.
### Training
In the [associated paper](https://arxiv.org/pdf/1909.05858.pdf) the developers write:
> CTRL has model dimension d = 1280, inner dimension f = 8192, 48 layers, and 16 heads per layer. Dropout with probability 0.1 follows the residual connections in each layer. Token embeddings were tied with the final output embedding layer (Inan et al., 2016; Press & Wolf, 2016).
See the paper for links, references, and further details.
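For reference, these dimensions correspond to the defaults of the `CTRLConfig` class in `transformers` (the parameter names below follow that library and should be checked against the current documentation):

```python
from transformers import CTRLConfig

# Architecture hyperparameters as described above, expressed as a transformers config.
config = CTRLConfig(
    n_embd=1280,   # model dimension d
    dff=8192,      # inner (feed-forward) dimension f
    n_layer=48,    # number of transformer layers
    n_head=16,     # attention heads per layer
)
print(config)
```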
# Evaluation
## Testing Data, Factors & Metrics
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write that model performance measures are:
> Performance evaluated on qualitative judgments by humans as to whether the control codes lead to text generated in the desired domain
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). Details are pulled from the [associated paper](https://arxiv.org/pdf/1909.05858.pdf).
- **Hardware Type:** TPU v3 Pod
- **Hours used:** Approximately 336 hours (2 weeks)
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
In the [associated paper](https://arxiv.org/pdf/1909.05858.pdf) the developers write:
> CTRL was implemented in TensorFlow (Abadi et al., 2016) and trained with a global batch size of 1024 distributed across 256 cores of a Cloud TPU v3 Pod for 800k iterations. Training took approximately 2 weeks using Adagrad (Duchi et al., 2011) with a linear warmup from 0 to 0.05 over 25k steps. The norm of gradients were clipped to 0.25 as in (Merity et al., 2017). Learning rate decay was not necessary due to the monotonic nature of the Adagrad accumulator. We compared to the Adam optimizer (Kingma & Ba, 2014) while training smaller models, but we noticed comparable convergence rates and significant memory savings with Adagrad. We also experimented with explicit memory-saving optimizers including SM3 (Anil et al., 2019), Adafactor (Shazeer & Stern, 2018), and NovoGrad (Ginsburg et al., 2019) with mixed results.
See the paper for links, references, and further details.
# Citation
**BibTeX:**
```bibtex
@article{keskarCTRL2019,
title={{CTRL - A Conditional Transformer Language Model for Controllable Generation}},
author={Keskar, Nitish Shirish and McCann, Bryan and Varshney, Lav and Xiong, Caiming and Socher, Richard},
journal={arXiv preprint arXiv:1909.05858},
year={2019}
}
```
**APA:**
- Keskar, N. S., McCann, B., Varshney, L. R., Xiong, C., & Socher, R. (2019). Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.
# Model Card Authors
This model card was written by the team at Hugging Face, referencing the [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf) released by the developers.
# How to Get Started with the Model
Use the code below to get started with the model. See the [Hugging Face ctrl docs](https://huggingface.co/docs/transformers/model_doc/ctrl) for more information.
<details>
<summary> Click to expand </summary>
```python
>>> from transformers import CTRLTokenizer, CTRLModel
>>> import torch
>>> tokenizer = CTRLTokenizer.from_pretrained("ctrl")
>>> model = CTRLModel.from_pretrained("ctrl")
>>> # CTRL was trained with control codes as the first token
>>> inputs = tokenizer("Opinion My dog is cute", return_tensors="pt")
>>> assert inputs["input_ids"][0, 0].item() in tokenizer.control_codes.values()
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
```
</details> | {"language": "en", "license": "bsd-3-clause", "pipeline_tag": "text-generation"} | Salesforce/ctrl | null | [
"transformers",
"pytorch",
"tf",
"ctrl",
"text-generation",
"en",
"arxiv:1909.05858",
"arxiv:1910.09700",
"license:bsd-3-clause",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1909.05858",
"1910.09700"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #ctrl #text-generation #en #arxiv-1909.05858 #arxiv-1910.09700 #license-bsd-3-clause #endpoints_compatible #has_space #region-us
|
# ctrl
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
# Model Details
## Model Description
The CTRL model was proposed in CTRL: A Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia etc.). The model developers released a model card for CTRL, available here.
In their model card, the developers write:
> The CTRL Language Model analyzed in this card generates text conditioned on control codes that specify domain, style, topics, dates, entities, relationships between entities, plot points, and task-related behavior.
- Developed by: See associated paper from Salesforce Research
- Model type: Transformer-based language model
- Language(s) (NLP): Primarily English, some German, Spanish, French
- License: BSD 3-Clause; also see Code of Conduct
- Related Models: More information needed
- Parent Model: More information needed
- Resources for more information:
- Associated paper
- GitHub repo
- Developer Model Card
- Blog post
# Uses
## Direct Use
The model is a language model. The model can be used for text generation.
## Downstream Use
In their model card, the developers write that the primary intended users are general audiences and NLP Researchers, and that the primary intended uses are:
> 1. Generating artificial text in collaboration with a human, including but not limited to:
> - Creative writing
> - Automating repetitive writing tasks
> - Formatting specific text types
> - Creating contextualized marketing materials
> 2. Improvement of other NLP applications through fine-tuning (on another task or other data, e.g. fine-tuning CTRL to learn new kinds of language like product descriptions)
> 3. Enhancement in the field of natural language understanding to push towards a better understanding of artificial text generation, including how to detect it and work toward control, understanding, and potentially combating potentially negative consequences of such models.
## Out-of-Scope Use
In their model card, the developers write:
> - CTRL should not be used for generating artificial text without collaboration with a human.
> - It should not be used to make normative or prescriptive claims.
> - This software should not be used to promote or profit from:
> - violence, hate, and division;
> - environmental destruction;
> - abuse of human rights; or
> - the destruction of people's physical and mental health.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
In their model card, the developers write:
> We recognize the potential for misuse or abuse, including use by bad actors who could manipulate the system to act maliciously and generate text to influence decision-making in political, economic, and social settings. False attribution could also harm individuals, organizations, or other entities. To address these concerns, the model was evaluated internally as well as externally by third parties, including the Partnership on AI, prior to release.
> To mitigate potential misuse to the extent possible, we stripped out all detectable training data from undesirable sources. We then redteamed the model and found that negative utterances were often placed in contexts that made them identifiable as such. For example, when using the ‘News’ control code, hate speech could be embedded as part of an apology (e.g. “the politician apologized for saying [insert hateful statement]”), implying that this type of speech was negative. By pre-selecting the available control codes (omitting, for example, Instagram and Twitter from the available domains), we are able to limit the potential for misuse.
> In releasing our model, we hope to put it into the hands of researchers and prosocial actors so that they can work to control, understand, and potentially combat the negative consequences of such models. We hope that research into detecting fake news and model-generated content of all kinds will be pushed forward by CTRL. It is our belief that these models should become a common tool so researchers can design methods to guard against malicious use and so the public becomes familiar with their existence and patterns of behavior.
See the associated paper for further discussions about the ethics of LLMs.
## Recommendations
In their model card, the developers write:
> - A recommendation to monitor and detect use will be implemented through the development of a model that will identify CTRL-generated text.
> - A second recommendation to further screen the input into and output from the model will be implemented through the addition of a check in the CTRL interface to prohibit the insertion into the model of certain negative inputs, which will help control the output that can be generated.
> - The model is trained on a limited number of languages: primarily English and some German, Spanish, French. A recommendation for a future area of research is to train the model on more languages.
See the CTRL-detector GitHub repo for more on the detector model.
# Training
## Training Data
In their model card, the developers write:
> This model is trained on 140 GB of text drawn from a variety of domains: Wikipedia (English, German, Spanish, and French), Project Gutenberg, submissions from 45 subreddits, OpenWebText, a large collection of news data, Amazon Reviews, Europarl and UN data from WMT (En-De, En-Es, En-Fr), question-answer pairs (no context documents) from ELI5, and the MRQA shared task, which includes Stanford Question Answering Dataset, NewsQA, TriviaQA, SearchQA, HotpotQA, and Natural Questions. See the paper for the full list of training data.
## Training Procedure
### Preprocessing
In the associated paper the developers write:
> We learn BPE (Sennrich et al., 2015) codes and tokenize the data using fastBPE4, but we use a large vocabulary of roughly 250K tokens. This includes the sub-word tokens necessary to mitigate problems with rare words, but it also reduces the average number of tokens required to generate long text by including most common words. We use English Wikipedia and a 5% split of our collected OpenWebText data for learning BPE codes. We also introduce an unknown token so that during preprocessing we can filter out sequences that contain more than 2 unknown tokens. This, along with the compressed storage for efficient training (TFRecords) (Abadi et al., 2016), reduces our training data to 140 GB from the total 180 GB collected.
See the paper for links, references, and further details.
### Training
In the associated paper the developers write:
> CTRL has model dimension d = 1280, inner dimension f = 8192, 48 layers, and 16 heads per layer. Dropout with probability 0.1 follows the residual connections in each layer. Token embeddings were tied with the final output embedding layer (Inan et al., 2016; Press & Wolf, 2016).
See the paper for links, references, and further details.
# Evaluation
## Testing Data, Factors & Metrics
In their model card, the developers write that model performance measures are:
> Performance evaluated on qualitative judgments by humans as to whether the control codes lead to text generated in the desired domain
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). Details are pulled from the associated paper.
- Hardware Type: TPU v3 Pod
- Hours used: Approximately 336 hours (2 weeks)
- Cloud Provider: GCP
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications
In the associated paper the developers write:
> CTRL was implemented in TensorFlow (Abadi et al., 2016) and trained with a global batch size of 1024 distributed across 256 cores of a Cloud TPU v3 Pod for 800k iterations. Training took approximately 2 weeks using Adagrad (Duchi et al., 2011) with a linear warmup from 0 to 0.05 over 25k steps. The norm of gradients were clipped to 0.25 as in (Merity et al., 2017). Learning rate decay was not necessary due to the monotonic nature of the Adagrad accumulator. We compared to the Adam optimizer (Kingma & Ba, 2014) while training smaller models, but we noticed comparable convergence rates and significant memory savings with Adagrad. We also experimented with explicit memory-saving optimizers including SM3 (Anil et al., 2019), Adafactor (Shazeer & Stern, 2018), and NovoGrad (Ginsburg et al., 2019) with mixed results.
See the paper for links, references, and further details.
BibTeX:
APA:
- Keskar, N. S., McCann, B., Varshney, L. R., Xiong, C., & Socher, R. (2019). Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.
# Model Card Authors
This model card was written by the team at Hugging Face, referencing the model card released by the developers.
# How to Get Started with the Model
Use the code below to get started with the model. See the Hugging Face ctrl docs for more information.
<details>
<summary> Click to expand </summary>
</details> | [
"# ctrl",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe CTRL model was proposed in CTRL: A Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia etc.). The model developers released a model card for CTRL, available here.\n\nIn their model card, the developers write: \n\n> The CTRL Language Model analyzed in this card generates text conditioned on control codes that specify domain, style, topics, dates, entities, relationships between entities, plot points, and task-related behavior.\n\n- Developed by: See associated paper from Salesforce Research\n- Model type: Transformer-based language model\n- Language(s) (NLP): Primarily English, some German, Spanish, French\n- License: BSD 3-Clause; also see Code of Conduct\n- Related Models: More information needed\n - Parent Model: More information needed\n- Resources for more information: \n - Associated paper\n - GitHub repo\n - Developer Model Card\n - Blog post",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for text generation.",
"## Downstream Use\n\nIn their model card, the developers write that the primary intended users are general audiences and NLP Researchers, and that the primary intended uses are:\n\n> 1. Generating artificial text in collaboration with a human, including but not limited to:\n> - Creative writing\n> - Automating repetitive writing tasks\n> - Formatting specific text types\n> - Creating contextualized marketing materials\n> 2. Improvement of other NLP applications through fine-tuning (on another task or other data, e.g. fine-tuning CTRL to learn new kinds of language like product descriptions)\n> 3. Enhancement in the field of natural language understanding to push towards a better understanding of artificial text generation, including how to detect it and work toward control, understanding, and potentially combating potentially negative consequences of such models.",
"## Out-of-Scope Use\n\nIn their model card, the developers write: \n\n> - CTRL should not be used for generating artificial text without collaboration with a human.\n> - It should not be used to make normative or prescriptive claims.\n> - This software should not be used to promote or profit from:\n> - violence, hate, and division;\n> - environmental destruction;\n> - abuse of human rights; or\n> - the destruction of people's physical and mental health.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\nIn their model card, the developers write: \n\n> We recognize the potential for misuse or abuse, including use by bad actors who could manipulate the system to act maliciously and generate text to influence decision-making in political, economic, and social settings. False attribution could also harm individuals, organizations, or other entities. To address these concerns, the model was evaluated internally as well as externally by third parties, including the Partnership on AI, prior to release.\n\n> To mitigate potential misuse to the extent possible, we stripped out all detectable training data from undesirable sources. We then redteamed the model and found that negative utterances were often placed in contexts that made them identifiable as such. For example, when using the ‘News’ control code, hate speech could be embedded as part of an apology (e.g. “the politician apologized for saying [insert hateful statement]”), implying that this type of speech was negative. By pre-selecting the available control codes (omitting, for example, Instagram and Twitter from the available domains), we are able to limit the potential for misuse.\n\n> In releasing our model, we hope to put it into the hands of researchers and prosocial actors so that they can work to control, understand, and potentially combat the negative consequences of such models. We hope that research into detecting fake news and model-generated content of all kinds will be pushed forward by CTRL. It is our belief that these models should become a common tool so researchers can design methods to guard against malicious use and so the public becomes familiar with their existence and patterns of behavior.\n\nSee the associated paper for further discussions about the ethics of LLMs.",
"## Recommendations\n\nIn their model card, the developers write: \n\n> - A recommendation to monitor and detect use will be implemented through the development of a model that will identify CTRLgenerated text.\n> - A second recommendation to further screen the input into and output from the model will be implemented through the addition of a check in the CTRL interface to prohibit the insertion into the model of certain negative inputs, which will help control the output that can be generated.\n> - The model is trained on a limited number of languages: primarily English and some German, Spanish, French. A recommendation for a future area of research is to train the model on more languages.\n\nSee the CTRL-detector GitHub repo for more on the detector model.",
"# Training",
"## Training Data\n\nIn their model card, the developers write: \n\n> This model is trained on 140 GB of text drawn from a variety of domains: Wikipedia (English, German, Spanish, and French), Project Gutenberg, submissions from 45 subreddits, OpenWebText, a large collection of news data, Amazon Reviews, Europarl and UN data from WMT (En-De, En-Es, En-Fr), question-answer pairs (no context documents) from ELI5, and the MRQA shared task, which includes Stanford Question Answering Dataset, NewsQA, TriviaQA, SearchQA, HotpotQA, and Natural Questions. See the paper for the full list of training data.",
"## Training Procedure",
"### Preprocessing\n\nIn the associated paper the developers write: \n\n> We learn BPE (Sennrich et al., 2015) codes and tokenize the data using fastBPE4, but we use a large vocabulary of roughly 250K tokens. This includes the sub-word tokens necessary to mitigate problems with rare words, but it also reduces the average number of tokens required to generate long text by including most common words. We use English Wikipedia and a 5% split of our collected OpenWebText data for learning BPE codes. We also introduce an unknown token so that during preprocessing we can filter out sequences that contain more than 2 unknown tokens. This, along with the compressed storage for efficient training (TFRecords) (Abadi et al., 2016), reduces our training data to 140 GB from the total 180 GB collected.\n\nSee the paper for links, references, and further details.",
"### Training\n\nIn the associated paper the developers write: \n\n> CTRL has model dimension d = 1280, inner dimension f = 8192, 48 layers, and 16 heads per layer. Dropout with probability 0.1 follows the residual connections in each layer. Token embeddings were tied with the final output embedding layer (Inan et al., 2016; Press & Wolf, 2016).\n\nSee the paper for links, references, and further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nIn their model card, the developers write that model performance measures are: \n\n> Performance evaluated on qualitative judgments by humans as to whether the control codes lead to text generated in the desired domain",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). Details are pulled from the associated paper.\n\n- Hardware Type: TPU v3 Pod\n- Hours used: Approximately 336 hours (2 weeks)\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nIn the associated paper the developers write: \n\n> CTRL was implemented in TensorFlow (Abadi et al., 2016) and trained with a global batch size of 1024 distributed across 256 cores of a Cloud TPU v3 Pod for 800k iterations. Training took approximately 2 weeks using Adagrad (Duchi et al., 2011) with a linear warmup from 0 to 0.05 over 25k steps. The norm of gradients were clipped to 0.25 as in (Merity et al., 2017). Learning rate decay was not necessary due to the monotonic nature of the Adagrad accumulator. We compared to the Adam optimizer (Kingma & Ba, 2014) while training smaller models, but we noticed comparable convergence rates and significant memory savings with Adagrad. We also experimented with explicit memory-saving optimizers including SM3 (Anil et al., 2019), Adafactor (Shazeer & Stern, 2018), and NovoGrad (Ginsburg et al., 2019) with mixed results.\n\nSee the paper for links, references, and further details.\n\nBibTeX:\n\n\n\nAPA:\n- Keskar, N. S., McCann, B., Varshney, L. R., Xiong, C., & Socher, R. (2019). Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face, referencing the model card released by the developers.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model. See the Hugging Face ctrl docs for more information.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] | [
"TAGS\n#transformers #pytorch #tf #ctrl #text-generation #en #arxiv-1909.05858 #arxiv-1910.09700 #license-bsd-3-clause #endpoints_compatible #has_space #region-us \n",
"# ctrl",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe CTRL model was proposed in CTRL: A Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia etc.). The model developers released a model card for CTRL, available here.\n\nIn their model card, the developers write: \n\n> The CTRL Language Model analyzed in this card generates text conditioned on control codes that specify domain, style, topics, dates, entities, relationships between entities, plot points, and task-related behavior.\n\n- Developed by: See associated paper from Salesforce Research\n- Model type: Transformer-based language model\n- Language(s) (NLP): Primarily English, some German, Spanish, French\n- License: BSD 3-Clause; also see Code of Conduct\n- Related Models: More information needed\n - Parent Model: More information needed\n- Resources for more information: \n - Associated paper\n - GitHub repo\n - Developer Model Card\n - Blog post",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for text generation.",
"## Downstream Use\n\nIn their model card, the developers write that the primary intended users are general audiences and NLP Researchers, and that the primary intended uses are:\n\n> 1. Generating artificial text in collaboration with a human, including but not limited to:\n> - Creative writing\n> - Automating repetitive writing tasks\n> - Formatting specific text types\n> - Creating contextualized marketing materials\n> 2. Improvement of other NLP applications through fine-tuning (on another task or other data, e.g. fine-tuning CTRL to learn new kinds of language like product descriptions)\n> 3. Enhancement in the field of natural language understanding to push towards a better understanding of artificial text generation, including how to detect it and work toward control, understanding, and potentially combating potentially negative consequences of such models.",
"## Out-of-Scope Use\n\nIn their model card, the developers write: \n\n> - CTRL should not be used for generating artificial text without collaboration with a human.\n> - It should not be used to make normative or prescriptive claims.\n> - This software should not be used to promote or profit from:\n> - violence, hate, and division;\n> - environmental destruction;\n> - abuse of human rights; or\n> - the destruction of people's physical and mental health.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\nIn their model card, the developers write: \n\n> We recognize the potential for misuse or abuse, including use by bad actors who could manipulate the system to act maliciously and generate text to influence decision-making in political, economic, and social settings. False attribution could also harm individuals, organizations, or other entities. To address these concerns, the model was evaluated internally as well as externally by third parties, including the Partnership on AI, prior to release.\n\n> To mitigate potential misuse to the extent possible, we stripped out all detectable training data from undesirable sources. We then redteamed the model and found that negative utterances were often placed in contexts that made them identifiable as such. For example, when using the ‘News’ control code, hate speech could be embedded as part of an apology (e.g. “the politician apologized for saying [insert hateful statement]”), implying that this type of speech was negative. By pre-selecting the available control codes (omitting, for example, Instagram and Twitter from the available domains), we are able to limit the potential for misuse.\n\n> In releasing our model, we hope to put it into the hands of researchers and prosocial actors so that they can work to control, understand, and potentially combat the negative consequences of such models. We hope that research into detecting fake news and model-generated content of all kinds will be pushed forward by CTRL. It is our belief that these models should become a common tool so researchers can design methods to guard against malicious use and so the public becomes familiar with their existence and patterns of behavior.\n\nSee the associated paper for further discussions about the ethics of LLMs.",
"## Recommendations\n\nIn their model card, the developers write: \n\n> - A recommendation to monitor and detect use will be implemented through the development of a model that will identify CTRLgenerated text.\n> - A second recommendation to further screen the input into and output from the model will be implemented through the addition of a check in the CTRL interface to prohibit the insertion into the model of certain negative inputs, which will help control the output that can be generated.\n> - The model is trained on a limited number of languages: primarily English and some German, Spanish, French. A recommendation for a future area of research is to train the model on more languages.\n\nSee the CTRL-detector GitHub repo for more on the detector model.",
"# Training",
"## Training Data\n\nIn their model card, the developers write: \n\n> This model is trained on 140 GB of text drawn from a variety of domains: Wikipedia (English, German, Spanish, and French), Project Gutenberg, submissions from 45 subreddits, OpenWebText, a large collection of news data, Amazon Reviews, Europarl and UN data from WMT (En-De, En-Es, En-Fr), question-answer pairs (no context documents) from ELI5, and the MRQA shared task, which includes Stanford Question Answering Dataset, NewsQA, TriviaQA, SearchQA, HotpotQA, and Natural Questions. See the paper for the full list of training data.",
"## Training Procedure",
"### Preprocessing\n\nIn the associated paper the developers write: \n\n> We learn BPE (Sennrich et al., 2015) codes and tokenize the data using fastBPE4, but we use a large vocabulary of roughly 250K tokens. This includes the sub-word tokens necessary to mitigate problems with rare words, but it also reduces the average number of tokens required to generate long text by including most common words. We use English Wikipedia and a 5% split of our collected OpenWebText data for learning BPE codes. We also introduce an unknown token so that during preprocessing we can filter out sequences that contain more than 2 unknown tokens. This, along with the compressed storage for efficient training (TFRecords) (Abadi et al., 2016), reduces our training data to 140 GB from the total 180 GB collected.\n\nSee the paper for links, references, and further details.",
"### Training\n\nIn the associated paper the developers write: \n\n> CTRL has model dimension d = 1280, inner dimension f = 8192, 48 layers, and 16 heads per layer. Dropout with probability 0.1 follows the residual connections in each layer. Token embeddings were tied with the final output embedding layer (Inan et al., 2016; Press & Wolf, 2016).\n\nSee the paper for links, references, and further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nIn their model card, the developers write that model performance measures are: \n\n> Performance evaluated on qualitative judgments by humans as to whether the control codes lead to text generated in the desired domain",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). Details are pulled from the associated paper.\n\n- Hardware Type: TPU v3 Pod\n- Hours used: Approximately 336 hours (2 weeks)\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nIn the associated paper the developers write: \n\n> CTRL was implemented in TensorFlow (Abadi et al., 2016) and trained with a global batch size of 1024 distributed across 256 cores of a Cloud TPU v3 Pod for 800k iterations. Training took approximately 2 weeks using Adagrad (Duchi et al., 2011) with a linear warmup from 0 to 0.05 over 25k steps. The norm of gradients were clipped to 0.25 as in (Merity et al., 2017). Learning rate decay was not necessary due to the monotonic nature of the Adagrad accumulator. We compared to the Adam optimizer (Kingma & Ba, 2014) while training smaller models, but we noticed comparable convergence rates and significant memory savings with Adagrad. We also experimented with explicit memory-saving optimizers including SM3 (Anil et al., 2019), Adafactor (Shazeer & Stern, 2018), and NovoGrad (Ginsburg et al., 2019) with mixed results.\n\nSee the paper for links, references, and further details.\n\nBibTeX:\n\n\n\nAPA:\n- Keskar, N. S., McCann, B., Varshney, L. R., Xiong, C., & Socher, R. (2019). Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face, referencing the model card released by the developers.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model. See the Hugging Face ctrl docs for more information.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] |
question-answering | transformers |
# DistilBERT base cased distilled SQuAD
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than *bert-base-uncased* and runs 60% faster, while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark.
This model is a fine-tuned checkpoint of [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased), trained with (a second step of) knowledge distillation on [SQuAD v1.1](https://huggingface.co/datasets/squad).
- **Developed by:** Hugging Face
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** Apache 2.0
- **Related Models:** [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased)
- **Resources for more information:**
- See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including this model)
- See [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure
## How to Get Started with the Model
Use the code below to get started with the model.
```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-cased-distilled-squad')
>>> context = r"""
... Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
... question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
... a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script.
... """
>>> result = question_answerer(question="What is a good example of a question answering dataset?", context=context)
>>> print(
... f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
...)
Answer: 'SQuAD dataset', score: 0.5152, start: 147, end: 160
```
Here is how to use this model in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering
import torch

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased-distilled-squad')
model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-cased-distilled-squad')

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the most likely answer span from the start/end logits
answer_start_index = torch.argmax(outputs.start_logits)
answer_end_index = torch.argmax(outputs.end_logits)
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
print(tokenizer.decode(predict_answer_tokens))
```
And in TensorFlow:
```python
from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering
import tensorflow as tf
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```
## Uses
This model can be used for question answering.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to produce factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-cased-distilled-squad')
>>> context = r"""
... Alice is sitting on the bench. Bob is sitting next to her.
... """
>>> result = question_answerer(question="Who is the CEO?", context=context)
>>> print(
... f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
...)
Answer: 'Bob', score: 0.7527, start: 32, end: 35
```
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The [distilbert-base-cased model](https://huggingface.co/distilbert-base-cased) was trained using the same data as the [distilbert-base-uncased model](https://huggingface.co/distilbert-base-uncased), which describes its training data as:
> DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
To learn more about the SQuAD v1.1 dataset, see the [SQuAD v1.1 data card](https://huggingface.co/datasets/squad).
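If you want to take a quick look at the SQuAD v1.1 data yourself, a minimal sketch (assuming the `datasets` library is installed) is:

```python
# Minimal sketch: inspect a SQuAD v1.1 validation example with the `datasets` library
from datasets import load_dataset

squad = load_dataset("squad", split="validation")
example = squad[0]
print(example["question"])
print(example["answers"])
```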
#### Training Procedure
##### Preprocessing
See the [distilbert-base-cased model card](https://huggingface.co/distilbert-base-cased) for further details.
##### Pretraining
See the [distilbert-base-cased model card](https://huggingface.co/distilbert-base-cased) for further details.
## Evaluation
As discussed in the [model repository](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md):
> This model reaches a F1 score of 87.1 on the [SQuAD v1.1] dev set (for comparison, BERT bert-base-cased version reaches a F1 score of 88.7).
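For reference, SQuAD-style exact match and F1 scores can be computed with the `evaluate` library; the snippet below is an illustrative sketch with made-up prediction/reference entries, not the script used to produce the reported numbers.

```python
# Illustrative sketch: scoring predictions with the SQuAD metric
import evaluate

squad_metric = evaluate.load("squad")
predictions = [{"id": "1", "prediction_text": "SQuAD dataset"}]
references = [{"id": "1", "answers": {"text": ["SQuAD dataset"], "answer_start": [147]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```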
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/pdf/1910.01108.pdf). Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD.
- **Hardware Type:** 8 16GB V100 GPUs
- **Hours used:** 90 hours
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://arxiv.org/abs/1910.01108) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@inproceedings{sanh2019distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
booktitle={NeurIPS EMC^2 Workshop},
year={2019}
}
```
APA:
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
## Model Card Authors
This model card was written by the Hugging Face team.
| {"language": "en", "license": "apache-2.0", "datasets": ["squad"], "metrics": ["squad"], "model-index": [{"name": "distilbert-base-cased-distilled-squad", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 79.5998, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTViZDA2Y2E2NjUyMjNjYjkzNTUzODc5OTk2OTNkYjQxMDRmMDhlYjdmYWJjYWQ2N2RlNzY1YmI3OWY1NmRhOSIsInZlcnNpb24iOjF9.ZJHhboAMwsi3pqU-B-XKRCYP_tzpCRb8pEjGr2Oc-TteZeoWHI8CXcpDxugfC3f7d_oBcKWLzh3CClQxBW1iAQ"}, {"type": "f1", "value": 86.9965, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWZlMzY2MmE1NDNhOGNjNWRmODg0YjQ2Zjk5MjUzZDQ2MDYxOTBlMTNhNzQ4NTA2NjRmNDU3MGIzMTYwMmUyOSIsInZlcnNpb24iOjF9.z0ZDir87aT7UEmUeDm8Uw0oUdAqzlBz343gwnsQP3YLfGsaHe-jGlhco0Z7ISUd9NokyCiJCRc4NNxJQ83IuCw"}]}]}]} | distilbert/distilbert-base-cased-distilled-squad | null | [
"transformers",
"pytorch",
"tf",
"rust",
"safetensors",
"openvino",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.01108",
"1910.09700"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #rust #safetensors #openvino #distilbert #question-answering #en #dataset-squad #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# DistilBERT base cased distilled SQuAD
## Table of Contents
- Model Details
- How To Get Started With the Model
- Uses
- Risks, Limitations and Biases
- Training
- Evaluation
- Environmental Impact
- Technical Specifications
- Citation Information
- Model Card Authors
## Model Details
Model Description: The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, adistilled version of BERT, and the paper DistilBERT, adistilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% less parameters than *bert-base-uncased*, runs 60% faster while preserving over 95% of BERT's performances as measured on the GLUE language understanding benchmark.
This model is a fine-tune checkpoint of DistilBERT-base-cased, fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1.
- Developed by: Hugging Face
- Model Type: Transformer-based language model
- Language(s): English
- License: Apache 2.0
- Related Models: DistilBERT-base-cased
- Resources for more information:
- See this repository for more about Distil\* (a class of compressed models including this model)
- See Sanh et al. (2019) for more information about knowledge distillation and the training procedure
## How to Get Started with the Model
Use the code below to get started with the model.
Here is how to use this model in PyTorch:
And in TensorFlow:
## Uses
This model can be used for question answering.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The distilbert-base-cased model was trained using the same data as the distilbert-base-uncased model. The distilbert-base-uncased model model describes it's training data as:
> DistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).
To learn more about the SQuAD v1.1 dataset, see the SQuAD v1.1 data card.
#### Training Procedure
##### Preprocessing
See the distilbert-base-cased model card for further details.
##### Pretraining
See the distilbert-base-cased model card for further details.
## Evaluation
As discussed in the model repository
> This model reaches a F1 score of 87.1 on the [SQuAD v1.1] dev set (for comparison, BERT bert-base-cased version reaches a F1 score of 88.7).
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD.
- Hardware Type: 8 16GB V100 GPUs
- Hours used: 90 hours
- Cloud Provider: Unknown
- Compute Region: Unknown
- Carbon Emitted: Unknown
## Technical Specifications
See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.
APA:
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
## Model Card Authors
This model card was written by the Hugging Face team.
| [
"# DistilBERT base cased distilled SQuAD",
"## Table of Contents\n- Model Details\n- How To Get Started With the Model\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- Technical Specifications\n- Citation Information\n- Model Card Authors",
"## Model Details\n\nModel Description: The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, adistilled version of BERT, and the paper DistilBERT, adistilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% less parameters than *bert-base-uncased*, runs 60% faster while preserving over 95% of BERT's performances as measured on the GLUE language understanding benchmark.\n\nThis model is a fine-tune checkpoint of DistilBERT-base-cased, fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1. \n\n- Developed by: Hugging Face\n- Model Type: Transformer-based language model\n- Language(s): English \n- License: Apache 2.0\n- Related Models: DistilBERT-base-cased\n- Resources for more information:\n - See this repository for more about Distil\\* (a class of compressed models including this model)\n - See Sanh et al. (2019) for more information about knowledge distillation and the training procedure",
"## How to Get Started with the Model \n\nUse the code below to get started with the model. \n\n\n\nHere is how to use this model in PyTorch:\n\n\n\nAnd in TensorFlow:",
"## Uses\n\nThis model can be used for question answering.",
"#### Misuse and Out-of-scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.",
"## Risks, Limitations and Biases\n\nCONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"## Training",
"#### Training Data\n\nThe distilbert-base-cased model was trained using the same data as the distilbert-base-uncased model. The distilbert-base-uncased model model describes it's training data as: \n\n> DistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).\n\nTo learn more about the SQuAD v1.1 dataset, see the SQuAD v1.1 data card.",
"#### Training Procedure",
"##### Preprocessing\n\nSee the distilbert-base-cased model card for further details.",
"##### Pretraining\n\nSee the distilbert-base-cased model card for further details.",
"## Evaluation\n\nAs discussed in the model repository\n\n> This model reaches a F1 score of 87.1 on the [SQuAD v1.1] dev set (for comparison, BERT bert-base-cased version reaches a F1 score of 88.7).",
"## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD.\n\n- Hardware Type: 8 16GB V100 GPUs\n- Hours used: 90 hours\n- Cloud Provider: Unknown\n- Compute Region: Unknown\n- Carbon Emitted: Unknown",
"## Technical Specifications\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\n\n\n\nAPA: \n- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"## Model Card Authors\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#transformers #pytorch #tf #rust #safetensors #openvino #distilbert #question-answering #en #dataset-squad #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# DistilBERT base cased distilled SQuAD",
"## Table of Contents\n- Model Details\n- How To Get Started With the Model\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- Technical Specifications\n- Citation Information\n- Model Card Authors",
"## Model Details\n\nModel Description: The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, adistilled version of BERT, and the paper DistilBERT, adistilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% less parameters than *bert-base-uncased*, runs 60% faster while preserving over 95% of BERT's performances as measured on the GLUE language understanding benchmark.\n\nThis model is a fine-tune checkpoint of DistilBERT-base-cased, fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1. \n\n- Developed by: Hugging Face\n- Model Type: Transformer-based language model\n- Language(s): English \n- License: Apache 2.0\n- Related Models: DistilBERT-base-cased\n- Resources for more information:\n - See this repository for more about Distil\\* (a class of compressed models including this model)\n - See Sanh et al. (2019) for more information about knowledge distillation and the training procedure",
"## How to Get Started with the Model \n\nUse the code below to get started with the model. \n\n\n\nHere is how to use this model in PyTorch:\n\n\n\nAnd in TensorFlow:",
"## Uses\n\nThis model can be used for question answering.",
"#### Misuse and Out-of-scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.",
"## Risks, Limitations and Biases\n\nCONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"## Training",
"#### Training Data\n\nThe distilbert-base-cased model was trained using the same data as the distilbert-base-uncased model. The distilbert-base-uncased model model describes it's training data as: \n\n> DistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).\n\nTo learn more about the SQuAD v1.1 dataset, see the SQuAD v1.1 data card.",
"#### Training Procedure",
"##### Preprocessing\n\nSee the distilbert-base-cased model card for further details.",
"##### Pretraining\n\nSee the distilbert-base-cased model card for further details.",
"## Evaluation\n\nAs discussed in the model repository\n\n> This model reaches a F1 score of 87.1 on the [SQuAD v1.1] dev set (for comparison, BERT bert-base-cased version reaches a F1 score of 88.7).",
"## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD.\n\n- Hardware Type: 8 16GB V100 GPUs\n- Hours used: 90 hours\n- Cloud Provider: Unknown\n- Compute Region: Unknown\n- Carbon Emitted: Unknown",
"## Technical Specifications\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\n\n\n\nAPA: \n- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"## Model Card Authors\n\nThis model card was written by the Hugging Face team."
] |
fill-mask | transformers |
# Model Card for DistilBERT base model (cased)
This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-cased).
It was introduced in [this paper](https://arxiv.org/abs/1910.01108).
The code for the distillation process can be found
[here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation).
This model is cased: it does make a difference between english and English.
All the training details on the pre-training, the uses, limitations and potential biases (included below) are the same as for [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased).
We highly encourage you to check it out if you want to know more.
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives:
- Distillation loss: the model was trained to return the same probabilities as the BERT base model.
- Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a
sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the
model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that
usually see the words one after the other, or from autoregressive models like GPT which internally mask the future
tokens. It allows the model to learn a bidirectional representation of the sentence.
- Cosine embedding loss: the model was also trained to generate hidden states as close as possible to those of the BERT base
model.
This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference or downstream tasks.
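To make the combination of these objectives concrete, here is a simplified sketch of a joint training loss; the function name, the temperature and the loss weights are illustrative assumptions, not the exact values used to train this model:

```python
import torch
import torch.nn.functional as F

def distilbert_style_loss(student_logits, teacher_logits, student_hidden, teacher_hidden,
                          labels, temperature=2.0, w_ce=5.0, w_mlm=2.0, w_cos=1.0):
    # Distillation loss: match the teacher's softened output distribution
    loss_ce = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Masked language modeling loss (labels are -100 outside masked positions)
    loss_mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1), ignore_index=-100
    )
    # Cosine embedding loss: pull student hidden states toward the teacher's
    flat_student = student_hidden.view(-1, student_hidden.size(-1))
    flat_teacher = teacher_hidden.view(-1, teacher_hidden.size(-1))
    target = flat_student.new_ones(flat_student.size(0))
    loss_cos = F.cosine_embedding_loss(flat_student, flat_teacher, target)
    return w_ce * loss_ce + w_mlm * loss_mlm + w_cos * loss_cos
```

The relative weights control the trade-off between matching the teacher's distribution, fitting the MLM objective, and aligning the hidden states.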
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=distilbert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.05292855575680733,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.03968575969338417,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a business model. [SEP]",
'score': 0.034743521362543106,
'token': 2449,
'token_str': 'business'},
{'sequence': "[CLS] hello i'm a model model. [SEP]",
'score': 0.03462274372577667,
'token': 2944,
'token_str': 'model'},
{'sequence': "[CLS] hello i'm a modeling model. [SEP]",
'score': 0.018145186826586723,
'token': 11643,
'token_str': 'modeling'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import DistilBertTokenizer, TFDistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("The White man worked as a [MASK].")
[{'sequence': '[CLS] the white man worked as a blacksmith. [SEP]',
'score': 0.1235365942120552,
'token': 20987,
'token_str': 'blacksmith'},
{'sequence': '[CLS] the white man worked as a carpenter. [SEP]',
'score': 0.10142576694488525,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the white man worked as a farmer. [SEP]',
'score': 0.04985016956925392,
'token': 7500,
'token_str': 'farmer'},
{'sequence': '[CLS] the white man worked as a miner. [SEP]',
'score': 0.03932540491223335,
'token': 18594,
'token_str': 'miner'},
{'sequence': '[CLS] the white man worked as a butcher. [SEP]',
'score': 0.03351764753460884,
'token': 14998,
'token_str': 'butcher'}]
>>> unmasker("The Black woman worked as a [MASK].")
[{'sequence': '[CLS] the black woman worked as a waitress. [SEP]',
'score': 0.13283951580524445,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the black woman worked as a nurse. [SEP]',
'score': 0.12586183845996857,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the black woman worked as a maid. [SEP]',
'score': 0.11708822101354599,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the black woman worked as a prostitute. [SEP]',
'score': 0.11499975621700287,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the black woman worked as a housekeeper. [SEP]',
'score': 0.04722772538661957,
'token': 22583,
'token_str': 'housekeeper'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset
consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
(excluding lists, tables and headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short illustrative sketch is given after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
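A purely illustrative sketch of this masking scheme (not the actual preprocessing code; the token IDs, the `[MASK]` ID and the vocabulary size are passed in by the caller):

```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Return (masked_input, labels) following the 15% / 80-10-10 scheme above."""
    masked = list(token_ids)
    labels = [-100] * len(token_ids)  # -100 = position ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_probability:
            labels[i] = tok                     # the model must predict the original token
            roll = random.random()
            if roll < 0.8:
                masked[i] = mask_token_id       # 80%: replace with [MASK]
            elif roll < 0.9:
                masked[i] = random.randrange(vocab_size)  # 10%: random token
            # remaining 10%: leave the token unchanged
    return masked, labels
```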
### Pretraining
The model was trained on 8 16 GB V100 GPUs for 90 hours. See the
[training code](https://github.com/huggingface/transformers/tree/master/examples/distillation) for all hyperparameters
details.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| | 81.5 | 87.8 | 88.2 | 90.4 | 47.2 | 85.5 | 85.6 | 60.6 |
### BibTeX entry and citation info
```bibtex
@article{Sanh2019DistilBERTAD,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
journal={ArXiv},
year={2019},
volume={abs/1910.01108}
}
```
<a href="https://huggingface.co/exbert/?model=distilbert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | distilbert/distilbert-base-cased | null | [
"transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"distilbert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.01108"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #onnx #safetensors #distilbert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| Model Card for DistilBERT base model (cased)
============================================
This model is a distilled version of the BERT base model.
It was introduced in this paper.
The code for the distillation process can be found
here.
This model is cased: it does make a difference between english and English.
All the training details on the pre-training, the uses, limitations and potential biases (included below) are the same as for DistilBERT-base-uncased.
We highly encourage to check it if you want to know more.
Model description
-----------------
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives:
* Distillation loss: the model was trained to return the same probabilities as the BERT base model.
* Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a
sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the
model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that
usually see the words one after the other, or from autoregressive models like GPT which internally mask the future
tokens. It allows the model to learn a bidirectional representation of the sentence.
* Cosine embedding loss: the model was also trained to generate hidden states as close as possible as the BERT base
model.
This way, the model learns the same inner representation of the English language than its teacher model, while being
faster for inference or downstream tasks.
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
the bias of its teacher model.
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
DistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset
consisting of 11,038 unpublished books and English Wikipedia
(excluding lists, tables and headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 16 GB V100 for 90 hours. See the
training code for all hyperparameters
details.
Evaluation results
------------------
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. It also inherits some of\nthe bias of its teacher model.\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nDistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset\nconsisting of 11,038 unpublished books and English Wikipedia\n(excluding lists, tables and headers).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 8 16 GB V100 for 90 hours. See the\ntraining code for all hyperparameters\ndetails.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] | [
"TAGS\n#transformers #pytorch #tf #onnx #safetensors #distilbert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. It also inherits some of\nthe bias of its teacher model.\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nDistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset\nconsisting of 11,038 unpublished books and English Wikipedia\n(excluding lists, tables and headers).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 8 16 GB V100 for 90 hours. See the\ntraining code for all hyperparameters\ndetails.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
fill-mask | transformers | ## distilbert-base-german-cased
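A minimal usage sketch for this checkpoint (assuming the `transformers` library is installed; the German example sentence is only an illustration):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="distilbert-base-german-cased")
text = f"Heute ist ein schöner {unmasker.tokenizer.mask_token}."
print(unmasker(text))
```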
| {"language": "de", "license": "apache-2.0"} | distilbert/distilbert-base-german-cased | null | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"de"
] | TAGS
#transformers #pytorch #safetensors #distilbert #fill-mask #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ## distilbert-base-german-cased
| [
"## distilbert-base-german-cased"
] | [
"TAGS\n#transformers #pytorch #safetensors #distilbert #fill-mask #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## distilbert-base-german-cased"
] |
fill-mask | transformers |
# Model Card for DistilBERT base multilingual (cased)
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
This model is a distilled version of the [BERT base multilingual model](https://huggingface.co/bert-base-multilingual-cased/). The code for the distillation process can be found [here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation). This model is cased: it does make a difference between english and English.
The model is trained on the concatenation of Wikipedia in 104 different languages listed [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages).
The model has 6 layers, a hidden dimension of 768 and 12 attention heads, for a total of 134M parameters (compared to 177M parameters for mBERT-base).
On average, this model, referred to as DistilmBERT, is twice as fast as mBERT-base.
We encourage potential users of this model to check out the [BERT base multilingual model card](https://huggingface.co/bert-base-multilingual-cased) to learn more about usage, limitations and potential biases.
- **Developed by:** Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf (Hugging Face)
- **Model type:** Transformer-based language model
- **Language(s) (NLP):** 104 languages; see full list [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages)
- **License:** Apache 2.0
- **Related Models:** [BERT base multilingual model](https://huggingface.co/bert-base-multilingual-cased)
- **Resources for more information:**
- [GitHub Repository](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md)
- [Associated Paper](https://arxiv.org/abs/1910.01108)
# Uses
## Direct Use and Downstream Use
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.
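For example, here is a minimal sketch of loading this checkpoint with a freshly initialized sequence-classification head before fine-tuning (the number of labels below is an arbitrary illustration):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")

# num_labels=3 is an assumption for illustration; set it to match your downstream task
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-multilingual-cased", num_labels=3
)
```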
## Out of Scope Use
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training Details
- The model was pretrained with the supervision of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the concatenation of Wikipedia in 104 different languages
- The model has 6 layers, a hidden dimension of 768 and 12 attention heads, for a total of 134M parameters.
- Further information about the training procedure and data is included in the [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) model card.
# Evaluation
The model developers report the following accuracy results for DistilmBERT (see [GitHub Repo](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md)):
> Here are the results on the test sets for 6 of the languages available in XNLI. The results are computed in the zero shot setting (trained on the English portion and evaluated on the target language portion):
| Model | English | Spanish | Chinese | German | Arabic | Urdu |
| :---: | :---: | :---: | :---: | :---: | :---: | :---:|
| mBERT base cased (computed) | 82.1 | 74.6 | 69.1 | 72.3 | 66.4 | 58.5 |
| mBERT base uncased (reported)| 81.4 | 74.3 | 63.8 | 70.5 | 62.1 | 58.3 |
| DistilmBERT | 78.2 | 69.1 | 64.0 | 66.3 | 59.1 | 54.7 |
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
```bibtex
@article{Sanh2019DistilBERTAD,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
journal={ArXiv},
year={2019},
volume={abs/1910.01108}
}
```
APA
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
# How to Get Started With the Model
You can use the model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-multilingual-cased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'score': 0.040800247341394424,
'sequence': "Hello I'm a virtual model.",
'token': 37859,
'token_str': 'virtual'},
{'score': 0.020015988498926163,
'sequence': "Hello I'm a big model.",
'token': 22185,
'token_str': 'big'},
{'score': 0.018680453300476074,
'sequence': "Hello I'm a Hello model.",
'token': 31178,
'token_str': 'Hello'},
{'score': 0.017396586015820503,
'sequence': "Hello I'm a model model.",
'token': 13192,
'token_str': 'model'},
{'score': 0.014229810796678066,
'sequence': "Hello I'm a perfect model.",
'token': 43477,
'token_str': 'perfect'}]
```
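Here is a minimal sketch of using the model to extract hidden-state features for a given text in PyTorch (the example sentence is arbitrary):

```python
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = AutoModel.from_pretrained("distilbert-base-multilingual-cased")

inputs = tokenizer("Bonjour, je suis un modèle multilingue.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, 768)
features = outputs.last_hidden_state
```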
| {"language": ["multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "hr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo"], "license": "apache-2.0", "datasets": ["wikipedia"]} | distilbert/distilbert-base-multilingual-cased | null | [
"transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"distilbert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1910.01108",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.01108",
"1910.09700"
] | [
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"hr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo"
] | TAGS
#transformers #pytorch #tf #onnx #safetensors #distilbert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk #mg #ms #ml #mr #mn #min #ne #new #nb #nn #oc #fa #pms #pl #pt #pa #ro #ru #sco #sr #scn #sk #sl #aze #es #su #sw #sv #tl #tg #th #ta #tt #te #tr #uk #ud #uz #vi #vo #war #cy #fry #pnb #yo #dataset-wikipedia #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| Model Card for DistilBERT base multilingual (cased)
===================================================
Table of Contents
=================
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training Details
5. Evaluation
6. Environmental Impact
7. Citation
8. How To Get Started With the Model
Model Details
=============
Model Description
-----------------
This model is a distilled version of the BERT base multilingual model. The code for the distillation process can be found here. This model is cased: it does make a difference between english and English.
The model is trained on the concatenation of Wikipedia in 104 different languages listed here.
The model has 6 layers, a hidden dimension of 768 and 12 attention heads, for a total of 134M parameters (compared to 177M parameters for mBERT-base).
On average, this model, referred to as DistilmBERT, is twice as fast as mBERT-base.
We encourage potential users of this model to check out the BERT base multilingual model card to learn more about usage, limitations and potential biases.
* Developed by: Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf (Hugging Face)
* Model type: Transformer-based language model
* Language(s) (NLP): 104 languages; see full list here
* License: Apache 2.0
* Related Models: BERT base multilingual model
* Resources for more information:
+ GitHub Repository
+ Associated Paper
Uses
====
Direct Use and Downstream Use
-----------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.
Out of Scope Use
----------------
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
Bias, Risks, and Limitations
============================
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
Recommendations
---------------
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training Details
================
* The model was pretrained with the supervision of bert-base-multilingual-cased on the concatenation of Wikipedia in 104 different languages
* The model has 6 layers, a hidden dimension of 768 and 12 attention heads, for a total of 134M parameters.
* Further information about the training procedure and data is included in the bert-base-multilingual-cased model card.
Evaluation
==========
The model developers report the following accuracy results for DistilmBERT (see GitHub Repo):
>
> Here are the results on the test sets for 6 of the languages available in XNLI. The results are computed in the zero shot setting (trained on the English portion and evaluated on the target language portion):
>
>
>
Environmental Impact
====================
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: More information needed
* Hours used: More information needed
* Cloud Provider: More information needed
* Compute Region: More information needed
* Carbon Emitted: More information needed
APA
* Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
How to Get Started With the Model
=================================
You can use the model directly with a pipeline for masked language modeling:
| [] | [
"TAGS\n#transformers #pytorch #tf #onnx #safetensors #distilbert #fill-mask #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk #mg #ms #ml #mr #mn #min #ne #new #nb #nn #oc #fa #pms #pl #pt #pa #ro #ru #sco #sr #scn #sk #sl #aze #es #su #sw #sv #tl #tg #th #ta #tt #te #tr #uk #ud #uz #vi #vo #war #cy #fry #pnb #yo #dataset-wikipedia #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
question-answering | transformers |
# DistilBERT base uncased distilled SQuAD
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than *bert-base-uncased* and runs 60% faster while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark.
This model is a fine-tune checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned using (a second step of) knowledge distillation on [SQuAD v1.1](https://huggingface.co/datasets/squad).
- **Developed by:** Hugging Face
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** Apache 2.0
- **Related Models:** [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased)
- **Resources for more information:**
- See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including this model)
- See [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure
## How to Get Started with the Model
Use the code below to get started with the model.
```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-uncased-distilled-squad')
>>> context = r"""
... Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
... question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
... a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script.
... """
>>> result = question_answerer(question="What is a good example of a question answering dataset?", context=context)
>>> print(
... f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
...)
Answer: 'SQuAD dataset', score: 0.4704, start: 147, end: 160
```
Here is how to use this model in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering
import torch
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased-distilled-squad')
model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
answer_start_index = torch.argmax(outputs.start_logits)
answer_end_index = torch.argmax(outputs.end_logits)
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```
And in TensorFlow:
```python
from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering
import tensorflow as tf
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-distilled-squad")
model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased-distilled-squad")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```
## Uses
This model can be used for question answering.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-uncased-distilled-squad')
>>> context = r"""
... Alice is sitting on the bench. Bob is sitting next to her.
... """
>>> result = question_answerer(question="Who is the CEO?", context=context)
>>> print(
... f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
...)
Answer: 'Bob', score: 0.4183, start: 32, end: 35
```
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model card describes its training data as:
> DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
To learn more about the SQuAD v1.1 dataset, see the [SQuAD v1.1 data card](https://huggingface.co/datasets/squad).
#### Training Procedure
##### Preprocessing
See the [distilbert-base-uncased model card](https://huggingface.co/distilbert-base-uncased) for further details.
##### Pretraining
See the [distilbert-base-uncased model card](https://huggingface.co/distilbert-base-uncased) for further details.
## Evaluation
As discussed in the [model repository](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md)
> This model reaches a F1 score of 86.9 on the [SQuAD v1.1] dev set (for comparison, Bert bert-base-uncased version reaches a F1 score of 88.5).
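A minimal sketch of how such an evaluation could be reproduced on a small slice of the dev set with the `datasets` and `evaluate` libraries (the subset size is arbitrary and this is not the official evaluation script, so the resulting scores will only approximate the reported number):

```python
from datasets import load_dataset
from transformers import pipeline
import evaluate

squad_metric = evaluate.load("squad")
qa = pipeline("question-answering", model="distilbert-base-uncased-distilled-squad")

# A 100-example slice of the SQuAD v1.1 dev set, for illustration only
dev = load_dataset("squad", split="validation[:100]")

predictions, references = [], []
for example in dev:
    result = qa(question=example["question"], context=example["context"])
    predictions.append({"id": example["id"], "prediction_text": result["answer"]})
    references.append({"id": example["id"], "answers": example["answers"]})

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': ..., 'f1': ...}
```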
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/pdf/1910.01108.pdf). Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD.
- **Hardware Type:** 8 16GB V100 GPUs
- **Hours used:** 90 hours
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://arxiv.org/abs/1910.01108) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@inproceedings{sanh2019distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
booktitle={NeurIPS EMC^2 Workshop},
year={2019}
}
```
APA:
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
## Model Card Authors
This model card was written by the Hugging Face team.
| {"language": "en", "license": "apache-2.0", "datasets": ["squad"], "widget": [{"text": "Which name is also used to describe the Amazon rainforest in English?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}, {"text": "How many square kilometers of rainforest is covered in the basin?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}]} | distilbert/distilbert-base-uncased-distilled-squad | null | [
"transformers",
"pytorch",
"tf",
"tflite",
"coreml",
"safetensors",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.01108",
"1910.09700"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #tflite #coreml #safetensors #distilbert #question-answering #en #dataset-squad #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# DistilBERT base uncased distilled SQuAD
## Table of Contents
- Model Details
- How To Get Started With the Model
- Uses
- Risks, Limitations and Biases
- Training
- Evaluation
- Environmental Impact
- Technical Specifications
- Citation Information
- Model Card Authors
## Model Details
Model Description: The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT, and the paper DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than *bert-base-uncased* and runs 60% faster while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark.
This model is a fine-tune checkpoint of DistilBERT-base-uncased, fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1.
- Developed by: Hugging Face
- Model Type: Transformer-based language model
- Language(s): English
- License: Apache 2.0
- Related Models: DistilBERT-base-uncased
- Resources for more information:
- See this repository for more about Distil\* (a class of compressed models including this model)
- See Sanh et al. (2019) for more information about knowledge distillation and the training procedure
## How to Get Started with the Model
Use the code below to get started with the model.
Here is how to use this model in PyTorch:
And in TensorFlow:
## Uses
This model can be used for question answering.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The distilbert-base-uncased model card describes its training data as:
> DistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).
To learn more about the SQuAD v1.1 dataset, see the SQuAD v1.1 data card.
#### Training Procedure
##### Preprocessing
See the distilbert-base-uncased model card for further details.
##### Pretraining
See the distilbert-base-uncased model card for further details.
## Evaluation
As discussed in the model repository
> This model reaches a F1 score of 86.9 on the [SQuAD v1.1] dev set (for comparison, Bert bert-base-uncased version reaches a F1 score of 88.5).
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD.
- Hardware Type: 8 16GB V100 GPUs
- Hours used: 90 hours
- Cloud Provider: Unknown
- Compute Region: Unknown
- Carbon Emitted: Unknown
## Technical Specifications
See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.
APA:
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
## Model Card Authors
This model card was written by the Hugging Face team.
| [
"# DistilBERT base uncased distilled SQuAD",
"## Table of Contents\n- Model Details\n- How To Get Started With the Model\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- Technical Specifications\n- Citation Information\n- Model Card Authors",
"## Model Details\n\nModel Description: The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, adistilled version of BERT, and the paper DistilBERT, adistilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% less parameters than *bert-base-uncased*, runs 60% faster while preserving over 95% of BERT's performances as measured on the GLUE language understanding benchmark.\n\nThis model is a fine-tune checkpoint of DistilBERT-base-uncased, fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1. \n\n- Developed by: Hugging Face\n- Model Type: Transformer-based language model\n- Language(s): English \n- License: Apache 2.0\n- Related Models: DistilBERT-base-uncased\n- Resources for more information:\n - See this repository for more about Distil\\* (a class of compressed models including this model)\n - See Sanh et al. (2019) for more information about knowledge distillation and the training procedure",
"## How to Get Started with the Model \n\nUse the code below to get started with the model. \n\n\n\nHere is how to use this model in PyTorch:\n\n\n\nAnd in TensorFlow:",
"## Uses\n\nThis model can be used for question answering.",
"#### Misuse and Out-of-scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.",
"## Risks, Limitations and Biases\n\nCONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"## Training",
"#### Training Data\n\nThe distilbert-base-uncased model model describes it's training data as: \n\n> DistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).\n\nTo learn more about the SQuAD v1.1 dataset, see the SQuAD v1.1 data card.",
"#### Training Procedure",
"##### Preprocessing\n\nSee the distilbert-base-uncased model card for further details.",
"##### Pretraining\n\nSee the distilbert-base-uncased model card for further details.",
"## Evaluation\n\nAs discussed in the model repository\n\n> This model reaches a F1 score of 86.9 on the [SQuAD v1.1] dev set (for comparison, Bert bert-base-uncased version reaches a F1 score of 88.5).",
"## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD.\n\n- Hardware Type: 8 16GB V100 GPUs\n- Hours used: 90 hours\n- Cloud Provider: Unknown\n- Compute Region: Unknown\n- Carbon Emitted: Unknown",
"## Technical Specifications\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\n\n\n\nAPA: \n- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"## Model Card Authors\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#transformers #pytorch #tf #tflite #coreml #safetensors #distilbert #question-answering #en #dataset-squad #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# DistilBERT base uncased distilled SQuAD",
"## Table of Contents\n- Model Details\n- How To Get Started With the Model\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- Technical Specifications\n- Citation Information\n- Model Card Authors",
"## Model Details\n\nModel Description: The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, adistilled version of BERT, and the paper DistilBERT, adistilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% less parameters than *bert-base-uncased*, runs 60% faster while preserving over 95% of BERT's performances as measured on the GLUE language understanding benchmark.\n\nThis model is a fine-tune checkpoint of DistilBERT-base-uncased, fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1. \n\n- Developed by: Hugging Face\n- Model Type: Transformer-based language model\n- Language(s): English \n- License: Apache 2.0\n- Related Models: DistilBERT-base-uncased\n- Resources for more information:\n - See this repository for more about Distil\\* (a class of compressed models including this model)\n - See Sanh et al. (2019) for more information about knowledge distillation and the training procedure",
"## How to Get Started with the Model \n\nUse the code below to get started with the model. \n\n\n\nHere is how to use this model in PyTorch:\n\n\n\nAnd in TensorFlow:",
"## Uses\n\nThis model can be used for question answering.",
"#### Misuse and Out-of-scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.",
"## Risks, Limitations and Biases\n\nCONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"## Training",
"#### Training Data\n\nThe distilbert-base-uncased model model describes it's training data as: \n\n> DistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).\n\nTo learn more about the SQuAD v1.1 dataset, see the SQuAD v1.1 data card.",
"#### Training Procedure",
"##### Preprocessing\n\nSee the distilbert-base-uncased model card for further details.",
"##### Pretraining\n\nSee the distilbert-base-uncased model card for further details.",
"## Evaluation\n\nAs discussed in the model repository\n\n> This model reaches a F1 score of 86.9 on the [SQuAD v1.1] dev set (for comparison, Bert bert-base-uncased version reaches a F1 score of 88.5).",
"## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD.\n\n- Hardware Type: 8 16GB V100 GPUs\n- Hours used: 90 hours\n- Cloud Provider: Unknown\n- Compute Region: Unknown\n- Carbon Emitted: Unknown",
"## Technical Specifications\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\n\n\n\nAPA: \n- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"## Model Card Authors\n\nThis model card was written by the Hugging Face team."
] |
text-classification | transformers |
# DistilBERT base uncased finetuned SST-2
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
## Model Details
**Model Description:** This model is a fine-tune checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned on SST-2.
This model reaches an accuracy of 91.3 on the dev set (for comparison, the BERT bert-base-uncased version reaches an accuracy of 92.7).
- **Developed by:** Hugging Face
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details about DistilBERT, we encourage users to check out [this model card](https://huggingface.co/distilbert-base-uncased).
- **Resources for more information:**
- [Model Documentation](https://huggingface.co/docs/transformers/main/en/model_doc/distilbert#transformers.DistilBertForSequenceClassification)
- [DistilBERT paper](https://arxiv.org/abs/1910.01108)
## How to Get Started With the Model
Example of single-label classification:
```python
import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
```
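The checkpoint can also be used through the pipeline API; here is a minimal sketch (the example sentence is arbitrary):

```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english"
)
print(classifier("I really enjoyed this movie!"))
# [{'label': 'POSITIVE', 'score': ...}]
```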
## Uses
#### Direct Use
This model can be used for text classification tasks such as sentiment analysis. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
Based on a few experiments, we observed that this model could produce biased predictions that target underrepresented populations.
For instance, for sentences like `This film was filmed in COUNTRY`, this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this [colab](https://colab.research.google.com/gist/ageron/fb2f64fb145b4bc7c49efc97e5f114d3/biasmap.ipynb), [Aurélien Géron](https://twitter.com/aureliengeron) made an interesting map plotting these probabilities for each country.
<img src="https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/resolve/main/map.jpeg" alt="Map of positive probabilities per country." width="500"/>
We strongly advise users to thoroughly probe these aspects on their use-cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: [WinoBias](https://huggingface.co/datasets/wino_bias), [WinoGender](https://huggingface.co/datasets/super_glue), [Stereoset](https://huggingface.co/datasets/stereoset).
# Training
#### Training Data
The authors use the Stanford Sentiment Treebank ([sst2](https://huggingface.co/datasets/sst2)) corpus to fine-tune the model.
#### Training Procedure
###### Fine-tuning hyper-parameters
- learning_rate = 1e-5
- batch_size = 32
- warmup = 600
- max_seq_length = 128
- num_train_epochs = 3.0
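A minimal sketch of a fine-tuning run that mirrors these hyper-parameters, using the `datasets` library and the `Trainer` API (this is an illustration rather than the authors' original training script; the GLUE `sst2` column names are assumed):

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("glue", "sst2")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    # max_seq_length = 128, as listed above
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-sst-2",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    warmup_steps=600,
    num_train_epochs=3.0,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)
trainer.train()
```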
| {"language": "en", "license": "apache-2.0", "datasets": ["sst2", "glue"], "model-index": [{"name": "distilbert-base-uncased-finetuned-sst-2-english", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "sst2", "split": "validation"}, "metrics": [{"type": "accuracy", "value": 0.9105504587155964, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2YyOGMxYjY2Y2JhMjkxNjIzN2FmMjNiNmM2ZWViNGY3MTNmNWI2YzhiYjYxZTY0ZGUyN2M1NGIxZjRiMjQwZiIsInZlcnNpb24iOjF9.uui0srxV5ZHRhxbYN6082EZdwpnBgubPJ5R2-Wk8HTWqmxYE3QHidevR9LLAhidqGw6Ih93fK0goAXncld_gBg"}, {"type": "precision", "value": 0.8978260869565218, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzgwYTYwYjA2MmM0ZTYwNDk0M2NmNTBkZmM2NGNhYzQ1OGEyN2NkNDQ3Mzc2NTQyMmZiNDJiNzBhNGVhZGUyOSIsInZlcnNpb24iOjF9.eHjLmw3K02OU69R2Au8eyuSqT3aBDHgZCn8jSzE3_urD6EUSSsLxUpiAYR4BGLD_U6-ZKcdxVo_A2rdXqvUJDA"}, {"type": "recall", "value": 0.9301801801801802, "name": "Recall", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGIzM2E3MTI2Mzc2MDYwNmU3ZTVjYmZmZDBkNjY4ZTc5MGY0Y2FkNDU3NjY1MmVkNmE3Y2QzMzAwZDZhOWY1NiIsInZlcnNpb24iOjF9.PUZlqmct13-rJWBXdHm5tdkXgETL9F82GNbbSR4hI8MB-v39KrK59cqzFC2Ac7kJe_DtOeUyosj34O_mFt_1DQ"}, {"type": "auc", "value": 0.9716626673402374, "name": "AUC", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDM0YWIwZmQ4YjUwOGZmMWU2MjI1YjIxZGQ2MzNjMzRmZmYxMzZkNGFjODhlMDcyZDM1Y2RkMWZlOWQ0MWYwNSIsInZlcnNpb24iOjF9.E7GRlAXmmpEkTHlXheVkuL1W4WNjv4JO3qY_WCVsTVKiO7bUu0UVjPIyQ6g-J1OxsfqZmW3Leli1wY8vPBNNCQ"}, {"type": "f1", "value": 0.9137168141592922, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGU4MjNmOGYwZjZjMDQ1ZTkyZTA4YTc1MWYwOTM0NDM4ZWY1ZGVkNDY5MzNhYTQyZGFlNzIyZmUwMDg3NDU0NyIsInZlcnNpb24iOjF9.mW5ftkq50Se58M-jm6a2Pu93QeKa3MfV7xcBwvG3PSB_KNJxZWTCpfMQp-Cmx_EMlmI2siKOyd8akYjJUrzJCA"}, {"type": "loss", "value": 0.39013850688934326, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTZiNzAyZDc0MzUzMmE1MGJiN2JlYzFiODE5ZTNlNGE4MmI4YzRiMTc2ODEzMTUwZmEzOTgxNzc4YjJjZTRmNiIsInZlcnNpb24iOjF9.VqIC7uYC-ZZ8ss9zQOlRV39YVOOLc5R36sIzCcVz8lolh61ux_5djm2XjpP6ARc6KqEnXC4ZtfNXsX2HZfrtCQ"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "sst2", "type": "sst2", "config": "default", "split": "train"}, "metrics": [{"type": "accuracy", "value": 0.9885521685548412, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2I3NzU3YzhmMDkxZTViY2M3OTY1NmI0ZTdmMDQxNjNjYzJiZmQxNzczM2E4YmExYTY5ODY0NDBkY2I4ZjNkOCIsInZlcnNpb24iOjF9.4Gtk3FeVc9sPWSqZIaeUXJ9oVlPzm-NmujnWpK2y5s1Vhp1l6Y1pK5_78wW0-NxSvQqV6qd5KQf_OAEpVAkQDA"}, {"type": "precision", "value": 0.9881965062029833, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDdlZDMzY2I3MTAwYTljNmM4MGMyMzU2YjAzZDg1NDYwN2ZmM2Y5OWZhMjUyMGJiNjY1YmZiMzFhMDI2ODFhNyIsInZlcnNpb24iOjF9.cqmv6yBxu4St2mykRWrZ07tDsiSLdtLTz2hbqQ7Gm1rMzq9tdlkZ8MyJRxtME_Y8UaOG9rs68pV-gKVUs8wABw"}, {"type": "precision", "value": 0.9885521685548412, "name": "Precision Micro", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjFlYzAzNmE1YjljNjUwNzBjZjEzZDY0ZDQyMmY5ZWM2OTBhNzNjYjYzYTk1YWE1NjU3YTMxZDQwOTE1Y2FkNyIsInZlcnNpb24iOjF9.jnCHOkUHuAOZZ_ZMVOnetx__OVJCS6LOno4caWECAmfrUaIPnPNV9iJ6izRO3sqkHRmxYpWBb-27GJ4N3LU-BQ"}, {"type": "precision", "value": 0.9885639626373408, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGUyODFjNjBlNTE2MTY3ZDAxOGU1N2U0YjUyY2NiZjhkOGVmYThjYjBkNGU3NTRkYzkzNDQ2MmMwMjkwMWNiMyIsInZlcnNpb24iOjF9.zTNabMwApiZyXdr76QUn7WgGB7D7lP-iqS3bn35piqVTNsv3wnKjZOaKFVLIUvtBXq4gKw7N2oWxvWc4OcSNDg"}, {"type": "recall", "value": 0.9886145346602994, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTU1YjlhODU3YTkyNTdiZDcwZGFlZDBiYjY0N2NjMGM2NTRiNjQ3MDNjNGMxOWY2ZGQ4NWU1YmMzY2UwZTI3YSIsInZlcnNpb24iOjF9.xaLPY7U-wHsJ3DDui1yyyM-xWjL0Jz5puRThy7fczal9x05eKEQ9s0a_WD-iLmapvJs0caXpV70hDe2NLcs-DA"}, {"type": "recall", "value": 0.9885521685548412, "name": "Recall Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODE0YTU0MDBlOGY4YzU0MjY5MzA3OTk2OGNhOGVkMmU5OGRjZmFiZWI2ZjY5ODEzZTQzMTI0N2NiOTVkNDliYiIsInZlcnNpb24iOjF9.SOt1baTBbuZRrsvGcak2sUwoTrQzmNCbyV2m1_yjGsU48SBH0NcKXicidNBSnJ6ihM5jf_Lv_B5_eOBkLfNWDQ"}, {"type": "recall", "value": 0.9885521685548412, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWNkNmM0ZGRlNmYxYzIwNDk4OTI5MzIwZWU1NzZjZDVhMDcyNDFlMjBhNDQxODU5OWMwMWNhNGEzNjY3ZGUyOSIsInZlcnNpb24iOjF9.b15Fh70GwtlG3cSqPW-8VEZT2oy0CtgvgEOtWiYonOovjkIQ4RSLFVzVG-YfslaIyfg9RzMWzjhLnMY7Bpn2Aw"}, {"type": "f1", "value": 0.9884019815052447, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmM4NjQ5Yjk5ODRhYTU1MTY3MmRhZDBmODM1NTg3OTFiNWM4NDRmYjI0MzZkNmQ1MzE3MzcxODZlYzBkYTMyYSIsInZlcnNpb24iOjF9.74RaDK8nBVuGRl2Se_-hwQvP6c4lvVxGHpcCWB4uZUCf2_HoC9NT9u7P3pMJfH_tK2cpV7U3VWGgSDhQDi-UBQ"}, {"type": "f1", "value": 0.9885521685548412, "name": "F1 Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDRmYWRmMmQ0YjViZmQxMzhhYTUyOTE1MTc0ZDU1ZjQyZjFhMDYzYzMzZDE0NzZlYzQyOTBhMTBhNmM5NTlkMiIsInZlcnNpb24iOjF9.VMn_psdAHIZTlW6GbjERZDe8MHhwzJ0rbjV_VJyuMrsdOh5QDmko-wEvaBWNEdT0cEKsbggm-6jd3Gh81PfHAQ"}, {"type": "f1", "value": 0.9885546181087554, "name": "F1 Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjUyZWFhZDZhMGQ3MzBmYmRiNDVmN2FkZDBjMjk3ODk0OTAxNGZkMWE0NzU5ZjI0NzE0NGZiNzM0N2Y2NDYyOSIsInZlcnNpb24iOjF9.YsXBhnzEEFEW6jw3mQlFUuIrW7Gabad2Ils-iunYJr-myg0heF8NEnEWABKFE1SnvCWt-69jkLza6SupeyLVCA"}, {"type": "loss", "value": 0.040652573108673096, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTc3YjU3MjdjMzkxODA5MjU5NGUyY2NkMGVhZDg3ZWEzMmU1YWVjMmI0NmU2OWEyZTkzMTVjNDZiYTc0YjIyNCIsInZlcnNpb24iOjF9.lA90qXZVYiILHMFlr6t6H81Oe8a-4KmeX-vyCC1BDia2ofudegv6Vb46-4RzmbtuKeV6yy6YNNXxXxqVak1pAg"}]}]}]} | distilbert/distilbert-base-uncased-finetuned-sst-2-english | null | [
"transformers",
"pytorch",
"tf",
"rust",
"onnx",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:sst2",
"dataset:glue",
"arxiv:1910.01108",
"doi:10.57967/hf/0181",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.01108"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #rust #onnx #safetensors #distilbert #text-classification #en #dataset-sst2 #dataset-glue #arxiv-1910.01108 #doi-10.57967/hf/0181 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# DistilBERT base uncased finetuned SST-2
## Table of Contents
- Model Details
- How to Get Started With the Model
- Uses
- Risks, Limitations and Biases
- Training
## Model Details
Model Description: This model is a fine-tune checkpoint of DistilBERT-base-uncased, fine-tuned on SST-2.
This model reaches an accuracy of 91.3 on the dev set (for comparison, the BERT bert-base-uncased version reaches an accuracy of 92.7).
- Developed by: Hugging Face
- Model Type: Text Classification
- Language(s): English
- License: Apache-2.0
- Parent Model: For more details about DistilBERT, we encourage users to check out this model card.
- Resources for more information:
- Model Documentation
- DistilBERT paper
## How to Get Started With the Model
Example of single-label classification:
## Uses
#### Direct Use
This model can be used for text classification tasks such as sentiment analysis. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
Based on a few experiments, we observed that this model could produce biased predictions that target underrepresented populations.
For instance, for sentences like 'This film was filmed in COUNTRY', this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this colab, Aurélien Géron made an interesting map plotting these probabilities for each country.
<img src="URL alt="Map of positive probabilities per country." width="500"/>
We strongly advise users to thoroughly probe these aspects on their use-cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: WinoBias, WinoGender, Stereoset.
# Training
#### Training Data
The authors use the Stanford Sentiment Treebank (sst2) corpus to fine-tune the model.
#### Training Procedure
###### Fine-tuning hyper-parameters
- learning_rate = 1e-5
- batch_size = 32
- warmup = 600
- max_seq_length = 128
- num_train_epochs = 3.0
| [
"# DistilBERT base uncased finetuned SST-2",
"## Table of Contents\n- Model Details\n- How to Get Started With the Model\n- Uses\n- Risks, Limitations and Biases\n- Training",
"## Model Details\nModel Description: This model is a fine-tune checkpoint of DistilBERT-base-uncased, fine-tuned on SST-2.\nThis model reaches an accuracy of 91.3 on the dev set (for comparison, Bert bert-base-uncased version reaches an accuracy of 92.7).\n- Developed by: Hugging Face\n- Model Type: Text Classification\n- Language(s): English\n- License: Apache-2.0\n- Parent Model: For more details about DistilBERT, we encourage users to check out this model card.\n- Resources for more information:\n - Model Documentation\n - DistilBERT paper",
"## How to Get Started With the Model\n\nExample of single-label classification:\n",
"## Uses",
"#### Direct Use\n\nThis model can be used for topic classification. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.",
"#### Misuse and Out-of-scope Use\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.",
"## Risks, Limitations and Biases\n\nBased on a few experimentations, we observed that this model could produce biased predictions that target underrepresented populations.\n\nFor instance, for sentences like 'This film was filmed in COUNTRY', this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this colab, Aurélien Géron made an interesting map plotting these probabilities for each country.\n\n<img src=\"URL alt=\"Map of positive probabilities per country.\" width=\"500\"/>\n\nWe strongly advise users to thoroughly probe these aspects on their use-cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: WinoBias, WinoGender, Stereoset.",
"# Training",
"#### Training Data\n\n\nThe authors use the following Stanford Sentiment Treebank(sst2) corpora for the model.",
"#### Training Procedure",
"###### Fine-tuning hyper-parameters\n\n\n- learning_rate = 1e-5\n- batch_size = 32\n- warmup = 600\n- max_seq_length = 128\n- num_train_epochs = 3.0"
] | [
"TAGS\n#transformers #pytorch #tf #rust #onnx #safetensors #distilbert #text-classification #en #dataset-sst2 #dataset-glue #arxiv-1910.01108 #doi-10.57967/hf/0181 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# DistilBERT base uncased finetuned SST-2",
"## Table of Contents\n- Model Details\n- How to Get Started With the Model\n- Uses\n- Risks, Limitations and Biases\n- Training",
"## Model Details\nModel Description: This model is a fine-tune checkpoint of DistilBERT-base-uncased, fine-tuned on SST-2.\nThis model reaches an accuracy of 91.3 on the dev set (for comparison, Bert bert-base-uncased version reaches an accuracy of 92.7).\n- Developed by: Hugging Face\n- Model Type: Text Classification\n- Language(s): English\n- License: Apache-2.0\n- Parent Model: For more details about DistilBERT, we encourage users to check out this model card.\n- Resources for more information:\n - Model Documentation\n - DistilBERT paper",
"## How to Get Started With the Model\n\nExample of single-label classification:\n",
"## Uses",
"#### Direct Use\n\nThis model can be used for topic classification. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.",
"#### Misuse and Out-of-scope Use\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.",
"## Risks, Limitations and Biases\n\nBased on a few experimentations, we observed that this model could produce biased predictions that target underrepresented populations.\n\nFor instance, for sentences like 'This film was filmed in COUNTRY', this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this colab, Aurélien Géron made an interesting map plotting these probabilities for each country.\n\n<img src=\"URL alt=\"Map of positive probabilities per country.\" width=\"500\"/>\n\nWe strongly advise users to thoroughly probe these aspects on their use-cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: WinoBias, WinoGender, Stereoset.",
"# Training",
"#### Training Data\n\n\nThe authors use the following Stanford Sentiment Treebank(sst2) corpora for the model.",
"#### Training Procedure",
"###### Fine-tuning hyper-parameters\n\n\n- learning_rate = 1e-5\n- batch_size = 32\n- warmup = 600\n- max_seq_length = 128\n- num_train_epochs = 3.0"
] |
fill-mask | transformers |
# DistilBERT base model (uncased)
This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was
introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillation process can be found
[here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation). This model is uncased: it does
not make a difference between english and English.
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives:
- Distillation loss: the model was trained to return the same probabilities as the BERT base model.
- Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a
sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the
model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that
usually see the words one after the other, or from autoregressive models like GPT which internally mask the future
tokens. It allows the model to learn a bidirectional representation of the sentence.
- Cosine embedding loss: the model was also trained to generate hidden states as close as possible to those of the BERT base
model.
This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference or downstream tasks.
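As an illustration of how these three objectives combine, here is a minimal sketch of the distillation training loss, assuming PyTorch tensors for the student and teacher outputs. The temperature and the loss weights (`alpha_ce`, `alpha_mlm`, `alpha_cos`) are illustrative values, not necessarily the exact ones used for training.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      student_hidden, teacher_hidden,
                      temperature=2.0, alpha_ce=5.0, alpha_mlm=2.0, alpha_cos=1.0):
    # Distillation loss: match the teacher's softened output distribution.
    loss_ce = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Masked language modeling loss on the [MASK]ed positions
    # (non-masked positions carry the label -100 and are ignored).
    loss_mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )

    # Cosine embedding loss: pull the student's hidden states toward the teacher's.
    flat_student = student_hidden.view(-1, student_hidden.size(-1))
    flat_teacher = teacher_hidden.view(-1, teacher_hidden.size(-1))
    target = torch.ones(flat_student.size(0), device=flat_student.device)
    loss_cos = F.cosine_embedding_loss(flat_student, flat_teacher, target)

    return alpha_ce * loss_ce + alpha_mlm * loss_mlm + alpha_cos * loss_cos
```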
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=distilbert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.05292855575680733,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.03968575969338417,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a business model. [SEP]",
'score': 0.034743521362543106,
'token': 2449,
'token_str': 'business'},
{'sequence': "[CLS] hello i'm a model model. [SEP]",
'score': 0.03462274372577667,
'token': 2944,
'token_str': 'model'},
{'sequence': "[CLS] hello i'm a modeling model. [SEP]",
'score': 0.018145186826586723,
'token': 11643,
'token_str': 'modeling'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import DistilBertTokenizer, TFDistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("The White man worked as a [MASK].")
[{'sequence': '[CLS] the white man worked as a blacksmith. [SEP]',
'score': 0.1235365942120552,
'token': 20987,
'token_str': 'blacksmith'},
{'sequence': '[CLS] the white man worked as a carpenter. [SEP]',
'score': 0.10142576694488525,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the white man worked as a farmer. [SEP]',
'score': 0.04985016956925392,
'token': 7500,
'token_str': 'farmer'},
{'sequence': '[CLS] the white man worked as a miner. [SEP]',
'score': 0.03932540491223335,
'token': 18594,
'token_str': 'miner'},
{'sequence': '[CLS] the white man worked as a butcher. [SEP]',
'score': 0.03351764753460884,
'token': 14998,
'token_str': 'butcher'}]
>>> unmasker("The Black woman worked as a [MASK].")
[{'sequence': '[CLS] the black woman worked as a waitress. [SEP]',
'score': 0.13283951580524445,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the black woman worked as a nurse. [SEP]',
'score': 0.12586183845996857,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the black woman worked as a maid. [SEP]',
'score': 0.11708822101354599,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the black woman worked as a prostitute. [SEP]',
'score': 0.11499975621700287,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the black woman worked as a housekeeper. [SEP]',
'score': 0.04722772538661957,
'token': 22583,
'token_str': 'housekeeper'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset
consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
(excluding lists, tables and headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is a random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the
two "sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
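A simplified sketch of this 80/10/10 scheme is shown below; the function name and arguments are illustrative, and a real data collator would also avoid masking special tokens such as `[CLS]` and `[SEP]`:

```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Return (input_ids, labels) following the 15% / 80-10-10 masking scheme."""
    input_ids, labels = list(token_ids), []
    for i, token_id in enumerate(token_ids):
        if random.random() < mlm_probability:
            labels.append(token_id)        # this position must be predicted
            draw = random.random()
            if draw < 0.8:                 # 80%: replace with [MASK]
                input_ids[i] = mask_token_id
            elif draw < 0.9:               # 10%: replace with a random token
                input_ids[i] = random.randrange(vocab_size)
            # remaining 10%: keep the original token unchanged
        else:
            labels.append(-100)            # ignored by the MLM loss
    return input_ids, labels
```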
### Pretraining
The model was trained on 8 16GB V100 GPUs for 90 hours. See the
[training code](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for all hyperparameters
details.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
GLUE test results:
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| | 82.2 | 88.5 | 89.2 | 91.3 | 51.3 | 85.8 | 87.5 | 59.9 |
### BibTeX entry and citation info
```bibtex
@article{Sanh2019DistilBERTAD,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
journal={ArXiv},
year={2019},
volume={abs/1910.01108}
}
```
<a href="https://huggingface.co/exbert/?model=distilbert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]} | distilbert/distilbert-base-uncased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"distilbert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.01108"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #rust #safetensors #distilbert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| DistilBERT base model (uncased)
===============================
This model is a distilled version of the BERT base model. It was
introduced in this paper. The code for the distillation process can be found
here. This model is uncased: it does
not make a difference between english and English.
Model description
-----------------
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives:
* Distillation loss: the model was trained to return the same probabilities as the BERT base model.
* Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a
sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the
model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that
usually see the words one after the other, or from autoregressive models like GPT which internally mask the future
tokens. It allows the model to learn a bidirectional representation of the sentence.
* Cosine embedding loss: the model was also trained to generate hidden states as close as possible to those of the BERT base
model.
This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference or downstream tasks.
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
the bias of its teacher model.
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
DistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset
consisting of 11,038 unpublished books and English Wikipedia
(excluding lists, tables and headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the
two "sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
* In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 16GB V100 GPUs for 90 hours. See the
training code for all hyperparameters
details.
Evaluation results
------------------
When fine-tuned on downstream tasks, this model achieves the following results:
GLUE test results:
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. It also inherits some of\nthe bias of its teacher model.\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nDistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset\nconsisting of 11,038 unpublished books and English Wikipedia\n(excluding lists, tables and headers).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 8 16 GB V100 for 90 hours. See the\ntraining code for all hyperparameters\ndetails.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #distilbert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. It also inherits some of\nthe bias of its teacher model.\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nDistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset\nconsisting of 11,038 unpublished books and English Wikipedia\n(excluding lists, tables and headers).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 8 16 GB V100 for 90 hours. See the\ntraining code for all hyperparameters\ndetails.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
text-generation | transformers |
# DistilGPT2
DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of [GPT-2](https://huggingface.co/gpt2).
## Model Details
- **Developed by:** Hugging Face
- **Model type:** Transformer-based Language Model
- **Language:** English
- **License:** Apache 2.0
- **Model Description:** DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using [knowledge distillation](#knowledge-distillation) and was designed to be a faster, lighter version of GPT-2.
- **Resources for more information:** See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including Distilled-GPT2), [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure, and this page for more about [GPT-2](https://openai.com/blog/better-language-models/).
## Uses, Limitations and Risks
#### Limitations and Risks
<details>
<summary>Click to expand</summary>
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
As the developers of GPT-2 (OpenAI) note in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md), “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.
The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example:
- [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
- [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
- [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(48)
>>> generator("The White man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': "The White man worked as a salesman at a McDonald's restaurant called Kia at the time of the"},
{'generated_text': 'The White man worked as a contractor in the Army in the late 1990s. He became a "'},
{'generated_text': 'The White man worked as a police spokesman to the US Navy in the 1930s.'}]
>>> set_seed(48)
>>> generator("The Black man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': 'The Black man worked as a shop assistant for an hour at Wal-Mart at Wal-Mart in'},
{'generated_text': 'The Black man worked as a waiter in the hotel when he was assaulted when he got out of a'},
{'generated_text': 'The Black man worked as a police spokesman four months ago...'}]
```
</details>
#### Potential Uses
Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.
The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
> - *Entertainment: Creation of games, chat bots, and amusing generations.*
Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser.
#### Out-of-scope Uses
OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
### How to Get Started with the Model
<details>
<summary>Click to expand</summary>
*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*
Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(42)
>>> generator("Hello, I’m a language model", max_length=20, num_return_sequences=5)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
[{'generated_text': "Hello, I'm a language model, I'm a language model. In my previous post I've"},
{'generated_text': "Hello, I'm a language model, and I'd love to hear what you think about it."},
{'generated_text': "Hello, I'm a language model, but I don't get much of a connection anymore, so"},
{'generated_text': "Hello, I'm a language model, a functional language... It's not an example, and that"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = GPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
And in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = TFGPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
</details>
## Training Data
DistilGPT2 was trained using [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the [OpenWebTextCorpus Dataset Card](https://huggingface.co/datasets/openwebtext) for additional information about OpenWebTextCorpus and [Radford et al. (2019)](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) for additional information about WebText.
## Training Procedure
The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108).
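For example, the byte-level BPE tokenizer can be inspected directly; the token split shown in the comment is indicative of what GPT-2's tokenizer typically produces ('Ġ' encodes a preceding space):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
print(tokenizer.tokenize("Hello world, this is a test."))
# e.g. ['Hello', 'Ġworld', ',', 'Ġthis', 'Ġis', 'Ġa', 'Ġtest', '.']
```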
## Evaluation Results
The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) that, on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).
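Perplexity of this kind can be approximated with a short script such as the one below (a minimal sketch over a single text chunk, not the full sliding-window evaluation behind the reported numbers):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = GPT2LMHeadModel.from_pretrained('distilgpt2')
model.eval()

text = "Replace me by any long evaluation text you'd like."
encodings = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    # Passing input_ids as labels makes the model compute the average
    # next-token cross-entropy over the sequence (labels are shifted internally).
    outputs = model(**encodings, labels=encodings['input_ids'])

print(f"Perplexity: {torch.exp(outputs.loss).item():.2f}")
```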
## Environmental Impact
*Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- **Hardware Type:** 8 16GB V100
- **Hours used:** 168 (1 week)
- **Cloud Provider:** Azure
- **Compute Region:** unavailable, assumed East US for calculations
- **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2
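As a rough illustration of the formula above, the reported figure can be approximated with back-of-the-envelope numbers. The ~300 W per-GPU power draw and the 0.37 kg CO2eq/kWh grid intensity below are assumptions made for this sketch, not values taken from the original calculation:

```python
# Back-of-the-envelope version of: power consumption x time x grid carbon intensity.
n_gpus = 8
gpu_power_kw = 0.300          # assumed ~300 W per 16GB V100
hours = 168                   # 1 week
kg_co2_per_kwh = 0.37         # assumed carbon intensity for the East US grid

energy_kwh = n_gpus * gpu_power_kw * hours    # ~403 kWh
emissions_kg = energy_kwh * kg_co2_per_kwh    # ~149 kg CO2eq
print(round(energy_kwh, 1), round(emissions_kg, 1))
```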
## Citation
```bibtex
@inproceedings{sanh2019distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
booktitle={NeurIPS EMC^2 Workshop},
year={2019}
}
```
## Glossary
- <a name="knowledge-distillation">**Knowledge Distillation**</a>: As described in [Sanh et al. (2019)](https://arxiv.org/pdf/1910.01108.pdf), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see [Bucila et al. (2006)](https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf) and [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531).
<a href="https://huggingface.co/exbert/?model=distilgpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["openwebtext"], "co2_eq_emissions": 149200, "model-index": [{"name": "distilgpt2", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "WikiText-103", "type": "wikitext"}, "metrics": [{"type": "perplexity", "value": 21.1, "name": "Perplexity"}]}]}]} | distilbert/distilgpt2 | null | [
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"coreml",
"safetensors",
"gpt2",
"text-generation",
"exbert",
"en",
"dataset:openwebtext",
"arxiv:1910.01108",
"arxiv:2201.08542",
"arxiv:2203.12574",
"arxiv:1910.09700",
"arxiv:1503.02531",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.01108",
"2201.08542",
"2203.12574",
"1910.09700",
"1503.02531"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #tflite #rust #coreml #safetensors #gpt2 #text-generation #exbert #en #dataset-openwebtext #arxiv-1910.01108 #arxiv-2201.08542 #arxiv-2203.12574 #arxiv-1910.09700 #arxiv-1503.02531 #license-apache-2.0 #model-index #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# DistilGPT2
DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2.
## Model Details
- Developed by: Hugging Face
- Model type: Transformer-based Language Model
- Language: English
- License: Apache 2.0
- Model Description: DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.
- Resources for more information: See this repository for more about Distil\* (a class of compressed models including Distilled-GPT2), Sanh et al. (2019) for more information about knowledge distillation and the training procedure, and this page for more about GPT-2.
## Uses, Limitations and Risks
#### Limitations and Risks
<details>
<summary>Click to expand</summary>
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
As the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.
The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example:
- Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
- Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
- Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
</details>
#### Potential Uses
Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.
The developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
> - *Entertainment: Creation of games, chat bots, and amusing generations.*
Using DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.
#### Out-of-scope Uses
OpenAI states in the GPT-2 model card:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
### How to Get Started with the Model
<details>
<summary>Click to expand</summary>
*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*
Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
And in TensorFlow:
</details>
## Training Data
DistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.
## Training Procedure
The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).
## Evaluation Results
The creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).
## Environmental Impact
*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- Hardware Type: 8 16GB V100
- Hours used: 168 (1 week)
- Cloud Provider: Azure
- Compute Region: unavailable, assumed East US for calculations
- Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2
## Glossary
- <a name="knowledge-distillation">Knowledge Distillation</a>: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. (2015).
<a href="URL
<img width="300px" src="URL
</a>
| [
"# DistilGPT2\n\nDistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2.",
"## Model Details\n\n- Developed by: Hugging Face\n- Model type: Transformer-based Language Model\n- Language: English\n- License: Apache 2.0\n- Model Description: DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.\n- Resources for more information: See this repository for more about Distil\\* (a class of compressed models including Distilled-GPT2), Sanh et al. (2019) for more information about knowledge distillation and the training procedure, and this page for more about GPT-2.",
"## Uses, Limitations and Risks",
"#### Limitations and Risks\n\n<details>\n<summary>Click to expand</summary>\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nAs the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). \n\nDistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.\n\nThe impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example: \n\n- Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.\n- Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias). \n- Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2. \n\n\n\n</details>",
"#### Potential Uses\n\nSince DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. \n\nThe developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: \n\n> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*\n> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*\n> - *Entertainment: Creation of games, chat bots, and amusing generations.*\n\nUsing DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.",
"#### Out-of-scope Uses\n\nOpenAI states in the GPT-2 model card: \n\n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n>\n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.",
"### How to Get Started with the Model \n\n<details>\n<summary>Click to expand</summary>\n\n*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*\n\nUsing DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:\n\n \n \nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nAnd in TensorFlow:\n\n\n\n</details>",
"## Training Data\n\nDistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.",
"## Training Procedure\n\nThe texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).",
"## Evaluation Results\n\nThe creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).",
"## Environmental Impact\n\n*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*\n\n- Hardware Type: 8 16GB V100\n- Hours used: 168 (1 week)\n- Cloud Provider: Azure\n- Compute Region: unavailable, assumed East US for calculations\n- Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2",
"## Glossary\n\n-\t<a name=\"knowledge-distillation\">Knowledge Distillation</a>: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. (2015).\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #tf #jax #tflite #rust #coreml #safetensors #gpt2 #text-generation #exbert #en #dataset-openwebtext #arxiv-1910.01108 #arxiv-2201.08542 #arxiv-2203.12574 #arxiv-1910.09700 #arxiv-1503.02531 #license-apache-2.0 #model-index #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# DistilGPT2\n\nDistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2.",
"## Model Details\n\n- Developed by: Hugging Face\n- Model type: Transformer-based Language Model\n- Language: English\n- License: Apache 2.0\n- Model Description: DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.\n- Resources for more information: See this repository for more about Distil\\* (a class of compressed models including Distilled-GPT2), Sanh et al. (2019) for more information about knowledge distillation and the training procedure, and this page for more about GPT-2.",
"## Uses, Limitations and Risks",
"#### Limitations and Risks\n\n<details>\n<summary>Click to expand</summary>\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nAs the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). \n\nDistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.\n\nThe impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example: \n\n- Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.\n- Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias). \n- Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2. \n\n\n\n</details>",
"#### Potential Uses\n\nSince DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. \n\nThe developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: \n\n> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*\n> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*\n> - *Entertainment: Creation of games, chat bots, and amusing generations.*\n\nUsing DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.",
"#### Out-of-scope Uses\n\nOpenAI states in the GPT-2 model card: \n\n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n>\n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.",
"### How to Get Started with the Model \n\n<details>\n<summary>Click to expand</summary>\n\n*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*\n\nUsing DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:\n\n \n \nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nAnd in TensorFlow:\n\n\n\n</details>",
"## Training Data\n\nDistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.",
"## Training Procedure\n\nThe texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).",
"## Evaluation Results\n\nThe creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).",
"## Environmental Impact\n\n*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*\n\n- Hardware Type: 8 16GB V100\n- Hours used: 168 (1 week)\n- Cloud Provider: Azure\n- Compute Region: unavailable, assumed East US for calculations\n- Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2",
"## Glossary\n\n-\t<a name=\"knowledge-distillation\">Knowledge Distillation</a>: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. (2015).\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
fill-mask | transformers |
# Model Card for DistilRoBERTa base
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
This model is a distilled version of the [RoBERTa-base model](https://huggingface.co/roberta-base). It follows the same training procedure as [DistilBERT](https://huggingface.co/distilbert-base-uncased).
The code for the distillation process can be found [here](https://github.com/huggingface/transformers/tree/master/examples/distillation).
This model is case-sensitive: it makes a difference between english and English.
The model has 6 layers, a hidden dimension of 768, and 12 attention heads, totaling 82M parameters (compared to 125M parameters for RoBERTa-base).
On average, DistilRoBERTa is twice as fast as RoBERTa-base.
We encourage users of this model card to check out the [RoBERTa-base model card](https://huggingface.co/roberta-base) to learn more about usage, limitations and potential biases.
- **Developed by:** Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf (Hugging Face)
- **Model type:** Transformer-based language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [RoBERTa-base model card](https://huggingface.co/roberta-base)
- **Resources for more information:**
- [GitHub Repository](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md)
- [Associated Paper](https://arxiv.org/abs/1910.01108)
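The size figures above can be checked directly with a quick sketch (assuming the `transformers` library is installed):

```python
from transformers import RobertaModel

model = RobertaModel.from_pretrained('distilroberta-base')
print(model.config.num_hidden_layers,
      model.config.hidden_size,
      model.config.num_attention_heads)   # 6 768 12
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")  # ~82M
```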
# Uses
## Direct Use and Downstream Use
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=roberta) to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT-2.
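For instance, a sequence classification head can be attached for fine-tuning as in the minimal sketch below (the dataset, labels and training loop are left out, and the classification head is randomly initialized until fine-tuned):

```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained('distilroberta-base')
model = RobertaForSequenceClassification.from_pretrained('distilroberta-base', num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors='pt')
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])
```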
## Out of Scope Use
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to produce factual or true representations of people or events, and therefore using the model to generate such content is out of scope for its abilities.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilroberta-base')
>>> unmasker("The man worked as a <mask>.")
[{'score': 0.1237526461482048,
'sequence': 'The man worked as a waiter.',
'token': 38233,
'token_str': ' waiter'},
{'score': 0.08968018740415573,
'sequence': 'The man worked as a waitress.',
'token': 35698,
'token_str': ' waitress'},
{'score': 0.08387645334005356,
'sequence': 'The man worked as a bartender.',
'token': 33080,
'token_str': ' bartender'},
{'score': 0.061059024184942245,
'sequence': 'The man worked as a mechanic.',
'token': 25682,
'token_str': ' mechanic'},
{'score': 0.03804653510451317,
'sequence': 'The man worked as a courier.',
'token': 37171,
'token_str': ' courier'}]
>>> unmasker("The woman worked as a <mask>.")
[{'score': 0.23149248957633972,
'sequence': 'The woman worked as a waitress.',
'token': 35698,
'token_str': ' waitress'},
{'score': 0.07563332468271255,
'sequence': 'The woman worked as a waiter.',
'token': 38233,
'token_str': ' waiter'},
{'score': 0.06983394920825958,
'sequence': 'The woman worked as a bartender.',
'token': 33080,
'token_str': ' bartender'},
{'score': 0.05411609262228012,
'sequence': 'The woman worked as a nurse.',
'token': 9008,
'token_str': ' nurse'},
{'score': 0.04995106905698776,
'sequence': 'The woman worked as a maid.',
'token': 29754,
'token_str': ' maid'}]
```
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training Details
DistilRoBERTa was pre-trained on [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), a reproduction of OpenAI's WebText dataset (it is ~4 times less training data than the teacher RoBERTa). See the [roberta-base model card](https://huggingface.co/roberta-base/blob/main/README.md) for further details on training.
# Evaluation
When fine-tuned on downstream tasks, this model achieves the following results (see [GitHub Repo](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md)):
GLUE test results:
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| | 84.0 | 89.4 | 90.8 | 92.5 | 59.3 | 88.3 | 86.6 | 67.9 |
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
```bibtex
@article{Sanh2019DistilBERTAD,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
journal={ArXiv},
year={2019},
volume={abs/1910.01108}
}
```
APA
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
# How to Get Started With the Model
You can use the model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilroberta-base')
>>> unmasker("Hello I'm a <mask> model.")
[{'score': 0.04673689603805542,
'sequence': "Hello I'm a business model.",
'token': 265,
'token_str': ' business'},
{'score': 0.03846118599176407,
'sequence': "Hello I'm a freelance model.",
'token': 18150,
'token_str': ' freelance'},
{'score': 0.03308931365609169,
'sequence': "Hello I'm a fashion model.",
'token': 2734,
'token_str': ' fashion'},
{'score': 0.03018997237086296,
'sequence': "Hello I'm a role model.",
'token': 774,
'token_str': ' role'},
{'score': 0.02111748233437538,
'sequence': "Hello I'm a Playboy model.",
'token': 24526,
'token_str': ' Playboy'}]
```
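Mirroring the pattern used for the other checkpoints, here is a minimal sketch (not part of the original card) of using this model as a feature extractor in PyTorch:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModel.from_pretrained("distilroberta-base")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
# output.last_hidden_state contains one 768-dimensional vector per input token.
```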
<a href="https://huggingface.co/exbert/?model=distilroberta-base">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["openwebtext"]} | distilbert/distilroberta-base | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"roberta",
"fill-mask",
"exbert",
"en",
"dataset:openwebtext",
"arxiv:1910.01108",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.01108",
"1910.09700"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #rust #safetensors #roberta #fill-mask #exbert #en #dataset-openwebtext #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| Model Card for DistilRoBERTa base
=================================
Table of Contents
=================
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training Details
5. Evaluation
6. Environmental Impact
7. Citation
8. How To Get Started With the Model
Model Details
=============
Model Description
-----------------
This model is a distilled version of the RoBERTa-base model. It follows the same training procedure as DistilBERT.
The code for the distillation process can be found here.
This model is case-sensitive: it makes a difference between english and English.
The model has 6 layers, a hidden dimension of 768 and 12 attention heads, for a total of 82M parameters (compared to 125M parameters for RoBERTa-base).
On average, DistilRoBERTa is twice as fast as RoBERTa-base.
We encourage users of this model card to check out the RoBERTa-base model card to learn more about usage, limitations and potential biases.
* Developed by: Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf (Hugging Face)
* Model type: Transformer-based language model
* Language(s) (NLP): English
* License: Apache 2.0
* Related Models: RoBERTa-base model card
* Resources for more information:
+ GitHub Repository
+ Associated Paper
Uses
====
Direct Use and Downstream Use
-----------------------------
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at a model like GPT2.
Out of Scope Use
----------------
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
Bias, Risks, and Limitations
============================
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
Recommendations
---------------
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training Details
================
DistilRoBERTa was pre-trained on OpenWebTextCorpus, a reproduction of OpenAI's WebText dataset (it is ~4 times less training data than the teacher RoBERTa). See the roberta-base model card for further details on training.
Evaluation
==========
When fine-tuned on downstream tasks, this model achieves the following results (see GitHub Repo):
GLUE test results:
Environmental Impact
====================
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: More information needed
* Hours used: More information needed
* Cloud Provider: More information needed
* Compute Region: More information needed
* Carbon Emitted: More information needed
APA
* Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
How to Get Started With the Model
=================================
You can use the model directly with a pipeline for masked language modeling:
<a href="URL
<img width="300px" src="URL
| [] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #roberta #fill-mask #exbert #en #dataset-openwebtext #arxiv-1910.01108 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-generation | transformers |
# GPT-2 Large
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-author)
## Model Details
**Model Description:** GPT-2 Large is the **774M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-2 Medium](https://huggingface.co/gpt2-medium) and [GPT-2 XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I can do language modeling. In fact, this is one of the reasons I use languages. To get a"},
{'generated_text': "Hello, I'm a language model, which in its turn implements a model of how a human can reason about a language, and is in turn an"},
{'generated_text': "Hello, I'm a language model, why does this matter for you?\n\nWhen I hear new languages, I tend to start thinking in terms"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI don't need to know anything else. If I want to understand about how"},
{'generated_text': "Hello, I'm a language model, not a toolbox.\n\nIn a nutshell, a language model is a set of attributes that define how"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = TFGPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a security guard in a hotel'},
{'generated_text': 'The man worked as a salesman in Mexico and in'},
{'generated_text': 'The man worked as a supervisor at the warehouse for'},
{'generated_text': "The man worked as a cleaner for the store's"},
{'generated_text': 'The man worked as a barbershop apprentice.'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a clerk at the bank.'},
{'generated_text': 'The woman worked as a caregiver, and her'},
{'generated_text': 'The woman worked as a customer service agent for a'},
{'generated_text': 'The woman worked as a cleaner at the store,'},
{'generated_text': 'The woman worked as a barista and was "'}]
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs in at
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i` and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
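As a small illustration of the causal set-up described above (this sketch is not from the original card), the logits returned at position `i` score the token at position `i + 1`, so a greedy next-token guess can be read off the last position:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, 50257)

# The prediction at the last position only depends on the tokens seen so far.
next_token_id = logits[0, -1].argmax()
print(tokenizer.decode(next_token_id))  # a plausible continuation such as " Paris" (not guaranteed)
```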
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string `<UNK>` which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
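The quantity described above can be approximated for a single piece of text with a short sketch like the following (an illustration only; the paper's invertible de-tokenizers and dataset-specific canonical units are not reproduced here):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")
model.eval()

input_ids = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")["input_ids"]

with torch.no_grad():
    # Passing the input ids as labels yields the mean negative log-likelihood per token.
    loss = model(input_ids, labels=input_ids).loss

print(torch.exp(loss))  # per-token perplexity of this text under the model
```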
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 10.87 | 60.12 | 93.45 | 88.0 | 19.93 | 40.31 | 0.97 | 1.02 | 22.05 | 44.575|
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team. | {"language": "en", "license": "mit"} | openai-community/gpt2-large | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"en",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.09700"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #rust #onnx #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| GPT-2 Large
===========
Table of Contents
-----------------
* Model Details
* How To Get Started With the Model
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Environmental Impact
* Technical Specifications
* Citation Information
* Model Card Authors
Model Details
-------------
Model Description: GPT-2 Large is the 774M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
* Developed by: OpenAI, see associated research paper and GitHub repo for model developers.
* Model Type: Transformer-based language model
* Language(s): English
* License: Modified MIT License
* Related Models: GPT-2, GPT-Medium and GPT-XL
* Resources for more information:
+ Research Paper
+ OpenAI Blog Post
+ GitHub Repo
+ OpenAI Model Card for GPT-2
+ Test the full generation capabilities here: URL
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Uses
----
#### Direct Use
In their model card about GPT-2, OpenAI wrote:
>
> The primary intended users of these models are AI researchers and practitioners.
>
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
>
>
>
#### Downstream Use
In their model card about GPT-2, OpenAI wrote:
>
> Here are some secondary use cases we believe are likely:
>
>
> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> * Entertainment: Creation of games, chat bots, and amusing generations.
>
>
>
#### Misuse and Out-of-scope Use
In their model card about GPT-2, OpenAI wrote:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training
--------
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
Evaluation
----------
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model authors write in the associated paper that:
>
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
>
>
>
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: Unknown
* Hours used: Unknown
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.
Model Card Authors
------------------
This model card was written by the Hugging Face team.
| [
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #onnx #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] |
text-generation | transformers |
# GPT-2 Medium
## Model Details
**Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"},
{'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"},
{'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = TFGPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a security guard in a military'},
{'generated_text': 'The man worked as a salesman in Mexico and eventually'},
{'generated_text': 'The man worked as a supervisor at the department for'},
{'generated_text': 'The man worked as a cleaner for the same corporation'},
{'generated_text': 'The man worked as a barman and was involved'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a social worker in a children'},
{'generated_text': 'The woman worked as a marketing manager, and her'},
{'generated_text': 'The woman worked as a customer service agent in a'},
{'generated_text': 'The woman worked as a cleaner for the same corporation'},
{'generated_text': 'The woman worked as a barista and was involved'}]
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs in at
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i` and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
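To make the tokenization scheme above concrete, here is a brief sketch (not taken from the original card) showing how the byte-level BPE tokenizer splits text into sub-word pieces; the `Ġ` symbol marks a leading space:
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")

print(tokenizer.tokenize("Hello, transformers!"))  # sub-word pieces, e.g. ['Hello', ',', 'Ġtransform', 'ers', '!']
print(tokenizer.vocab_size)                        # 50257
print(tokenizer.model_max_length)                  # 1024, the pretraining context length
```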
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string `<UNK>` which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
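As a rough, simplified illustration of this kind of zero-shot evaluation (not the paper's exact protocol), one can check whether the model's greedy next-token guess recovers the held-out final word of a passage, in the spirit of LAMBADA:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
model.eval()

context = "She poured the coffee into her cup and stirred it with a"
target = " spoon"  # held-out final word, including its leading space

inputs = tokenizer(context, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

prediction = tokenizer.decode(logits[0, -1].argmax())
print(prediction, prediction == target)  # counts as correct only if the greedy guess matches exactly
```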
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team. | {"language": "en", "license": "mit"} | openai-community/gpt2-medium | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"en",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.09700"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #rust #onnx #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| GPT-2 Medium
============
Model Details
-------------
Model Description: GPT-2 Medium is the 355M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
* Developed by: OpenAI, see associated research paper and GitHub repo for model developers.
* Model Type: Transformer-based language model
* Language(s): English
* License: Modified MIT License
* Related Models: GPT2, GPT2-Large and GPT2-XL
* Resources for more information:
+ Research Paper
+ OpenAI Blog Post
+ GitHub Repo
+ OpenAI Model Card for GPT-2
+ Test the full generation capabilities here: URL
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Uses
----
#### Direct Use
In their model card about GPT-2, OpenAI wrote:
>
> The primary intended users of these models are AI researchers and practitioners.
>
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
>
>
>
#### Downstream Use
In their model card about GPT-2, OpenAI wrote:
>
> Here are some secondary use cases we believe are likely:
>
>
> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> * Entertainment: Creation of games, chat bots, and amusing generations.
>
>
>
#### Misuse and Out-of-scope Use
In their model card about GPT-2, OpenAI wrote:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training
--------
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
Evaluation
----------
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model authors write in the associated paper that:
>
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
>
>
>
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: Unknown
* Hours used: Unknown
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.
Model Card Authors
------------------
This model card was written by the Hugging Face team.
| [
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #onnx #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: Unknown\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] |
text-generation | transformers |
# GPT-2 XL
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** GPT-2 XL is the **1.5B parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. It is pretrained on English text using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-Large](https://huggingface.co/gpt2-large)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- [OpenAI GPT-2 1.5B Release Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = GPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = TFGPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
#### Biases
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("The man worked as a", max_length=10, num_return_sequences=5)
set_seed(42)
generator("The woman worked as a", max_length=10, num_return_sequences=5)
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
When they released the 1.5B parameter model, OpenAI wrote in a [blog post](https://openai.com/blog/gpt-2-1-5b-release/):
> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.
The blog post further discusses the risks, limitations, and biases of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
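As an illustration of the shifted-target setup described above, here is a minimal, hedged sketch (not OpenAI's original training code) that tokenizes a short prompt and prints each input token alongside the token the model is trained to predict next:

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')

ids = tokenizer("Hello, I'm a language model,")['input_ids']

# Causal LM pairs: the representation at position i is only allowed to see
# tokens 1..i and is trained to predict the token at position i+1.
inputs, targets = ids[:-1], ids[1:]
for inp, tgt in zip(inputs, targets):
    print(f"{tokenizer.decode([inp])!r:>12} -> {tokenizer.decode([tgt])!r}")
```

In the `transformers` implementation, passing `labels=input_ids` to `GPT2LMHeadModel` applies this one-token shift internally when computing the loss.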
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
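The sketch below (illustrative chunking code, not the original preprocessing pipeline) loads the released tokenizer, checks the vocabulary size, and splits a longer tokenized text into 1024-token blocks:

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('gpt2-xl')
print(tokenizer.vocab_size)  # 50257

long_text = "Replace me by any text you'd like. " * 500
ids = tokenizer(long_text)['input_ids']

# Training inputs are windows of 1024 consecutive tokens.
block_size = 1024
blocks = [ids[i:i + block_size] for i in range(0, len(ids), block_size)]
print(f"{len(ids)} tokens -> {len(blocks)} blocks of up to {block_size} tokens")
```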
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
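As a rough sketch of the quantity being reported (not the paper's exact evaluation pipeline, which additionally applies the invertible de-tokenizers mentioned above), the average negative log-probability per token and per byte of a text can be computed as follows; swapping in the smaller `gpt2` checkpoint works the same way if loading the XL model is impractical:

```python
import math
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = GPT2LMHeadModel.from_pretrained('gpt2-xl')
model.eval()

text = "Replace me by any text you'd like."
enc = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    # With labels=input_ids, the returned loss is the average next-token
    # negative log-likelihood (in nats) over the predicted positions.
    nll = model(**enc, labels=enc['input_ids']).loss.item()

n_tokens = enc['input_ids'].numel()
n_predicted = n_tokens - 1                    # the first token is never predicted
total_nll = nll * n_predicted

print("perplexity per token:", math.exp(nll))
print("bits per byte (BPB): ", total_nll / math.log(2) / len(text.encode('utf-8')))
```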
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 8.63 | 63.24 | 93.30 | 89.05 | 18.34 | 35.76 | 0.93 | 0.98 | 17.48 | 42.16 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type and hours used are based on information provided by one of the model authors on [Reddit](https://bit.ly/2Tw1x4L).
- **Hardware Type:** 32 TPUv3 chips
- **Hours used:** 168
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
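As an illustration only, the calculator's estimate boils down to multiplying energy use by a grid carbon-intensity factor. The sketch below plugs in the reported 32 TPUv3 chips and 168 hours; the per-chip power draw, PUE, and carbon intensity are placeholder assumptions, not measured or published values:

```python
# Back-of-the-envelope estimate in the spirit of Lacoste et al. (2019).
# Only the chip count and hours come from this model card; every other
# number below is an assumed placeholder and should be replaced with real data.
n_chips = 32                # TPUv3 chips (reported)
hours = 168                 # training time in hours (reported)
power_per_chip_kw = 0.2     # ASSUMPTION: average power draw per chip, in kW
pue = 1.1                   # ASSUMPTION: datacenter power usage effectiveness
carbon_kg_per_kwh = 0.4     # ASSUMPTION: grid carbon intensity, kg CO2eq/kWh

energy_kwh = n_chips * hours * power_per_chip_kw * pue
emissions_kg = energy_kwh * carbon_kg_per_kwh
print(f"~{energy_kwh:.0f} kWh -> ~{emissions_kg:.0f} kg CO2eq (illustrative only)")
```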
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team. | {"language": "en", "license": "mit"} | openai-community/gpt2-xl | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"gpt2",
"text-generation",
"en",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.09700"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #rust #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| GPT-2 XL
========
Table of Contents
-----------------
* Model Details
* How To Get Started With the Model
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Environmental Impact
* Technical Specifications
* Citation Information
* Model Card Authors
Model Details
-------------
Model Description: GPT-2 XL is the 1.5B parameter version of GPT-2, a transformer-based language model created and released by OpenAI. It is pretrained on English text using a causal language modeling (CLM) objective.
* Developed by: OpenAI, see associated research paper and GitHub repo for model developers.
* Model Type: Transformer-based language model
* Language(s): English
* License: Modified MIT License
* Related Models: GPT-2, GPT-Medium and GPT-Large
* Resources for more information:
+ Research Paper
+ OpenAI Blog Post
+ GitHub Repo
+ OpenAI Model Card for GPT-2
+ OpenAI GPT-2 1.5B Release Blog Post
+ Test the full generation capabilities here: URL
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Uses
----
#### Direct Use
In their model card about GPT-2, OpenAI wrote:
>
> The primary intended users of these models are AI researchers and practitioners.
>
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
>
>
>
#### Downstream Use
In their model card about GPT-2, OpenAI wrote:
>
> Here are some secondary use cases we believe are likely:
>
>
> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> * Entertainment: Creation of games, chat bots, and amusing generations.
>
>
>
#### Misuse and Out-of-scope Use
In their model card about GPT-2, OpenAI wrote:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
#### Biases
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
When they released the 1.5B parameter model, OpenAI wrote in a blog post:
>
> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.
>
>
>
The blog post further discusses the risks, limitations, and biases of the model.
Training
--------
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token 'i' only use the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
Evaluation
----------
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model authors write in the associated paper that:
>
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
>
>
>
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit.
* Hardware Type: 32 TPUv3 chips
* Hours used: 168
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, and training details.
Model Card Authors
------------------
This model card was written by the Hugging Face team.
| [
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.",
"#### Biases\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\n\nWhen they released the 1.5B parameter model, OpenAI wrote in a blog post:\n\n\n\n> \n> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.\n> \n> \n> \n\n\nThe blog post further discusses the risks, limitations, and biases of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit.\n\n\n* Hardware Type: 32 TPUv3 chips\n* Hours used: 168\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #gpt2 #text-generation #en #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"#### Direct Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> The primary intended users of these models are AI researchers and practitioners.\n> \n> \n> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.\n> \n> \n>",
"#### Downstream Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Here are some secondary use cases we believe are likely:\n> \n> \n> * Writing assistance: Grammar assistance, autocompletion (for normal prose or code)\n> * Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.\n> * Entertainment: Creation of games, chat bots, and amusing generations.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nIn their model card about GPT-2, OpenAI wrote:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.",
"#### Biases\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\n\nWhen they released the 1.5B parameter model, OpenAI wrote in a blog post:\n\n\n\n> \n> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.\n> \n> \n> \n\n\nThe blog post further discusses the risks, limitations, and biases of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.",
"#### Training Procedure\n\n\nThe model is pretrained on a very large corpus of English data in a self-supervised fashion. This\nmeans it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots\nof publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,\nit was trained to guess the next word in sentences.\n\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,\nshifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the\npredictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.\n\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks.\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\n\nThe model authors write in the associated paper that:\n\n\n\n> \n> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.\n> \n> \n>",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit.\n\n\n* Hardware Type: 32 TPUv3 chips\n* Hours used: 168\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] |
text-generation | transformers |
# GPT-2
Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating texts from a
prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
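For finer control over decoding than the pipeline defaults, the following sketch uses `model.generate` directly; the sampling settings (`top_k`, `top_p`, `temperature`) are illustrative choices, not recommended values:

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

torch.manual_seed(42)
inputs = tokenizer("Hello, I'm a language model,", return_tensors='pt')

# Sample three continuations with top-k / nucleus sampling.
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_length=30,
    top_k=50,
    top_p=0.95,
    temperature=0.9,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token; reuse EOS
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```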
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
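To relate the metrics in the table, the standard conversions are sketched below, assuming the average negative log-likelihood (NLL) is measured in nats per the relevant unit; this convention is an assumption of the sketch, not something stated in the card:

```python
import math

def perplexity(nll_nats_per_unit):
    # PPL = exp(average NLL per word or token)
    return math.exp(nll_nats_per_unit)

def bits_per_unit(nll_nats_per_unit):
    # BPC / BPB = average NLL per character or byte, expressed in bits
    return nll_nats_per_unit / math.log(2)

print(round(bits_per_unit(0.68), 2))  # ~0.98 bits/char, cf. the text8 column
print(round(perplexity(3.52), 1))     # ~33.8 perplexity, for illustration only
```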
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "mit", "tags": ["exbert"]} | openai-community/gpt2 | null | [
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"exbert",
"en",
"doi:10.57967/hf/0039",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #tflite #rust #onnx #safetensors #gpt2 #text-generation #exbert #en #doi-10.57967/hf/0039 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| GPT-2
=====
Test the whole generation capabilities here: URL
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
this paper
and first released at this page.
Disclaimer: The team releasing GPT-2 also wrote a
model card for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
Model description
-----------------
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
This is the smallest version of GPT-2, with 124M parameters.
Related Models: GPT-Large, GPT-Medium and GPT-XL
Intended uses & limitations
---------------------------
You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
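A minimal sketch covering both usages (seeded generation with a pipeline, and feature extraction in PyTorch); the prompt and input text are arbitrary:

```python
from transformers import pipeline, set_seed, GPT2Tokenizer, GPT2Model

# Seeded text generation
set_seed(42)
generator = pipeline("text-generation", model="gpt2")
print(generator("Hello, I'm a language model,", max_length=30, num_return_sequences=2))

# Extracting the hidden states of a given text in PyTorch
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```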
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
model card:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Here's an example of how the model can have biased predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
Training procedure
------------------
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
Evaluation results
------------------
The model achieves the following results without any fine-tuning (zero-shot):
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we\nset a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of\nunfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their\nmodel card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases\n> that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do\n> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a\n> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,\n> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar\n> levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nHere's an example of how the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nThe larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact\ndetails of training.\n\n\nEvaluation results\n------------------\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] | [
"TAGS\n#transformers #pytorch #tf #jax #tflite #rust #onnx #safetensors #gpt2 #text-generation #exbert #en #doi-10.57967/hf/0039 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we\nset a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of\nunfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their\nmodel card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases\n> that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do\n> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a\n> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,\n> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar\n> levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nHere's an example of how the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nThe larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact\ndetails of training.\n\n\nEvaluation results\n------------------\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
text-generation | transformers |
# OpenAI GPT 1
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** `openai-gpt` (a.k.a. "GPT-1") is the first transformer-based language model created and released by OpenAI. The model is a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long range dependencies.
- **Developed by:** Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever. See [associated research paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) and [GitHub repo](https://github.com/openai/finetune-transformer-lm) for model developers and contributors.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [MIT License](https://github.com/openai/finetune-transformer-lm/blob/master/LICENSE)
- **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Medium](https://huggingface.co/gpt2-medium), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf)
- [OpenAI Blog Post](https://openai.com/blog/language-unsupervised/)
- [GitHub Repo](https://github.com/openai/finetune-transformer-lm)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='openai-gpt')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model,'he said, when i was finished.'ah well,'said the man,'that's"},
{'generated_text': 'Hello, I\'m a language model, " she said. \n she reached the bottom of the shaft and leaned a little further out. it was'},
{'generated_text': 'Hello, I\'m a language model, " she laughed. " we call that a\'white girl.\'or as we are called by the'},
{'generated_text': 'Hello, I\'m a language model, " said mr pin. " an\'the ones with the funny hats don\'t. " the rest of'},
{'generated_text': 'Hello, I\'m a language model, was\'ere \'bout to do some more dancin \', " he said, then his voice lowered to'}]
```
Here is how to use this model in PyTorch:
```python
from transformers import OpenAIGPTTokenizer, OpenAIGPTModel
import torch
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTModel.from_pretrained("openai-gpt")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
and in TensorFlow:
```python
from transformers import OpenAIGPTTokenizer, TFOpenAIGPTModel
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = TFOpenAIGPTModel.from_pretrained("openai-gpt")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
```
## Uses
#### Direct Use
This model can be used for language modeling tasks.
#### Downstream Use
Potential downstream uses of this model include tasks that leverage language models. In the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf), the model developers discuss evaluations of the model for tasks including natural language inference (NLI), question answering, semantic similarity, and text classification.
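As a hedged sketch of such a downstream setup (not the exact task formatting used in the paper), the snippet below adds a randomly initialized sequence-classification head on top of the pretrained transformer, which is the usual Transformers starting point for fine-tuning on text classification; the label count and example sentence are placeholders.

```python
from transformers import OpenAIGPTTokenizer, OpenAIGPTForSequenceClassification

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
# A randomly initialized classification head is added on top of the pretrained transformer
model = OpenAIGPTForSequenceClassification.from_pretrained("openai-gpt", num_labels=2)

inputs = tokenizer("a gripping, beautifully shot adventure story", return_tensors="pt")
logits = model(**inputs).logits  # shape (1, 2); only meaningful after fine-tuning
```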
#### Misuse and Out-of-scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
#### Biases
**CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Predictions generated by this model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='openai-gpt')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a teacher for the college he'},
{'generated_text': 'The man worked as a janitor at the club.'},
{'generated_text': 'The man worked as a bodyguard in america. the'},
{'generated_text': 'The man worked as a clerk for one of the'},
{'generated_text': 'The man worked as a nurse, but there was'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a medical intern but is a'},
{'generated_text': 'The woman worked as a midwife, i know that'},
{'generated_text': 'The woman worked as a prostitute in a sex club'},
{'generated_text': 'The woman worked as a secretary for one of the'},
{'generated_text': 'The woman worked as a nurse, but she had'}]
```
This bias may also affect fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
The model developers also wrote in a [blog post](https://openai.com/blog/language-unsupervised/) about risks and limitations of the model, including:
> - **Compute Requirements:** Many previous approaches to NLP tasks train relatively small models on a single GPU from scratch. Our approach requires an expensive pre-training step - 1 month on 8 GPUs. Luckily, this only has to be done once and we’re releasing our model so others can avoid it. It is also a large model (in comparison to prior work) and consequently uses more compute and memory — we used a 37-layer (12 block) Transformer architecture, and we train on sequences of up to 512 tokens. Most experiments were conducted on 4 and 8 GPU systems. The model does fine-tune to new tasks very quickly which helps mitigate the additional resource requirements.
> - **The limits and bias of learning about the world through text:** Books and text readily available on the internet do not contain complete or even accurate information about the world. Recent work ([Lucy and Gauthier, 2017](https://arxiv.org/abs/1705.11168)) has shown that certain kinds of information are difficult to learn via just text and other work ([Gururangan et al., 2018](https://arxiv.org/abs/1803.02324)) has shown that models learn and exploit biases in data distributions.
> - **Still brittle generalization:** Although our approach improves performance across a broad range of tasks, current deep learning NLP models still exhibit surprising and counterintuitive behavior - especially when evaluated in a systematic, adversarial, or out-of-distribution way. Our approach is not immune to these issues, though we have observed some indications of progress. Our approach shows improved lexical robustness over previous purely neural approaches to textual entailment. On the dataset introduced in Glockner et al. (2018) our model achieves 83.75%, performing similarly to KIM, which incorporates external knowledge via WordNet.
## Training
#### Training Data
The model developers [write](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf):
> We use the BooksCorpus dataset ([Zhu et al., 2015](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Zhu_Aligning_Books_and_ICCV_2015_paper.pdf)) for training the language model. It contains over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance. Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information.
#### Training Procedure
The model developers [write](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf):
> Our model largely follows the original transformer work [62]. We trained a 12-layer decoder-only transformer with masked self-attention heads (768 dimensional states and 12 attention heads). For the position-wise feed-forward networks, we used 3072 dimensional inner states. We used the Adam optimization scheme [27] with a max learning rate of 2.5e-4. The learning rate was increased linearly from zero over the first 2000 updates and annealed to 0 using a cosine schedule. We train for 100 epochs on minibatches of 64 randomly sampled, contiguous sequences of 512 tokens. Since layernorm [2] is used extensively throughout the model, a simple weight initialization of N (0, 0.02) was sufficient. We used a bytepair encoding (BPE) vocabulary with 40,000 merges [53] and residual, embedding, and attention dropouts with a rate of 0.1 for regularization. We also employed a modified version of L2 regularization proposed in [37], with w = 0.01 on all non bias or gain weights. For the activation function, we used the Gaussian Error Linear Unit (GELU) [18]. We used learned position embeddings instead of the sinusoidal version proposed in the original work. We use the ftfy library2 to clean the raw text in BooksCorpus, standardize some punctuation and whitespace, and use the spaCy tokenizer.
See the paper for further details and links to citations.
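As a rough sketch (not the authors' original implementation), the learning-rate schedule described above, linear warmup over the first 2,000 updates followed by a cosine decay to zero at the quoted maximum rate of 2.5e-4, can be set up with standard utilities; the total number of training updates below is a placeholder.

```python
import torch
from transformers import OpenAIGPTLMHeadModel, get_cosine_schedule_with_warmup

model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt")

# Adam with the maximum learning rate quoted above
optimizer = torch.optim.Adam(model.parameters(), lr=2.5e-4)

# Linear warmup from 0 over the first 2,000 updates, then cosine annealing to 0
total_updates = 100_000  # placeholder; depends on corpus size, epochs and batch size
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=2000, num_training_steps=total_updates
)
```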
## Evaluation
The following evaluation information is extracted from the [associated blog post](https://openai.com/blog/language-unsupervised/). See the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) for further details.
#### Testing Data, Factors and Metrics
The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics:
- **Task:** Textual Entailment
- **Datasets:** [SNLI](https://huggingface.co/datasets/snli), [MNLI Matched](https://huggingface.co/datasets/glue), [MNLI Mismatched](https://huggingface.co/datasets/glue), [SciTail](https://huggingface.co/datasets/scitail), [QNLI](https://huggingface.co/datasets/glue), [RTE](https://huggingface.co/datasets/glue)
- **Metrics:** Accuracy
- **Task:** Semantic Similarity
- **Datasets:** [STS-B](https://huggingface.co/datasets/glue), [QQP](https://huggingface.co/datasets/glue), [MRPC](https://huggingface.co/datasets/glue)
- **Metrics:** Accuracy
- **Task:** Reading Comprehension
- **Datasets:** [RACE](https://huggingface.co/datasets/race)
- **Metrics:** Accuracy
- **Task:** Commonsense Reasoning
- **Datasets:** [ROCStories](https://huggingface.co/datasets/story_cloze), [COPA](https://huggingface.co/datasets/xcopa)
- **Metrics:** Accuracy
- **Task:** Sentiment Analysis
- **Datasets:** [SST-2](https://huggingface.co/datasets/glue)
- **Metrics:** Accuracy
- **Task:** Linguistic Acceptability
- **Datasets:** [CoLA](https://huggingface.co/datasets/glue)
- **Metrics:** Accuracy
- **Task:** Multi Task Benchmark
- **Datasets:** [GLUE](https://huggingface.co/datasets/glue)
- **Metrics:** Accuracy
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Task | TE | TE | TE |TE | TE | TE | SS | SS | SS | RC | CR | CR | SA | LA | MTB |
|:--------:|:--:|:----------:|:-------------:|:-----:|:----:|:---:|:---:|:---:|:--:|:----:|:--------:|:----:|:----:|:----:|:----:|
| Dataset |SNLI|MNLI Matched|MNLI Mismatched|SciTail| QNLI | RTE |STS-B| QQP |MRPC|RACE |ROCStories|COPA | SST-2| CoLA | GLUE |
| |89.9| 82.1 | 81.4 |88.3 | 88.1 | 56.0|82.0 | 70.3|82.3|59.0 | 86.5 | 78.6 | 91.3 | 45.4 | 72.8 |
## Environmental Impact
The model developers [report that](https://openai.com/blog/language-unsupervised/):
> The total compute used to train this model was 0.96 petaflop days (pfs-days).
> 8 P600 GPU's * 30 days * 12 TFLOPS/GPU * 0.33 utilization = .96 pfs-days
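For reference, the quoted estimate can be checked with a line of arithmetic (a pfs-day is one petaflop/s sustained for one day):

```python
# Reproducing the quoted estimate: 8 GPUs x 12 TFLOPS x 0.33 utilization x 30 days
gpus, days, tflops_per_gpu, utilization = 8, 30, 12, 0.33
pfs_days = gpus * tflops_per_gpu / 1000 * utilization * days
print(round(pfs_days, 2))  # 0.95, close to the 0.96 pfs-days quoted above
```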
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 8 P600 GPUs
- **Hours used:** 720 hours (30 days)
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2018improving,
title={Improving language understanding by generative pre-training},
author={Radford, Alec and Narasimhan, Karthik and Salimans, Tim and Sutskever, Ilya and others},
year={2018},
publisher={OpenAI}
}
```
APA:
*Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.*
## Model Card Authors
This model card was written by the Hugging Face team. | {"language": "en", "license": "mit"} | openai-community/openai-gpt | null | [
"transformers",
"pytorch",
"tf",
"rust",
"safetensors",
"openai-gpt",
"text-generation",
"en",
"arxiv:1705.11168",
"arxiv:1803.02324",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1705.11168",
"1803.02324",
"1910.09700"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #rust #safetensors #openai-gpt #text-generation #en #arxiv-1705.11168 #arxiv-1803.02324 #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| OpenAI GPT 1
============
Table of Contents
-----------------
* Model Details
* How To Get Started With the Model
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Environmental Impact
* Technical Specifications
* Citation Information
* Model Card Authors
Model Details
-------------
Model Description: 'openai-gpt' (a.k.a. "GPT-1") is the first transformer-based language model created and released by OpenAI. The model is a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long range dependencies.
* Developed by: Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever. See associated research paper and GitHub repo for model developers and contributors.
* Model Type: Transformer-based language model
* Language(s): English
* License: MIT License
* Related Models: GPT2, GPT2-Medium, GPT2-Large and GPT2-XL
* Resources for more information:
+ Research Paper
+ OpenAI Blog Post
+ GitHub Repo
+ Test the full generation capabilities here: URL
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
Here is how to use this model in PyTorch:
and in TensorFlow:
Uses
----
#### Direct Use
This model can be used for language modeling tasks.
#### Downstream Use
Potential downstream uses of this model include tasks that leverage language models. In the associated paper, the model developers discuss evaluations of the model for tasks including natural language inference (NLI), question answering, semantic similarity, and text classification.
#### Misuse and Out-of-scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
Risks, Limitations and Biases
-----------------------------
#### Biases
CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
Predictions generated by this model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
This bias may also affect fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
The model developers also wrote in a blog post about risks and limitations of the model, including:
>
> * Compute Requirements: Many previous approaches to NLP tasks train relatively small models on a single GPU from scratch. Our approach requires an expensive pre-training step - 1 month on 8 GPUs. Luckily, this only has to be done once and we’re releasing our model so others can avoid it. It is also a large model (in comparison to prior work) and consequently uses more compute and memory — we used a 37-layer (12 block) Transformer architecture, and we train on sequences of up to 512 tokens. Most experiments were conducted on 4 and 8 GPU systems. The model does fine-tune to new tasks very quickly which helps mitigate the additional resource requirements.
> * The limits and bias of learning about the world through text: Books and text readily available on the internet do not contain complete or even accurate information about the world. Recent work (Lucy and Gauthier, 2017) has shown that certain kinds of information are difficult to learn via just text and other work (Gururangan et al., 2018) has shown that models learn and exploit biases in data distributions.
> * Still brittle generalization: Although our approach improves performance across a broad range of tasks, current deep learning NLP models still exhibit surprising and counterintuitive behavior - especially when evaluated in a systematic, adversarial, or out-of-distribution way. Our approach is not immune to these issues, though we have observed some indications of progress. Our approach shows improved lexical robustness over previous purely neural approaches to textual entailment. On the dataset introduced in Glockner et al. (2018) our model achieves 83.75%, performing similarly to KIM, which incorporates external knowledge via WordNet.
>
>
>
Training
--------
#### Training Data
The model developers write:
>
> We use the BooksCorpus dataset (Zhu et al., 2015) for training the language model. It contains over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance. Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information.
>
>
>
#### Training Procedure
The model developers write:
>
> Our model largely follows the original transformer work [62]. We trained a 12-layer decoder-only transformer with masked self-attention heads (768 dimensional states and 12 attention heads). For the position-wise feed-forward networks, we used 3072 dimensional inner states. We used the Adam optimization scheme [27] with a max learning rate of 2.5e-4. The learning rate was increased linearly from zero over the first 2000 updates and annealed to 0 using a cosine schedule. We train for 100 epochs on minibatches of 64 randomly sampled, contiguous sequences of 512 tokens. Since layernorm [2] is used extensively throughout the model, a simple weight initialization of N (0, 0.02) was sufficient. We used a bytepair encoding (BPE) vocabulary with 40,000 merges [53] and residual, embedding, and attention dropouts with a rate of 0.1 for regularization. We also employed a modified version of L2 regularization proposed in [37], with w = 0.01 on all non bias or gain weights. For the activation function, we used the Gaussian Error Linear Unit (GELU) [18]. We used learned position embeddings instead of the sinusoidal version proposed in the original work. We use the ftfy library2 to clean the raw text in BooksCorpus, standardize some punctuation and whitespace, and use the spaCy tokenizer.
>
>
>
See the paper for further details and links to citations.
Evaluation
----------
The following evaluation information is extracted from the associated blog post. See the associated paper for further details.
#### Testing Data, Factors and Metrics
The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics:
* Task: Textual Entailment
+ Datasets: SNLI, MNLI Matched, MNLI Mismatched, SciTail, QNLI, RTE
+ Metrics: Accuracy
* Task: Semantic Similarity
+ Datasets: STS-B, QQP, MRPC
+ Metrics: Accuracy
* Task: Reading Comprehension
+ Datasets: RACE
+ Metrics: Accuracy
* Task: Commonsense Reasoning
+ Datasets: ROCStories, COPA
+ Metrics: Accuracy
* Task: Sentiment Analysis
+ Datasets: SST-2
+ Metrics: Accuracy
* Task: Linguistic Acceptability
+ Datasets: CoLA
+ Metrics: Accuracy
* Task: Multi Task Benchmark
+ Datasets: GLUE
+ Metrics: Accuracy
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
Environmental Impact
--------------------
The model developers report that:
>
> The total compute used to train this model was 0.96 petaflop days (pfs-days).
>
>
>
>
> 8 P600 GPU's \* 30 days \* 12 TFLOPS/GPU \* 0.33 utilization = .96 pfs-days
>
>
>
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: 8 P600 GPUs
* Hours used: 720 hours (30 days)
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.
APA:
*Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.*
Model Card Authors
------------------
This model card was written by the Hugging Face team.
| [
"#### Direct Use\n\n\nThis model can be used for language modeling tasks.",
"#### Downstream Use\n\n\nPotential downstream uses of this model include tasks that leverage language models. In the associated paper, the model developers discuss evaluations of the model for tasks including natural language inference (NLI), question answering, semantic similarity, and text classification.",
"#### Misuse and Out-of-scope Use\n\n\nThe model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------",
"#### Biases\n\n\nCONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\nPredictions generated by this model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias may also affect fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\n\nThe model developers also wrote in a blog post about risks and limitations of the model, including:\n\n\n\n> \n> * Compute Requirements: Many previous approaches to NLP tasks train relatively small models on a single GPU from scratch. Our approach requires an expensive pre-training step - 1 month on 8 GPUs. Luckily, this only has to be done once and we’re releasing our model so others can avoid it. It is also a large model (in comparison to prior work) and consequently uses more compute and memory — we used a 37-layer (12 block) Transformer architecture, and we train on sequences of up to 512 tokens. Most experiments were conducted on 4 and 8 GPU systems. The model does fine-tune to new tasks very quickly which helps mitigate the additional resource requirements.\n> * The limits and bias of learning about the world through text: Books and text readily available on the internet do not contain complete or even accurate information about the world. Recent work (Lucy and Gauthier, 2017) has shown that certain kinds of information are difficult to learn via just text and other work (Gururangan et al., 2018) has shown that models learn and exploit biases in data distributions.\n> * Still brittle generalization: Although our approach improves performance across a broad range of tasks, current deep learning NLP models still exhibit surprising and counterintuitive behavior - especially when evaluated in a systematic, adversarial, or out-of-distribution way. Our approach is not immune to these issues, though we have observed some indications of progress. Our approach shows improved lexical robustness over previous purely neural approaches to textual entailment. On the dataset introduced in Glockner et al. (2018) our model achieves 83.75%, performing similarly to KIM, which incorporates external knowledge via WordNet.\n> \n> \n> \n\n\nTraining\n--------",
"#### Training Data\n\n\nThe model developers write:\n\n\n\n> \n> We use the BooksCorpus dataset (Zhu et al., 2015) for training the language model. It contains over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance. Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information.\n> \n> \n>",
"#### Training Procedure\n\n\nThe model developers write:\n\n\n\n> \n> Our model largely follows the original transformer work [62]. We trained a 12-layer decoder-only transformer with masked self-attention heads (768 dimensional states and 12 attention heads). For the position-wise feed-forward networks, we used 3072 dimensional inner states. We used the Adam optimization scheme [27] with a max learning rate of 2.5e-4. The learning rate was increased linearly from zero over the first 2000 updates and annealed to 0 using a cosine schedule. We train for 100 epochs on minibatches of 64 randomly sampled, contiguous sequences of 512 tokens. Since layernorm [2] is used extensively throughout the model, a simple weight initialization of N (0, 0.02) was sufficient. We used a bytepair encoding (BPE) vocabulary with 40,000 merges [53] and residual, embedding, and attention dropouts with a rate of 0.1 for regularization. We also employed a modified version of L2 regularization proposed in [37], with w = 0.01 on all non bias or gain weights. For the activation function, we used the Gaussian Error Linear Unit (GELU) [18]. We used learned position embeddings instead of the sinusoidal version proposed in the original work. We use the ftfy library2 to clean the raw text in BooksCorpus, standardize some punctuation and whitespace, and use the spaCy tokenizer.\n> \n> \n> \n\n\nSee the paper for further details and links to citations.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated blog post. See the associated paper for further details.",
"#### Testing Data, Factors and Metrics\n\n\nThe model developers report that the model was evaluated on the following tasks and datasets using the listed metrics:\n\n\n* Task: Textual Entailment\n\n\n\t+ Datasets: SNLI, MNLI Matched, MNLI Mismatched, SciTail, QNLI, RTE\n\t+ Metrics: Accuracy\n* Task: Semantic Similarity\n\n\n\t+ Datasets: STS-B, QQP, MRPC\n\t+ Metrics: Accuracy\n* Task: Reading Comprehension\n\n\n\t+ Datasets: RACE\n\t+ Metrics: Accuracy\n* Task: Commonsense Reasoning\n\n\n\t+ Datasets: ROCStories, COPA\n\t+ Metrics: Accuracy\n* Task: Sentiment Analysis\n\n\n\t+ Datasets: SST-2\n\t+ Metrics: Accuracy\n* Task: Linguistic Acceptability\n\n\n\t+ Datasets: CoLA\n\t+ Metrics: Accuracy\n* Task: Multi Task Benchmark\n\n\n\t+ Datasets: GLUE\n\t+ Metrics: Accuracy",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nThe model developers report that:\n\n\n\n> \n> The total compute used to train this model was 0.96 petaflop days (pfs-days).\n> \n> \n> \n\n\n\n> \n> 8 P600 GPU's \\* 30 days \\* 12 TFLOPS/GPU \\* 0.33 utilization = .96 pfs-days\n> \n> \n> \n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: 8 P600 GPUs\n* Hours used: 720 hours (30 days)\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nAPA:\n*Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.*\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] | [
"TAGS\n#transformers #pytorch #tf #rust #safetensors #openai-gpt #text-generation #en #arxiv-1705.11168 #arxiv-1803.02324 #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"#### Direct Use\n\n\nThis model can be used for language modeling tasks.",
"#### Downstream Use\n\n\nPotential downstream uses of this model include tasks that leverage language models. In the associated paper, the model developers discuss evaluations of the model for tasks including natural language inference (NLI), question answering, semantic similarity, and text classification.",
"#### Misuse and Out-of-scope Use\n\n\nThe model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------",
"#### Biases\n\n\nCONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\nPredictions generated by this model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nThis bias may also affect fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\n\nThe model developers also wrote in a blog post about risks and limitations of the model, including:\n\n\n\n> \n> * Compute Requirements: Many previous approaches to NLP tasks train relatively small models on a single GPU from scratch. Our approach requires an expensive pre-training step - 1 month on 8 GPUs. Luckily, this only has to be done once and we’re releasing our model so others can avoid it. It is also a large model (in comparison to prior work) and consequently uses more compute and memory — we used a 37-layer (12 block) Transformer architecture, and we train on sequences of up to 512 tokens. Most experiments were conducted on 4 and 8 GPU systems. The model does fine-tune to new tasks very quickly which helps mitigate the additional resource requirements.\n> * The limits and bias of learning about the world through text: Books and text readily available on the internet do not contain complete or even accurate information about the world. Recent work (Lucy and Gauthier, 2017) has shown that certain kinds of information are difficult to learn via just text and other work (Gururangan et al., 2018) has shown that models learn and exploit biases in data distributions.\n> * Still brittle generalization: Although our approach improves performance across a broad range of tasks, current deep learning NLP models still exhibit surprising and counterintuitive behavior - especially when evaluated in a systematic, adversarial, or out-of-distribution way. Our approach is not immune to these issues, though we have observed some indications of progress. Our approach shows improved lexical robustness over previous purely neural approaches to textual entailment. On the dataset introduced in Glockner et al. (2018) our model achieves 83.75%, performing similarly to KIM, which incorporates external knowledge via WordNet.\n> \n> \n> \n\n\nTraining\n--------",
"#### Training Data\n\n\nThe model developers write:\n\n\n\n> \n> We use the BooksCorpus dataset (Zhu et al., 2015) for training the language model. It contains over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance. Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information.\n> \n> \n>",
"#### Training Procedure\n\n\nThe model developers write:\n\n\n\n> \n> Our model largely follows the original transformer work [62]. We trained a 12-layer decoder-only transformer with masked self-attention heads (768 dimensional states and 12 attention heads). For the position-wise feed-forward networks, we used 3072 dimensional inner states. We used the Adam optimization scheme [27] with a max learning rate of 2.5e-4. The learning rate was increased linearly from zero over the first 2000 updates and annealed to 0 using a cosine schedule. We train for 100 epochs on minibatches of 64 randomly sampled, contiguous sequences of 512 tokens. Since layernorm [2] is used extensively throughout the model, a simple weight initialization of N (0, 0.02) was sufficient. We used a bytepair encoding (BPE) vocabulary with 40,000 merges [53] and residual, embedding, and attention dropouts with a rate of 0.1 for regularization. We also employed a modified version of L2 regularization proposed in [37], with w = 0.01 on all non bias or gain weights. For the activation function, we used the Gaussian Error Linear Unit (GELU) [18]. We used learned position embeddings instead of the sinusoidal version proposed in the original work. We use the ftfy library2 to clean the raw text in BooksCorpus, standardize some punctuation and whitespace, and use the spaCy tokenizer.\n> \n> \n> \n\n\nSee the paper for further details and links to citations.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated blog post. See the associated paper for further details.",
"#### Testing Data, Factors and Metrics\n\n\nThe model developers report that the model was evaluated on the following tasks and datasets using the listed metrics:\n\n\n* Task: Textual Entailment\n\n\n\t+ Datasets: SNLI, MNLI Matched, MNLI Mismatched, SciTail, QNLI, RTE\n\t+ Metrics: Accuracy\n* Task: Semantic Similarity\n\n\n\t+ Datasets: STS-B, QQP, MRPC\n\t+ Metrics: Accuracy\n* Task: Reading Comprehension\n\n\n\t+ Datasets: RACE\n\t+ Metrics: Accuracy\n* Task: Commonsense Reasoning\n\n\n\t+ Datasets: ROCStories, COPA\n\t+ Metrics: Accuracy\n* Task: Sentiment Analysis\n\n\n\t+ Datasets: SST-2\n\t+ Metrics: Accuracy\n* Task: Linguistic Acceptability\n\n\n\t+ Datasets: CoLA\n\t+ Metrics: Accuracy\n* Task: Multi Task Benchmark\n\n\n\t+ Datasets: GLUE\n\t+ Metrics: Accuracy",
"#### Results\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):\n\n\n\nEnvironmental Impact\n--------------------\n\n\nThe model developers report that:\n\n\n\n> \n> The total compute used to train this model was 0.96 petaflop days (pfs-days).\n> \n> \n> \n\n\n\n> \n> 8 P600 GPU's \\* 30 days \\* 12 TFLOPS/GPU \\* 0.33 utilization = .96 pfs-days\n> \n> \n> \n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: 8 P600 GPUs\n* Hours used: 720 hours (30 days)\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nAPA:\n*Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.*\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the Hugging Face team."
] |
text-classification | transformers |
# RoBERTa Base OpenAI Detector
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-author)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:** RoBERTa base OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa base model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the [largest GPT-2 model](https://huggingface.co/gpt2-xl), the 1.5B parameter version.
- **Developed by:** OpenAI, see [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector) and [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for full author list
- **Model Type:** Fine-tuned transformer-based language model
- **Language(s):** English
- **License:** MIT
- **Related Models:** [RoBERTa base](https://huggingface.co/roberta-base), [GPT-XL (1.5B parameter version)](https://huggingface.co/gpt2-xl), [GPT-Large (the 774M parameter version)](https://huggingface.co/gpt2-large), [GPT-Medium (the 355M parameter version)](https://huggingface.co/gpt2-medium) and [GPT-2 (the 124M parameter version)](https://huggingface.co/gpt2)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) (see, in particular, the section beginning on page 12 about Automated ML-based detection).
- [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector)
- [OpenAI Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
- [Explore the detector model here](https://huggingface.co/openai-detector )
## Uses
#### Direct Use
The model is a classifier that can be used to detect text generated by GPT-2 models. However, it is strongly suggested not to use it as a ChatGPT detector for the purposes of making grave allegations of academic misconduct against undergraduates and others, as this model might give inaccurate results in the case of ChatGPT-generated input.
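A short, hedged sketch of this direct use with a Transformers pipeline is shown below (a fuller example appears at the end of this card); the two input sentences are invented and the returned scores will vary:

```python
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

# "Fake" means the text is predicted to be GPT-2-generated, "Real" that it is not
print(detector("I fed my cat some of it and she ate the whole bowl in one sitting."))
print(detector("The machine proceeded to generate a long, rambling paragraph of text."))
```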
#### Downstream Use
The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further discussion.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
In their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research.
In a related [blog post](https://openai.com/blog/gpt-2-1-5b-release/), the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write:
> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective.
The model developers also [report](https://openai.com/blog/gpt-2-1-5b-release/) finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.
#### Bias
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by RoBERTa base and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the [RoBERTa base](https://huggingface.co/roberta-base) and [GPT-2 XL](https://huggingface.co/gpt2-xl) model cards for more information). The developers of this model discuss these issues further in their [paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).
## Training
#### Training Data
The model is a sequence classifier based on RoBERTa base (see the [RoBERTa base model card](https://huggingface.co/roberta-base) for more details on the RoBERTa base training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available [here](https://github.com/openai/gpt-2-output-dataset)).
#### Training Procedure
The model developers write that:
> We based a sequence classifier on RoBERTaBASE (125 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.
They later state:
> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the training procedure.
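As a heavily simplified, hypothetical sketch of this kind of fine-tuning setup (not OpenAI's actual training code), the snippet below pairs a RoBERTa base sequence classifier with placeholder lists standing in for WebText samples and GPT-2 outputs; the real samples would need to be loaded from OpenAI's gpt-2-output-dataset release.

```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Placeholder samples: label 1 = human-written (WebText), label 0 = GPT-2-generated
webtext_texts = ["A human-written paragraph goes here."]
gpt2_texts = ["A machine-generated paragraph goes here."]
texts = webtext_texts + gpt2_texts
labels = torch.tensor([1] * len(webtext_texts) + [0] * len(gpt2_texts))

batch = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
outputs.loss.backward()  # an optimizer step would follow in a real training loop
```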
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).
#### Testing Data, Factors and Metrics
The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by:
> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.
#### Results
The model developers [find](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf):
> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy... The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling ([Holtzman et al., 2019](https://arxiv.org/abs/1904.09751)). Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling.
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), Figure 1 (on page 14) and Figure 2 (on page 16) for full results.
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the modeling architecture and training details.
## Citation Information
```bibtex
@article{solaiman2019release,
title={Release strategies and the social impacts of language models},
author={Solaiman, Irene and Brundage, Miles and Clark, Jack and Askell, Amanda and Herbert-Voss, Ariel and Wu, Jeff and Radford, Alec and Krueger, Gretchen and Kim, Jong Wook and Kreps, Sarah and others},
journal={arXiv preprint arXiv:1908.09203},
year={2019}
}
```
APA:
- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.
https://huggingface.co/papers/1908.09203
## Model Card Authors
This model card was written by the team at Hugging Face.
## How to Get Started with the Model
This model can be instantiated and run with a Transformers pipeline:
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="roberta-base-openai-detector")
print(pipe("Hello world! Is this content AI-generated?")) # [{'label': 'Real', 'score': 0.8036582469940186}]
``` | {"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]} | openai-community/roberta-base-openai-detector | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"roberta",
"text-classification",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1904.09751",
"arxiv:1910.09700",
"arxiv:1908.09203",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1904.09751",
"1910.09700",
"1908.09203"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #roberta #text-classification #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1904.09751 #arxiv-1910.09700 #arxiv-1908.09203 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# RoBERTa Base OpenAI Detector
## Table of Contents
- Model Details
- Uses
- Risks, Limitations and Biases
- Training
- Evaluation
- Environmental Impact
- Technical Specifications
- Citation Information
- Model Card Authors
- How To Get Started With the Model
## Model Details
Model Description: RoBERTa base OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa base model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the largest GPT-2 model, the 1.5B parameter version.
- Developed by: OpenAI, see GitHub Repo and associated paper for full author list
- Model Type: Fine-tuned transformer-based language model
- Language(s): English
- License: MIT
- Related Models: RoBERTa base, GPT-2 XL (the 1.5B parameter version), GPT-2 Large (the 774M parameter version), GPT-2 Medium (the 355M parameter version) and GPT-2 (the 124M parameter version)
- Resources for more information:
- Research Paper (see, in particular, the section beginning on page 12 about Automated ML-based detection).
- GitHub Repo
- OpenAI Blog Post
- Explore the detector model here
## Uses
#### Direct Use
The model is a classifier that can be used to detect text generated by GPT-2 models. However, it is strongly suggested not to use it as a ChatGPT detector for the purposes of making grave allegations of academic misconduct against undergraduates and others, as this model might give inaccurate results in the case of ChatGPT-generated input.
#### Downstream Use
The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the associated paper for further discussion.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their associated paper, suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model.
## Risks, Limitations and Biases
CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
In their associated paper, the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research.
In a related blog post, the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write:
> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective.
The model developers also report finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.
#### Bias
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by RoBERTa base and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the RoBERTa base and GPT-2 XL model cards for more information). The developers of this model discuss these issues further in their paper.
## Training
#### Training Data
The model is a sequence classifier based on RoBERTa base (see the RoBERTa base model card for more details on the RoBERTa base training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available here).
#### Training Procedure
The model developers write that:
> We based a sequence classifier on RoBERTaBASE (125 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.
They later state:
> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.
See the associated paper for further details on the training procedure.
## Evaluation
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by:
> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.
#### Results
The model developers find:
> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling (Holtzman et al., 2019). Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling.
See the associated paper, Figure 1 (on page 14) and Figure 2 (on page 16) for full results.
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: Unknown
- Hours used: Unknown
- Cloud Provider: Unknown
- Compute Region: Unknown
- Carbon Emitted: Unknown
## Technical Specifications
See the associated paper for further details on the modeling architecture and training details.
APA:
- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.
URL
## Model Card Authors
This model card was written by the team at Hugging Face.
## How to Get Started with the Model
This model can be instantiated and run with a Transformers pipeline:
| [
"# RoBERTa Base OpenAI Detector",
"## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- Technical Specifications\n- Citation Information\n- Model Card Authors\n- How To Get Started With the Model",
"## Model Details\n\nModel Description: RoBERTa base OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa base model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the largest GPT-2 model, the 1.5B parameter version. \n\n- Developed by: OpenAI, see GitHub Repo and associated paper for full author list\n- Model Type: Fine-tuned transformer-based language model\n- Language(s): English\n- License: MIT\n- Related Models: RoBERTa base, GPT-XL (1.5B parameter version), GPT-Large (the 774M parameter version), GPT-Medium (the 355M parameter version) and GPT-2 (the 124M parameter version)\n- Resources for more information:\n - Research Paper (see, in particular, the section beginning on page 12 about Automated ML-based detection).\n - GitHub Repo\n - OpenAI Blog Post\n - Explore the detector model here",
"## Uses",
"#### Direct Use\n\nThe model is a classifier that can be used to detect text generated by GPT-2 models. However, it is strongly suggested not to use it as a ChatGPT detector for the purposes of making grave allegations of academic misconduct against undergraduates and others, as this model might give inaccurate results in the case of ChatGPT-generated input.",
"#### Downstream Use\n\nThe model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the associated paper for further discussion.",
"#### Misuse and Out-of-scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their associated paper, suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model.",
"## Risks, Limitations and Biases\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\nIn their associated paper, the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research. \n\nIn a related blog post, the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write: \n\n> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective. \n\nThe model developers also report finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.",
"#### Bias\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by RoBERTa base and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the RoBERTa base and GPT-2 XL model cards for more information). The developers of this model discuss these issues further in their paper.",
"## Training",
"#### Training Data\n\nThe model is a sequence classifier based on RoBERTa base (see the RoBERTa base model card for more details on the RoBERTa base training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available here).",
"#### Training Procedure\n\nThe model developers write that: \n\n> We based a sequence classifier on RoBERTaBASE (125 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.\n\nThey later state: \n\n> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.\n\nSee the associated paper for further details on the training procedure.",
"## Evaluation\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\nThe model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by: \n\n> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.",
"#### Results\n\nThe model developers find: \n\n> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling (Holtzman et al., 2019. Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling. \t\n\nSee the associated paper, Figure 1 (on page 14) and Figure 2 (on page 16) for full results.",
"## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Unknown\n- Hours used: Unknown\n- Cloud Provider: Unknown\n- Compute Region: Unknown\n- Carbon Emitted: Unknown",
"## Technical Specifications\n\nThe model developers write that: \n\nSee the associated paper for further details on the modeling architecture and training details.\n\n\n\n\n\nAPA: \n- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.\n\nURL",
"## Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"## How to Get Started with the Model \n\nThis model can be instantiated and run with a Transformers pipeline:"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #roberta #text-classification #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1904.09751 #arxiv-1910.09700 #arxiv-1908.09203 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# RoBERTa Base OpenAI Detector",
"## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- Technical Specifications\n- Citation Information\n- Model Card Authors\n- How To Get Started With the Model",
"## Model Details\n\nModel Description: RoBERTa base OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa base model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the largest GPT-2 model, the 1.5B parameter version. \n\n- Developed by: OpenAI, see GitHub Repo and associated paper for full author list\n- Model Type: Fine-tuned transformer-based language model\n- Language(s): English\n- License: MIT\n- Related Models: RoBERTa base, GPT-XL (1.5B parameter version), GPT-Large (the 774M parameter version), GPT-Medium (the 355M parameter version) and GPT-2 (the 124M parameter version)\n- Resources for more information:\n - Research Paper (see, in particular, the section beginning on page 12 about Automated ML-based detection).\n - GitHub Repo\n - OpenAI Blog Post\n - Explore the detector model here",
"## Uses",
"#### Direct Use\n\nThe model is a classifier that can be used to detect text generated by GPT-2 models. However, it is strongly suggested not to use it as a ChatGPT detector for the purposes of making grave allegations of academic misconduct against undergraduates and others, as this model might give inaccurate results in the case of ChatGPT-generated input.",
"#### Downstream Use\n\nThe model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the associated paper for further discussion.",
"#### Misuse and Out-of-scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their associated paper, suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model.",
"## Risks, Limitations and Biases\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\nIn their associated paper, the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research. \n\nIn a related blog post, the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write: \n\n> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective. \n\nThe model developers also report finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.",
"#### Bias\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by RoBERTa base and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the RoBERTa base and GPT-2 XL model cards for more information). The developers of this model discuss these issues further in their paper.",
"## Training",
"#### Training Data\n\nThe model is a sequence classifier based on RoBERTa base (see the RoBERTa base model card for more details on the RoBERTa base training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available here).",
"#### Training Procedure\n\nThe model developers write that: \n\n> We based a sequence classifier on RoBERTaBASE (125 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.\n\nThey later state: \n\n> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.\n\nSee the associated paper for further details on the training procedure.",
"## Evaluation\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\nThe model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by: \n\n> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.",
"#### Results\n\nThe model developers find: \n\n> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling (Holtzman et al., 2019. Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling. \t\n\nSee the associated paper, Figure 1 (on page 14) and Figure 2 (on page 16) for full results.",
"## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Unknown\n- Hours used: Unknown\n- Cloud Provider: Unknown\n- Compute Region: Unknown\n- Carbon Emitted: Unknown",
"## Technical Specifications\n\nThe model developers write that: \n\nSee the associated paper for further details on the modeling architecture and training details.\n\n\n\n\n\nAPA: \n- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.\n\nURL",
"## Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"## How to Get Started with the Model \n\nThis model can be instantiated and run with a Transformers pipeline:"
] |
fill-mask | transformers |
# RoBERTa base model
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it
makes a difference between english and English.
Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=roberta) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='roberta-base')
>>> unmasker("Hello I'm a <mask> model.")
[{'sequence': "<s>Hello I'm a male model.</s>",
'score': 0.3306540250778198,
'token': 2943,
'token_str': 'Ġmale'},
{'sequence': "<s>Hello I'm a female model.</s>",
'score': 0.04655390977859497,
'token': 2182,
'token_str': 'Ġfemale'},
{'sequence': "<s>Hello I'm a professional model.</s>",
'score': 0.04232972860336304,
'token': 2038,
'token_str': 'Ġprofessional'},
{'sequence': "<s>Hello I'm a fashion model.</s>",
'score': 0.037216778844594955,
'token': 2734,
'token_str': 'Ġfashion'},
{'sequence': "<s>Hello I'm a Russian model.</s>",
'score': 0.03253649175167084,
'token': 1083,
'token_str': 'ĠRussian'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaModel.from_pretrained('roberta-base')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = TFRobertaModel.from_pretrained('roberta-base')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='roberta-base')
>>> unmasker("The man worked as a <mask>.")
[{'sequence': '<s>The man worked as a mechanic.</s>',
'score': 0.08702439814805984,
'token': 25682,
'token_str': 'Ġmechanic'},
{'sequence': '<s>The man worked as a waiter.</s>',
'score': 0.0819653645157814,
'token': 38233,
'token_str': 'Ġwaiter'},
{'sequence': '<s>The man worked as a butcher.</s>',
'score': 0.073323555290699,
'token': 32364,
'token_str': 'Ġbutcher'},
{'sequence': '<s>The man worked as a miner.</s>',
'score': 0.046322137117385864,
'token': 18678,
'token_str': 'Ġminer'},
{'sequence': '<s>The man worked as a guard.</s>',
'score': 0.040150221437215805,
'token': 2510,
'token_str': 'Ġguard'}]
>>> unmasker("The Black woman worked as a <mask>.")
[{'sequence': '<s>The Black woman worked as a waitress.</s>',
'score': 0.22177888453006744,
'token': 35698,
'token_str': 'Ġwaitress'},
{'sequence': '<s>The Black woman worked as a prostitute.</s>',
'score': 0.19288744032382965,
'token': 36289,
'token_str': 'Ġprostitute'},
{'sequence': '<s>The Black woman worked as a maid.</s>',
'score': 0.06498628109693527,
'token': 29754,
'token_str': 'Ġmaid'},
{'sequence': '<s>The Black woman worked as a secretary.</s>',
'score': 0.05375480651855469,
'token': 2971,
'token_str': 'Ġsecretary'},
{'sequence': '<s>The Black woman worked as a nurse.</s>',
'score': 0.05245552211999893,
'token': 9008,
'token_str': 'Ġnurse'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The RoBERTa model was pretrained on the reunion of five datasets:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books;
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers) ;
- [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 million English news
articles crawled between September 2016 and February 2019.
- [OpenWebText](https://github.com/jcpeterson/openwebtext), an opensource recreation of the WebText dataset used to
train GPT-2,
- [Stories](https://arxiv.org/abs/1806.02847), a dataset containing a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas.
Together these datasets weigh 160GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with `<s>` and the end of one by `</s>`
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
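As an illustration (not the original fairseq preprocessing code), this dynamic masking scheme can be reproduced with the `DataCollatorForLanguageModeling` utility from Transformers, which applies the same 15% / 80-10-10 procedure and re-draws the masks every time a batch is built. The sample sentence is an arbitrary placeholder.

```python
from transformers import RobertaTokenizer, DataCollatorForLanguageModeling

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')

# mlm_probability=0.15 masks 15% of tokens; of those, 80% become <mask>,
# 10% become a random token and 10% are left unchanged, re-sampled for every batch.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

examples = [tokenizer("Replace me by any text you'd like.")]
batch = collator(examples)

print(tokenizer.decode(batch['input_ids'][0]))  # the sentence with some tokens replaced by <mask>
print(batch['labels'][0])                       # -100 everywhere except at the masked positions
```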
### Pretraining
The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
optimizer used is Adam with a learning rate of 6e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and
\\(\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 24,000 steps and linear decay of the learning
rate after.
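For readers who want to see what that optimization setup looks like in code, here is a minimal PyTorch sketch using the hyperparameters listed above. It is not the original fairseq training script; `RobertaForMaskedLM(RobertaConfig())` simply stands in for a freshly initialised model, and AdamW is used as the decoupled-weight-decay variant of Adam.

```python
import torch
from transformers import RobertaConfig, RobertaForMaskedLM, get_linear_schedule_with_warmup

# Randomly initialised stand-in for the model being pretrained (roberta-base-sized by default)
model = RobertaForMaskedLM(RobertaConfig())

# Adam-style optimizer: lr 6e-4, betas (0.9, 0.98), eps 1e-6, weight decay 0.01
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-4, betas=(0.9, 0.98),
                              eps=1e-6, weight_decay=0.01)

# 24,000 warmup steps followed by linear decay over the 500K total steps
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=24_000,
                                            num_training_steps=500_000)

# Inside the training loop one would then call, per step:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```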
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| | 87.6 | 91.9 | 92.8 | 94.8 | 63.6 | 91.2 | 90.2 | 78.7 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=roberta-base">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]} | FacebookAI/roberta-base | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"roberta",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1907.11692",
"arxiv:1806.02847",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1907.11692",
"1806.02847"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #rust #safetensors #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1907.11692 #arxiv-1806.02847 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| RoBERTa base model
==================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is case-sensitive: it
makes a difference between english and English.
Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.
Intended uses & limitations
---------------------------
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the model hub to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The RoBERTa model was pretrained on the reunion of five datasets:
* BookCorpus, a dataset consisting of 11,038 unpublished books;
* English Wikipedia (excluding lists, tables and headers) ;
* CC-News, a dataset containing 63 million English news
articles crawled between September 2016 and February 2019.
* OpenWebText, an opensource recreation of the WebText dataset used to
train GPT-2,
* Stories, a dataset containing a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas.
Together these datasets weigh 160GB of text.
Training procedure
------------------
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with '<s>' and the end of one by '</s>'
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '<mask>'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
### Pretraining
The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
optimizer used is Adam with a learning rate of 6e-4, \(\beta\_{1} = 0.9\), \(\beta\_{2} = 0.98\) and
\(\epsilon = 1e-6\), a weight decay of 0.01, learning rate warmup for 24,000 steps and linear decay of the learning
rate after.
Evaluation results
------------------
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from\nneutral. Therefore, the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe RoBERTa model was pretrained on the reunion of five datasets:\n\n\n* BookCorpus, a dataset consisting of 11,038 unpublished books;\n* English Wikipedia (excluding lists, tables and headers) ;\n* CC-News, a dataset containing 63 millions English news\narticles crawled between September 2016 and February 2019.\n* OpenWebText, an opensource recreation of the WebText dataset used to\ntrain GPT-2,\n* Stories a dataset containing a subset of CommonCrawl data filtered to match the\nstory-like style of Winograd schemas.\n\n\nTogether these datasets weigh 160GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of\nthe model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked\nwith '~~' and the end of one by '~~'\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by ''.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nContrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).",
"### Pretraining\n\n\nThe model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The\noptimizer used is Adam with a learning rate of 6e-4, \\(\\beta\\_{1} = 0.9\\), \\(\\beta\\_{2} = 0.98\\) and\n\\(\\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 24,000 steps and linear decay of the learning\nrate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1907.11692 #arxiv-1806.02847 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from\nneutral. Therefore, the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe RoBERTa model was pretrained on the reunion of five datasets:\n\n\n* BookCorpus, a dataset consisting of 11,038 unpublished books;\n* English Wikipedia (excluding lists, tables and headers) ;\n* CC-News, a dataset containing 63 millions English news\narticles crawled between September 2016 and February 2019.\n* OpenWebText, an opensource recreation of the WebText dataset used to\ntrain GPT-2,\n* Stories a dataset containing a subset of CommonCrawl data filtered to match the\nstory-like style of Winograd schemas.\n\n\nTogether these datasets weigh 160GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of\nthe model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked\nwith '~~' and the end of one by '~~'\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by ''.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nContrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).",
"### Pretraining\n\n\nThe model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The\noptimizer used is Adam with a learning rate of 6e-4, \\(\\beta\\_{1} = 0.9\\), \\(\\beta\\_{2} = 0.98\\) and\n\\(\\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 24,000 steps and linear decay of the learning\nrate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
text-classification | transformers |
# roberta-large-mnli
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation-results)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-author)
## Model Details
**Model Description:** roberta-large-mnli is the [RoBERTa large model](https://huggingface.co/roberta-large) fine-tuned on the [Multi-Genre Natural Language Inference (MNLI)](https://huggingface.co/datasets/multi_nli) corpus. The model is a pretrained model on English language text using a masked language modeling (MLM) objective.
- **Developed by:** See [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta) for model developers
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** MIT
- **Parent Model:** This model is a fine-tuned version of the RoBERTa large model. Users should see the [RoBERTa large model card](https://huggingface.co/roberta-large) for relevant information.
- **Resources for more information:**
- [Research Paper](https://arxiv.org/abs/1907.11692)
- [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta)
## How to Get Started with the Model
Use the code below to get started with the model. The model can be loaded with the zero-shot-classification pipeline like so:
```python
from transformers import pipeline
classifier = pipeline('zero-shot-classification', model='roberta-large-mnli')
```
You can then use this pipeline to classify sequences into any of the class names you specify. For example:
```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
```
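Under the hood, the zero-shot pipeline turns each candidate label into an NLI hypothesis and scores it against the input sequence as the premise. The sketch below shows that premise/hypothesis scoring with the raw sequence-classification model; the premise and hypothesis strings are arbitrary examples chosen for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('roberta-large-mnli')
model = AutoModelForSequenceClassification.from_pretrained('roberta-large-mnli')

premise = "one day I will see the world"
hypothesis = "This example is about travel."

# Encode the (premise, hypothesis) pair the same way as an MNLI example
inputs = tokenizer(premise, hypothesis, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1)[0]
for idx, label in model.config.id2label.items():
    print(f"{label}: {probs[idx].item():.3f}")  # contradiction / neutral / entailment scores
```

A high entailment score for the "travel" hypothesis is what drives the label ranking returned by the pipeline above.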
## Uses
#### Direct Use
This fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification (see the [GitHub repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta) for examples) and zero-shot sequence classification.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The [RoBERTa large model card](https://huggingface.co/roberta-large) notes that: "The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral."
Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
sequence_to_classify = "The CEO had a strong handshake."
candidate_labels = ['male', 'female']
hypothesis_template = "This text speaks about a {} profession."
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
```
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
This model was fine-tuned on the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) corpus. Also see the [MNLI data card](https://huggingface.co/datasets/multi_nli) for more information.
As described in the [RoBERTa large model card](https://huggingface.co/roberta-large):
> The RoBERTa model was pretrained on the reunion of five datasets:
>
> - [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books;
> - [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers) ;
> - [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 millions English news articles crawled between September 2016 and February 2019.
> - [OpenWebText](https://github.com/jcpeterson/openwebtext), an opensource recreation of the WebText dataset used to train GPT-2,
> - [Stories](https://arxiv.org/abs/1806.02847), a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas.
>
> Together these datasets weigh 160GB of text.
Also see the [bookcorpus data card](https://huggingface.co/datasets/bookcorpus) and the [wikipedia data card](https://huggingface.co/datasets/wikipedia) for additional information.
#### Training Procedure
##### Preprocessing
As described in the [RoBERTa large model card](https://huggingface.co/roberta-large):
> The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of
> the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
> with `<s>` and the end of one by `</s>`
>
> The details of the masking procedure for each sentence are the following:
> - 15% of the tokens are masked.
> - In 80% of the cases, the masked tokens are replaced by `<mask>`.
> - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
> - In the 10% remaining cases, the masked tokens are left as is.
>
> Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
##### Pretraining
Also as described in the [RoBERTa large model card](https://huggingface.co/roberta-large):
> The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
> optimizer used is Adam with a learning rate of 4e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and
> \\(\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning
> rate after.
## Evaluation
The following evaluation information is extracted from the associated [GitHub repo for RoBERTa](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta).
#### Testing Data, Factors and Metrics
The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics:
- **Dataset:** Part of [GLUE (Wang et al., 2019)](https://arxiv.org/pdf/1804.07461.pdf), the General Language Understanding Evaluation benchmark, a collection of 9 datasets for evaluating natural language understanding systems. Specifically, the model was evaluated on the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) corpus. See the [GLUE data card](https://huggingface.co/datasets/glue) or [Wang et al. (2019)](https://arxiv.org/pdf/1804.07461.pdf) for further information.
- **Tasks:** NLI. [Wang et al. (2019)](https://arxiv.org/pdf/1804.07461.pdf) describe the inference task for MNLI as:
> The Multi-Genre Natural Language Inference Corpus [(Williams et al., 2018)](https://arxiv.org/abs/1704.05426) is a crowd-sourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. We use the standard test set, for which we obtained private labels from the authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. We also use and recommend the SNLI corpus [(Bowman et al., 2015)](https://arxiv.org/abs/1508.05326) as 550k examples of auxiliary training data.
- **Metrics:** Accuracy
- **Dataset:** [XNLI (Conneau et al., 2018)](https://arxiv.org/pdf/1809.05053.pdf), the extension of the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) corpus to 15 languages: English, French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi, Swahili and Urdu. See the [XNLI data card](https://huggingface.co/datasets/xnli) or [Conneau et al. (2018)](https://arxiv.org/pdf/1809.05053.pdf) for further information.
- **Tasks:** Translate-test (e.g., the model is used to translate input sentences in other languages to the training language)
- **Metrics:** Accuracy
#### Results
GLUE test results (dev set, single model, single-task fine-tuning): 90.2 on MNLI
XNLI test results:
| Task | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur |
|:----:|:--:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| |91.3|82.91|84.27|81.24|81.74|83.13|78.28|76.79|76.64|74.17|74.05| 77.5| 70.9|66.65|66.81|
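As a rough sketch of how an MNLI accuracy number like this can be checked against the publicly available validation data (this is only an illustration, not the developers' evaluation code; the `validation_matched` split name and the label-name remapping below are assumptions of the example):

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('roberta-large-mnli')
model = AutoModelForSequenceClassification.from_pretrained('roberta-large-mnli').eval()

# Small slice of the in-domain MNLI dev split, just to keep the sketch fast
dataset = load_dataset('multi_nli', split='validation_matched[:200]')

# The model's label order differs from the dataset's, so map them by name
dataset_names = dataset.features['label'].names  # e.g. ['entailment', 'neutral', 'contradiction']
model_to_dataset = {i: dataset_names.index(name.lower())
                    for i, name in model.config.id2label.items()}

correct = 0
for example in dataset:
    inputs = tokenizer(example['premise'], example['hypothesis'],
                       truncation=True, return_tensors='pt')
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=-1).item()
    correct += int(model_to_dataset[pred] == example['label'])

print(f"Accuracy on the slice: {correct / len(dataset):.2%}")
```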
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/pdf/1907.11692.pdf).
- **Hardware Type:** 1024 V100 GPUs
- **Hours used:** 24 hours (one day)
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://arxiv.org/pdf/1907.11692.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{liu2019roberta,
title = {RoBERTa: A Robustly Optimized BERT Pretraining Approach},
author = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and
Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and
Luke Zettlemoyer and Veselin Stoyanov},
journal={arXiv preprint arXiv:1907.11692},
year = {2019},
}
``` | {"language": ["en"], "license": "mit", "tags": ["autogenerated-modelcard"], "datasets": ["multi_nli", "wikipedia", "bookcorpus"]} | FacebookAI/roberta-large-mnli | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"roberta",
"text-classification",
"autogenerated-modelcard",
"en",
"dataset:multi_nli",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:1806.02847",
"arxiv:1804.07461",
"arxiv:1704.05426",
"arxiv:1508.05326",
"arxiv:1809.05053",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1907.11692",
"1806.02847",
"1804.07461",
"1704.05426",
"1508.05326",
"1809.05053",
"1910.09700"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #roberta #text-classification #autogenerated-modelcard #en #dataset-multi_nli #dataset-wikipedia #dataset-bookcorpus #arxiv-1907.11692 #arxiv-1806.02847 #arxiv-1804.07461 #arxiv-1704.05426 #arxiv-1508.05326 #arxiv-1809.05053 #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| roberta-large-mnli
==================
Table of Contents
-----------------
* Model Details
* How To Get Started With the Model
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Environmental Impact
* Technical Specifications
* Citation Information
* Model Card Authors
Model Details
-------------
Model Description: roberta-large-mnli is the RoBERTa large model fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus. The model is a pretrained model on English language text using a masked language modeling (MLM) objective.
* Developed by: See GitHub Repo for model developers
* Model Type: Transformer-based language model
* Language(s): English
* License: MIT
* Parent Model: This model is a fine-tuned version of the RoBERTa large model. Users should see the RoBERTa large model card for relevant information.
* Resources for more information:
+ Research Paper
+ GitHub Repo
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model. The model can be loaded with the zero-shot-classification pipeline like so:
You can then use this pipeline to classify sequences into any of the class names you specify. For example:
Uses
----
#### Direct Use
This fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification (see the GitHub repo for examples) and zero-shot sequence classification.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The RoBERTa large model card notes that: "The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral."
Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training
--------
#### Training Data
This model was fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus. Also see the MNLI data card for more information.
As described in the RoBERTa large model card:
>
> The RoBERTa model was pretrained on the reunion of five datasets:
>
>
> * BookCorpus, a dataset consisting of 11,038 unpublished books;
> * English Wikipedia (excluding lists, tables and headers) ;
> * CC-News, a dataset containing 63 million English news articles crawled between September 2016 and February 2019.
> * OpenWebText, an opensource recreation of the WebText dataset used to train GPT-2,
> * Stories, a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas.
>
>
> Together, these datasets weigh 160GB of text.
>
>
>
Also see the bookcorpus data card and the wikipedia data card for additional information.
#### Training Procedure
##### Preprocessing
As described in the RoBERTa large model card:
>
> The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of
> the model take pieces of 512 contiguous tokens that may span multiple documents. The beginning of a new document is marked
> with `<s>` and the end of one by `</s>`.
>
>
> The details of the masking procedure for each sentence are the following:
>
>
> * 15% of the tokens are masked.
> * In 80% of the cases, the masked tokens are replaced by `<mask>`.
> * In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
> * In the 10% remaining cases, the masked tokens are left as is.
>
>
> Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
>
>
>
##### Pretraining
Also as described in the RoBERTa large model card:
>
> The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
> optimizer used is Adam with a learning rate of 4e-4, \(\beta\_{1} = 0.9\), \(\beta\_{2} = 0.98\) and
> \(\epsilon = 1e-6\), a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning
> rate after.
>
>
>
Evaluation
----------
The following evaluation information is extracted from the associated GitHub repo for RoBERTa.
#### Testing Data, Factors and Metrics
The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics:
* Dataset: Part of GLUE (Wang et al., 2019), the General Language Understanding Evaluation benchmark, a collection of 9 datasets for evaluating natural language understanding systems. Specifically, the model was evaluated on the Multi-Genre Natural Language Inference (MNLI) corpus. See the GLUE data card or Wang et al. (2019) for further information.
+ Tasks: NLI. Wang et al. (2019) describe the inference task for MNLI as:
>
> The Multi-Genre Natural Language Inference Corpus (Williams et al., 2018) is a crowd-sourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. We use the standard test set, for which we obtained private labels from the authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. We also use and recommend the SNLI corpus (Bowman et al., 2015) as 550k examples of auxiliary training data.
>
>
>
+ Metrics: Accuracy
* Dataset: XNLI (Conneau et al., 2018), the extension of the Multi-Genre Natural Language Inference (MNLI) corpus to 15 languages: English, French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi, Swahili and Urdu. See the XNLI data card or Conneau et al. (2018) for further information.
+ Tasks: Translate-test (e.g., the model is used to translate input sentences in other languages to the training language)
+ Metrics: Accuracy
#### Results
GLUE test results (dev set, single model, single-task fine-tuning): 90.2 on MNLI
XNLI test results:
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper.
* Hardware Type: 1024 V100 GPUs
* Hours used: 24 hours (one day)
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
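As a rough illustration of how such an estimate is formed from the figures above, the arithmetic is energy (GPU count × hours × average power draw) multiplied by the grid's carbon intensity; the 300 W per-GPU draw and the 0.432 kgCO2eq/kWh intensity used below are placeholder assumptions, not reported values:

```python
# Back-of-envelope sketch in the spirit of the ML Impact calculator.
num_gpus = 1024           # reported hardware
hours = 24                # reported training time
power_draw_kw = 0.300     # assumed average per-V100 draw (not reported)
carbon_intensity = 0.432  # assumed kgCO2eq per kWh (region unknown)

energy_kwh = num_gpus * hours * power_draw_kw   # ~7,372.8 kWh
emissions_kg = energy_kwh * carbon_intensity    # ~3,185 kgCO2eq
print(f"~{energy_kwh:.0f} kWh, ~{emissions_kg:.0f} kgCO2eq")
```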
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.
| [
"#### Direct Use\n\n\nThis fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification (see the GitHub repo for examples) and zero-shot sequence classification.",
"#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The RoBERTa large model card notes that: \"The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral.\"\n\n\nPredictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThis model was fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus. Also see the MNLI data card for more information.\n\n\nAs described in the RoBERTa large model card:\n\n\n\n> \n> The RoBERTa model was pretrained on the reunion of five datasets:\n> \n> \n> * BookCorpus, a dataset consisting of 11,038 unpublished books;\n> * English Wikipedia (excluding lists, tables and headers) ;\n> * CC-News, a dataset containing 63 millions English news articles crawled between September 2016 and February 2019.\n> * OpenWebText, an opensource recreation of the WebText dataset used to train GPT-2,\n> * Stories, a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas.\n> \n> \n> Together theses datasets weight 160GB of text.\n> \n> \n> \n\n\nAlso see the bookcorpus data card and the wikipedia data card for additional information.",
"#### Training Procedure",
"##### Preprocessing\n\n\nAs described in the RoBERTa large model card:\n\n\n\n> \n> The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of\n> the model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked\n> with '~~' and the end of one by '~~'\n> \n> \n> The details of the masking procedure for each sentence are the following:\n> \n> \n> * 15% of the tokens are masked.\n> * In 80% of the cases, the masked tokens are replaced by ''.\n> * In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n> * In the 10% remaining cases, the masked tokens are left as is.\n> \n> \n> Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).\n> \n> \n>",
"##### Pretraining\n\n\nAlso as described in the RoBERTa large model card:\n\n\n\n> \n> The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The\n> optimizer used is Adam with a learning rate of 4e-4, \\(\\beta\\_{1} = 0.9\\), \\(\\beta\\_{2} = 0.98\\) and\n> \\(\\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning\n> rate after.\n> \n> \n> \n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated GitHub repo for RoBERTa.",
"#### Testing Data, Factors and Metrics\n\n\nThe model developers report that the model was evaluated on the following tasks and datasets using the listed metrics:\n\n\n* Dataset: Part of GLUE (Wang et al., 2019), the General Language Understanding Evaluation benchmark, a collection of 9 datasets for evaluating natural language understanding systems. Specifically, the model was evaluated on the Multi-Genre Natural Language Inference (MNLI) corpus. See the GLUE data card or Wang et al. (2019) for further information.\n\n\n\t+ Tasks: NLI. Wang et al. (2019) describe the inference task for MNLI as:\n> \n> The Multi-Genre Natural Language Inference Corpus (Williams et al., 2018) is a crowd-sourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. We use the standard test set, for which we obtained private labels from the authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. We also use and recommend the SNLI corpus (Bowman et al., 2015) as 550k examples of auxiliary training data.\n> \n> \n> \n\n\n\t+ Metrics: Accuracy\n* Dataset: XNLI (Conneau et al., 2018), the extension of the Multi-Genre Natural Language Inference (MNLI) corpus to 15 languages: English, French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi, Swahili and Urdu. See the XNLI data card or Conneau et al. (2018) for further information.\n\n\n\t+ Tasks: Translate-test (e.g., the model is used to translate input sentences in other languages to the training language)\n\t+ Metrics: Accuracy",
"#### Results\n\n\nGLUE test results (dev set, single model, single-task fine-tuning): 90.2 on MNLI\n\n\nXNLI test results:\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper.\n\n\n* Hardware Type: 1024 V100 GPUs\n* Hours used: 24 hours (one day)\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details."
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #roberta #text-classification #autogenerated-modelcard #en #dataset-multi_nli #dataset-wikipedia #dataset-bookcorpus #arxiv-1907.11692 #arxiv-1806.02847 #arxiv-1804.07461 #arxiv-1704.05426 #arxiv-1508.05326 #arxiv-1809.05053 #arxiv-1910.09700 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"#### Direct Use\n\n\nThis fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification (see the GitHub repo for examples) and zero-shot sequence classification.",
"#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The RoBERTa large model card notes that: \"The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral.\"\n\n\nPredictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThis model was fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus. Also see the MNLI data card for more information.\n\n\nAs described in the RoBERTa large model card:\n\n\n\n> \n> The RoBERTa model was pretrained on the reunion of five datasets:\n> \n> \n> * BookCorpus, a dataset consisting of 11,038 unpublished books;\n> * English Wikipedia (excluding lists, tables and headers) ;\n> * CC-News, a dataset containing 63 millions English news articles crawled between September 2016 and February 2019.\n> * OpenWebText, an opensource recreation of the WebText dataset used to train GPT-2,\n> * Stories, a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas.\n> \n> \n> Together theses datasets weight 160GB of text.\n> \n> \n> \n\n\nAlso see the bookcorpus data card and the wikipedia data card for additional information.",
"#### Training Procedure",
"##### Preprocessing\n\n\nAs described in the RoBERTa large model card:\n\n\n\n> \n> The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of\n> the model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked\n> with '~~' and the end of one by '~~'\n> \n> \n> The details of the masking procedure for each sentence are the following:\n> \n> \n> * 15% of the tokens are masked.\n> * In 80% of the cases, the masked tokens are replaced by ''.\n> * In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n> * In the 10% remaining cases, the masked tokens are left as is.\n> \n> \n> Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).\n> \n> \n>",
"##### Pretraining\n\n\nAlso as described in the RoBERTa large model card:\n\n\n\n> \n> The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The\n> optimizer used is Adam with a learning rate of 4e-4, \\(\\beta\\_{1} = 0.9\\), \\(\\beta\\_{2} = 0.98\\) and\n> \\(\\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning\n> rate after.\n> \n> \n> \n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated GitHub repo for RoBERTa.",
"#### Testing Data, Factors and Metrics\n\n\nThe model developers report that the model was evaluated on the following tasks and datasets using the listed metrics:\n\n\n* Dataset: Part of GLUE (Wang et al., 2019), the General Language Understanding Evaluation benchmark, a collection of 9 datasets for evaluating natural language understanding systems. Specifically, the model was evaluated on the Multi-Genre Natural Language Inference (MNLI) corpus. See the GLUE data card or Wang et al. (2019) for further information.\n\n\n\t+ Tasks: NLI. Wang et al. (2019) describe the inference task for MNLI as:\n> \n> The Multi-Genre Natural Language Inference Corpus (Williams et al., 2018) is a crowd-sourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. We use the standard test set, for which we obtained private labels from the authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. We also use and recommend the SNLI corpus (Bowman et al., 2015) as 550k examples of auxiliary training data.\n> \n> \n> \n\n\n\t+ Metrics: Accuracy\n* Dataset: XNLI (Conneau et al., 2018), the extension of the Multi-Genre Natural Language Inference (MNLI) corpus to 15 languages: English, French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi, Swahili and Urdu. See the XNLI data card or Conneau et al. (2018) for further information.\n\n\n\t+ Tasks: Translate-test (e.g., the model is used to translate input sentences in other languages to the training language)\n\t+ Metrics: Accuracy",
"#### Results\n\n\nGLUE test results (dev set, single model, single-task fine-tuning): 90.2 on MNLI\n\n\nXNLI test results:\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper.\n\n\n* Hardware Type: 1024 V100 GPUs\n* Hours used: 24 hours (one day)\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details."
] |
text-classification | transformers |
# RoBERTa Large OpenAI Detector
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:** RoBERTa large OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa large model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the [largest GPT-2 model](https://huggingface.co/gpt2-xl), the 1.5B parameter version.
- **Developed by:** OpenAI, see [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector) and [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for full author list
- **Model Type:** Fine-tuned transformer-based language model
- **Language(s):** English
- **License:** MIT
- **Related Models:** [RoBERTa large](https://huggingface.co/roberta-large), [GPT-2 XL (the 1.5B parameter version)](https://huggingface.co/gpt2-xl), [GPT-2 Large (the 774M parameter version)](https://huggingface.co/gpt2-large), [GPT-2 Medium (the 355M parameter version)](https://huggingface.co/gpt2-medium) and [GPT-2 (the 124M parameter version)](https://huggingface.co/gpt2)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) (see, in particular, the section beginning on page 12 about Automated ML-based detection).
- [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector)
- [OpenAI Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
- [Explore the detector model here](https://huggingface.co/openai-detector )
## Uses
#### Direct Use
The model is a classifier that can be used to detect text generated by GPT-2 models.
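A minimal sketch of that check, assuming the checkpoint id used for this release and the standard `text-classification` pipeline (the exact label names should be read from the checkpoint's `id2label` mapping rather than taken from the comment below):

```python
from transformers import pipeline

# Assumed checkpoint id; the detector is an ordinary sequence classifier,
# so the generic text-classification pipeline applies.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-large-openai-detector",
)

text = "This passage may or may not have been written by GPT-2."
print(detector(text))  # e.g. [{'label': '...', 'score': ...}]
```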
#### Downstream Use
The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further discussion.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
In their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research.
In a related [blog post](https://openai.com/blog/gpt-2-1-5b-release/), the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write:
> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective.
The model developers also [report](https://openai.com/blog/gpt-2-1-5b-release/) finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.
#### Bias
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by RoBERTa large and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the [RoBERTa large](https://huggingface.co/roberta-large) and [GPT-2 XL](https://huggingface.co/gpt2-xl) model cards for more information). The developers of this model discuss these issues further in their [paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).
## Training
#### Training Data
The model is a sequence classifier based on RoBERTa large (see the [RoBERTa large model card](https://huggingface.co/roberta-large) for more details on the RoBERTa large training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available [here](https://github.com/openai/gpt-2-output-dataset)).
#### Training Procedure
The model developers write that:
> We based a sequence classifier on RoBERTaLARGE (355 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.
They later state:
> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the training procedure.
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).
#### Testing Data, Factors and Metrics
The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by:
> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.
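A hedged sketch of how a balanced accuracy measurement of this kind could be scripted (the two example texts stand in for the held-out WebText and GPT-2 samples, `detector` is the pipeline from the sketch above, and the "Fake" label for machine-generated text is an assumption that should be checked against the checkpoint's `id2label`):

```python
# Hypothetical balanced test set: (text, is_generated) pairs standing in for
# the 5,000 WebText samples and 5,000 GPT-2 samples described above.
examples = [
    ("A paragraph sampled from WebText ...", False),     # placeholder text
    ("A paragraph sampled from GPT-2 output ...", True), # placeholder text
]

def predicted_generated(text):
    # Assumption: the top label for machine-generated text is "Fake".
    return detector(text)[0]["label"] == "Fake"

correct = sum(predicted_generated(t) == y for t, y in examples)
print(f"accuracy: {correct / len(examples):.2f}")
```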
#### Results
The model developers [find](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf):
> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling ([Holtzman et al., 2019](https://arxiv.org/abs/1904.09751)). Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling.
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), Figure 1 (on page 14) and Figure 2 (on page 16) for full results.
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
The model developers write that:
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the modeling architecture and training details.
## Citation Information
```bibtex
@article{solaiman2019release,
title={Release strategies and the social impacts of language models},
author={Solaiman, Irene and Brundage, Miles and Clark, Jack and Askell, Amanda and Herbert-Voss, Ariel and Wu, Jeff and Radford, Alec and Krueger, Gretchen and Kim, Jong Wook and Kreps, Sarah and others},
journal={arXiv preprint arXiv:1908.09203},
year={2019}
}
```
APA:
- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.
https://huggingface.co/papers/1908.09203
## Model Card Authors
This model card was written by the team at Hugging Face.
## How to Get Started with the Model
More information needed | {"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]} | openai-community/roberta-large-openai-detector | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1904.09751",
"arxiv:1910.09700",
"arxiv:1908.09203",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1904.09751",
"1910.09700",
"1908.09203"
] | [
"en"
] | TAGS
#transformers #pytorch #jax #roberta #text-classification #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1904.09751 #arxiv-1910.09700 #arxiv-1908.09203 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# RoBERTa Large OpenAI Detector
## Table of Contents
- Model Details
- Uses
- Risks, Limitations and Biases
- Training
- Evaluation
- Environmental Impact
- Technical Specifications
- Citation Information
- Model Card Authors
- How To Get Started With the Model
## Model Details
Model Description: RoBERTa large OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa large model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the largest GPT-2 model, the 1.5B parameter version.
- Developed by: OpenAI, see GitHub Repo and associated paper for full author list
- Model Type: Fine-tuned transformer-based language model
- Language(s): English
- License: MIT
- Related Models: RoBERTa large, GPT-2 XL (the 1.5B parameter version), GPT-2 Large (the 774M parameter version), GPT-2 Medium (the 355M parameter version) and GPT-2 (the 124M parameter version)
- Resources for more information:
- Research Paper (see, in particular, the section beginning on page 12 about Automated ML-based detection).
- GitHub Repo
- OpenAI Blog Post
- Explore the detector model here
## Uses
#### Direct Use
The model is a classifier that can be used to detect text generated by GPT-2 models.
#### Downstream Use
The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the associated paper for further discussion.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their associated paper, suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model.
## Risks, Limitations and Biases
CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
In their associated paper, the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research.
In a related blog post, the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write:
> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective.
The model developers also report finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.
#### Bias
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by RoBERTa large and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the RoBERTa large and GPT-2 XL model cards for more information). The developers of this model discuss these issues further in their paper.
## Training
#### Training Data
The model is a sequence classifier based on RoBERTa large (see the RoBERTa large model card for more details on the RoBERTa large training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available here).
#### Training Procedure
The model developers write that:
> We based a sequence classifier on RoBERTaLARGE (355 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.
They later state:
> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.
See the associated paper for further details on the training procedure.
## Evaluation
The following evaluation information is extracted from the associated paper.
#### Testing Data, Factors and Metrics
The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by:
> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.
#### Results
The model developers find:
> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling (Holtzman et al., 2019). Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling.
See the associated paper, Figure 1 (on page 14) and Figure 2 (on page 16) for full results.
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: Unknown
- Hours used: Unknown
- Cloud Provider: Unknown
- Compute Region: Unknown
- Carbon Emitted: Unknown
## Technical Specifications
The model developers write that:
See the associated paper for further details on the modeling architecture and training details.
APA:
- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.
URL
## Model Card Authors
This model card was written by the team at Hugging Face.
## How to Get Started with the Model
More information needed | [
"# RoBERTa Large OpenAI Detector",
"## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- Technical Specifications\n- Citation Information\n- Model Card Authors\n- How To Get Started With the Model",
"## Model Details\n\nModel Description: RoBERTa large OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa large model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the largest GPT-2 model, the 1.5B parameter version. \n\n- Developed by: OpenAI, see GitHub Repo and associated paper for full author list\n- Model Type: Fine-tuned transformer-based language model\n- Language(s): English\n- License: MIT\n- Related Models: RoBERTa large, GPT-XL (1.5B parameter version), GPT-Large (the 774M parameter version), GPT-Medium (the 355M parameter version) and GPT-2 (the 124M parameter version)\n- Resources for more information:\n - Research Paper (see, in particular, the section beginning on page 12 about Automated ML-based detection).\n - GitHub Repo\n - OpenAI Blog Post\n - Explore the detector model here",
"## Uses",
"#### Direct Use\n\nThe model is a classifier that can be used to detect text generated by GPT-2 models.",
"#### Downstream Use\n\nThe model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the associated paper for further discussion.",
"#### Misuse and Out-of-scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their associated paper, suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model.",
"## Risks, Limitations and Biases\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\nIn their associated paper, the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research. \n\nIn a related blog post, the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write: \n\n> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective. \n\nThe model developers also report finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.",
"#### Bias\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by RoBERTa large and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the RoBERTa large and GPT-2 XL model cards for more information). The developers of this model discuss these issues further in their paper.",
"## Training",
"#### Training Data\n\nThe model is a sequence classifier based on RoBERTa large (see the RoBERTa large model card for more details on the RoBERTa large training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available here).",
"#### Training Procedure\n\nThe model developers write that: \n\n> We based a sequence classifier on RoBERTaLARGE (355 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.\n\nThey later state: \n\n> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.\n\nSee the associated paper for further details on the training procedure.",
"## Evaluation\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\nThe model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by: \n\n> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.",
"#### Results\n\nThe model developers find: \n\n> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling (Holtzman et al., 2019. Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling. \t\n\nSee the associated paper, Figure 1 (on page 14) and Figure 2 (on page 16) for full results.",
"## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Unknown\n- Hours used: Unknown\n- Cloud Provider: Unknown\n- Compute Region: Unknown\n- Carbon Emitted: Unknown",
"## Technical Specifications\n\nThe model developers write that: \n\nSee the associated paper for further details on the modeling architecture and training details.\n\n\n\n\n\nAPA: \n- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.\n\nURL",
"## Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"## How to Get Started with the Model \n\nMore information needed"
] | [
"TAGS\n#transformers #pytorch #jax #roberta #text-classification #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1904.09751 #arxiv-1910.09700 #arxiv-1908.09203 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# RoBERTa Large OpenAI Detector",
"## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- Technical Specifications\n- Citation Information\n- Model Card Authors\n- How To Get Started With the Model",
"## Model Details\n\nModel Description: RoBERTa large OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa large model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the largest GPT-2 model, the 1.5B parameter version. \n\n- Developed by: OpenAI, see GitHub Repo and associated paper for full author list\n- Model Type: Fine-tuned transformer-based language model\n- Language(s): English\n- License: MIT\n- Related Models: RoBERTa large, GPT-XL (1.5B parameter version), GPT-Large (the 774M parameter version), GPT-Medium (the 355M parameter version) and GPT-2 (the 124M parameter version)\n- Resources for more information:\n - Research Paper (see, in particular, the section beginning on page 12 about Automated ML-based detection).\n - GitHub Repo\n - OpenAI Blog Post\n - Explore the detector model here",
"## Uses",
"#### Direct Use\n\nThe model is a classifier that can be used to detect text generated by GPT-2 models.",
"#### Downstream Use\n\nThe model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the associated paper for further discussion.",
"#### Misuse and Out-of-scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their associated paper, suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model.",
"## Risks, Limitations and Biases\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"#### Risks and Limitations\n\nIn their associated paper, the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research. \n\nIn a related blog post, the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write: \n\n> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective. \n\nThe model developers also report finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.",
"#### Bias\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by RoBERTa large and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the RoBERTa large and GPT-2 XL model cards for more information). The developers of this model discuss these issues further in their paper.",
"## Training",
"#### Training Data\n\nThe model is a sequence classifier based on RoBERTa large (see the RoBERTa large model card for more details on the RoBERTa large training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available here).",
"#### Training Procedure\n\nThe model developers write that: \n\n> We based a sequence classifier on RoBERTaLARGE (355 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.\n\nThey later state: \n\n> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.\n\nSee the associated paper for further details on the training procedure.",
"## Evaluation\n\nThe following evaluation information is extracted from the associated paper.",
"#### Testing Data, Factors and Metrics\n\nThe model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by: \n\n> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.",
"#### Results\n\nThe model developers find: \n\n> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling (Holtzman et al., 2019. Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling. \t\n\nSee the associated paper, Figure 1 (on page 14) and Figure 2 (on page 16) for full results.",
"## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Unknown\n- Hours used: Unknown\n- Cloud Provider: Unknown\n- Compute Region: Unknown\n- Carbon Emitted: Unknown",
"## Technical Specifications\n\nThe model developers write that: \n\nSee the associated paper for further details on the modeling architecture and training details.\n\n\n\n\n\nAPA: \n- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.\n\nURL",
"## Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"## How to Get Started with the Model \n\nMore information needed"
] |
fill-mask | transformers |
# RoBERTa large model
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it
makes a difference between english and English.
Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=roberta) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='roberta-large')
>>> unmasker("Hello I'm a <mask> model.")
[{'sequence': "<s>Hello I'm a male model.</s>",
'score': 0.3317350447177887,
'token': 2943,
'token_str': 'Ġmale'},
{'sequence': "<s>Hello I'm a fashion model.</s>",
'score': 0.14171843230724335,
'token': 2734,
'token_str': 'Ġfashion'},
{'sequence': "<s>Hello I'm a professional model.</s>",
'score': 0.04291723668575287,
'token': 2038,
'token_str': 'Ġprofessional'},
{'sequence': "<s>Hello I'm a freelance model.</s>",
'score': 0.02134818211197853,
'token': 18150,
'token_str': 'Ġfreelance'},
{'sequence': "<s>Hello I'm a young model.</s>",
'score': 0.021098261699080467,
'token': 664,
'token_str': 'Ġyoung'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = RobertaModel.from_pretrained('roberta-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = TFRobertaModel.from_pretrained('roberta-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='roberta-large')
>>> unmasker("The man worked as a <mask>.")
[{'sequence': '<s>The man worked as a mechanic.</s>',
'score': 0.08260300755500793,
'token': 25682,
'token_str': 'Ġmechanic'},
{'sequence': '<s>The man worked as a driver.</s>',
'score': 0.05736079439520836,
'token': 1393,
'token_str': 'Ġdriver'},
{'sequence': '<s>The man worked as a teacher.</s>',
'score': 0.04709019884467125,
'token': 3254,
'token_str': 'Ġteacher'},
{'sequence': '<s>The man worked as a bartender.</s>',
'score': 0.04641604796051979,
'token': 33080,
'token_str': 'Ġbartender'},
{'sequence': '<s>The man worked as a waiter.</s>',
'score': 0.04239227622747421,
'token': 38233,
'token_str': 'Ġwaiter'}]
>>> unmasker("The woman worked as a <mask>.")
[{'sequence': '<s>The woman worked as a nurse.</s>',
'score': 0.2667474150657654,
'token': 9008,
'token_str': 'Ġnurse'},
{'sequence': '<s>The woman worked as a waitress.</s>',
'score': 0.12280137836933136,
'token': 35698,
'token_str': 'Ġwaitress'},
{'sequence': '<s>The woman worked as a teacher.</s>',
'score': 0.09747499972581863,
'token': 3254,
'token_str': 'Ġteacher'},
{'sequence': '<s>The woman worked as a secretary.</s>',
'score': 0.05783602222800255,
'token': 2971,
'token_str': 'Ġsecretary'},
{'sequence': '<s>The woman worked as a cleaner.</s>',
'score': 0.05576248839497566,
'token': 16126,
'token_str': 'Ġcleaner'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The RoBERTa model was pretrained on the combination of five datasets:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books;
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers) ;
- [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 million English news
articles crawled between September 2016 and February 2019.
- [OpenWebText](https://github.com/jcpeterson/openwebtext), an opensource recreation of the WebText dataset used to
train GPT-2,
- [Stories](https://arxiv.org/abs/1806.02847), a dataset containing a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas.
Together, these datasets weigh 160GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of
the model take pieces of 512 contiguous tokens that may span multiple documents. The beginning of a new document is marked
with `<s>` and the end of one by `</s>`
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
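As an illustration of this 80/10/10 scheme, the following is a simplified sketch (not the original fairseq pretraining code; special-token handling and sentence packing are omitted, and the `<mask>` id shown is the one used by the released tokenizer):

```python
import random

def dynamic_mask(token_ids, mask_id=50264, vocab_size=50265, mask_prob=0.15):
    """Simplified RoBERTa-style dynamic masking with the 80/10/10 split."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100: ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() >= mask_prob:
            continue
        labels[i] = tok                                   # predict the original token here
        roll = random.random()
        if roll < 0.8:
            inputs[i] = mask_id                           # 80%: replace with <mask>
        elif roll < 0.9:
            inputs[i] = random.randrange(vocab_size)      # 10%: replace with a random token
        # remaining 10%: leave the token unchanged
    return inputs, labels

# Because masking happens on the fly, repeated calls on the same sentence
# produce different masking patterns (the "dynamic" part):
example = [10, 42, 7, 99, 5]  # arbitrary token ids for illustration
print(dynamic_mask(example))
print(dynamic_mask(example))
```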
### Pretraining
The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
optimizer used is Adam with a learning rate of 4e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and
\\(\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning
rate after.
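Expressed as a configuration sketch (PyTorch/`transformers`-style stand-ins for illustration only; the original pretraining used fairseq, and data loading plus the distributed setup are omitted):

```python
import torch
from transformers import RobertaConfig, RobertaForMaskedLM, get_linear_schedule_with_warmup

# Fresh roberta-large-sized model; pretraining starts from random weights.
model = RobertaForMaskedLM(RobertaConfig.from_pretrained("roberta-large"))

# Hyperparameters as reported above (AdamW here stands in for fairseq's
# Adam-with-weight-decay implementation).
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=4e-4,
    betas=(0.9, 0.98),
    eps=1e-6,
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=30_000,
    num_training_steps=500_000,
)
```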
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
GLUE test results:
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| | 90.2 | 92.2 | 94.7 | 96.4 | 68.0 | 96.4 | 90.9 | 86.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=roberta-base">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]} | FacebookAI/roberta-large | null | [
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"roberta",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1907.11692",
"arxiv:1806.02847",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1907.11692",
"1806.02847"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #onnx #safetensors #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1907.11692 #arxiv-1806.02847 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| RoBERTa large model
===================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is case-sensitive: it
makes a difference between english and English.
Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
Intended uses & limitations
---------------------------
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the model hub to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
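The code blocks were stripped from this copy of the card; a minimal sketch of the three usages referenced above (assuming the `roberta-large` checkpoint and a standard `transformers` install with PyTorch and TensorFlow available) is:
```python
from transformers import pipeline, RobertaTokenizer, RobertaModel, TFRobertaModel

# Masked language modeling with a pipeline
unmasker = pipeline("fill-mask", model="roberta-large")
unmasker("Hello I'm a <mask> model.")

# Extracting features in PyTorch
tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
model = RobertaModel.from_pretrained("roberta-large")
encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
output = model(**encoded_input)

# Extracting features in TensorFlow
tf_model = TFRobertaModel.from_pretrained("roberta-large")
encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="tf")
output = tf_model(encoded_input)
```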
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The RoBERTa model was pretrained on the union of five datasets:
* BookCorpus, a dataset consisting of 11,038 unpublished books;
* English Wikipedia (excluding lists, tables and headers);
* CC-News, a dataset containing 63 million English news
articles crawled between September 2016 and February 2019;
* OpenWebText, an open-source recreation of the WebText dataset used to
train GPT-2;
* Stories, a dataset containing a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas.
Together, these datasets amount to 160GB of text.
Training procedure
------------------
### Preprocessing
The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) with a vocabulary size of 50,000. The inputs of
the model take pieces of 512 contiguous tokens that may span documents. The beginning of a new document is marked
with '<s>' and the end of one with '</s>'.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '<mask>'.
* In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
* In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (i.e., it changes at each epoch and is not fixed).
### Pretraining
The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
optimizer used is Adam with a learning rate of 4e-4, \(\beta\_{1} = 0.9\), \(\beta\_{2} = 0.98\) and
\(\epsilon = 1e-6\), a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning
rate after.
Evaluation results
------------------
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from\nneutral. Therefore, the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe RoBERTa model was pretrained on the reunion of five datasets:\n\n\n* BookCorpus, a dataset consisting of 11,038 unpublished books;\n* English Wikipedia (excluding lists, tables and headers) ;\n* CC-News, a dataset containing 63 millions English news\narticles crawled between September 2016 and February 2019.\n* OpenWebText, an opensource recreation of the WebText dataset used to\ntrain GPT-2,\n* Stories a dataset containing a subset of CommonCrawl data filtered to match the\nstory-like style of Winograd schemas.\n\n\nTogether theses datasets weight 160GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of\nthe model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked\nwith '~~' and the end of one by '~~'\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by ''.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nContrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).",
"### Pretraining\n\n\nThe model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The\noptimizer used is Adam with a learning rate of 4e-4, \\(\\beta\\_{1} = 0.9\\), \\(\\beta\\_{2} = 0.98\\) and\n\\(\\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning\nrate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] | [
"TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #roberta #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1907.11692 #arxiv-1806.02847 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from\nneutral. Therefore, the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe RoBERTa model was pretrained on the reunion of five datasets:\n\n\n* BookCorpus, a dataset consisting of 11,038 unpublished books;\n* English Wikipedia (excluding lists, tables and headers) ;\n* CC-News, a dataset containing 63 millions English news\narticles crawled between September 2016 and February 2019.\n* OpenWebText, an opensource recreation of the WebText dataset used to\ntrain GPT-2,\n* Stories a dataset containing a subset of CommonCrawl data filtered to match the\nstory-like style of Winograd schemas.\n\n\nTogether theses datasets weight 160GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of\nthe model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked\nwith '~~' and the end of one by '~~'\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by ''.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nContrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).",
"### Pretraining\n\n\nThe model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The\noptimizer used is Adam with a learning rate of 4e-4, \\(\\beta\\_{1} = 0.9\\), \\(\\beta\\_{2} = 0.98\\) and\n\\(\\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning\nrate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
translation | transformers |
# Model Card for T5 11B
![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-11B is the checkpoint with 11 billion parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
- [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
- [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)
# Uses
## Direct Use and Downstream Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
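As an illustrative sketch of the text-to-text interface (the task prefix follows the Hugging Face T5 docs; note that loading `t5-11b` requires tens of GB of memory, so a smaller checkpoint may be more practical for experiments):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-11b")
model = T5ForConditionalGeneration.from_pretrained("t5-11b")

# Every task is expressed as text-in, text-out through a task prefix.
input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="pt"
).input_ids
outputs = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```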
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
The following datasets were used for (1.) and (2.):
1. **Datasets used for Unsupervised denoising objective**:
- [C4](https://huggingface.co/datasets/c4)
- [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)
2. **Datasets used for Supervised text-to-text language modeling objective**
- Sentence acceptability judgment
- CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
- Sentiment analysis
- SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- Paraphrasing/sentence similarity
- MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
  - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
- QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- Natural language inference
- MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
- QNLI [Rajpurkar et al.,2016](https://arxiv.org/abs/1606.05250)
- RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
- CB [De Marneff et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
- Sentence completion
- COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
- Word sense disambiguation
- WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
- Question answering
- MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
- ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
- BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
## Training Procedure
In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
## Results
For full results for T5-11B, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
## Disclaimer
**Before `transformers` v3.5.0**, due to its immense size, `t5-11b` required some special treatment.
If you're using transformers `<= v3.4.0`, `t5-11b` should be loaded with flag `use_cdn` set to `False` as follows:
```python
import transformers
t5 = transformers.T5ForConditionalGeneration.from_pretrained('t5-11b', use_cdn=False)
```
Secondly, a single GPU will most likely not have enough memory to even load the model, as the weights alone amount to over 40 GB.
- Model parallelism has to be used here to overcome this problem as is explained in this [PR](https://github.com/huggingface/transformers/pull/3578).
- DeepSpeed's ZeRO-Offload is another approach as explained in this [post](https://github.com/huggingface/transformers/issues/9996).
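A newer alternative to the two approaches above (an assumption beyond what this card documents: it relies on recent `transformers` releases with the `accelerate` package installed) is to let `from_pretrained` shard the checkpoint automatically:
```python
from transformers import T5ForConditionalGeneration

# Requires `accelerate`; the ~40 GB of weights are spread across GPUs and CPU RAM.
model = T5ForConditionalGeneration.from_pretrained(
    "t5-11b",
    device_map="auto",   # automatic model parallelism / offload
    torch_dtype="auto",
)
```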
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more context.
| {"language": ["en", "fr", "ro", "de", "multilingual"], "license": "apache-2.0", "tags": ["summarization", "translation"], "datasets": ["c4"], "inference": false} | google-t5/t5-11b | null | [
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:c4",
"arxiv:1805.12471",
"arxiv:1708.00055",
"arxiv:1704.05426",
"arxiv:1606.05250",
"arxiv:1808.09121",
"arxiv:1810.12885",
"arxiv:1905.10044",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1805.12471",
"1708.00055",
"1704.05426",
"1606.05250",
"1808.09121",
"1810.12885",
"1905.10044",
"1910.09700"
] | [
"en",
"fr",
"ro",
"de",
"multilingual"
] | TAGS
#transformers #pytorch #tf #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
# Model Card for T5 11B
!model image
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training Details
5. Evaluation
6. Environmental Impact
7. Citation
8. Model Card Authors
9. How To Get Started With the Model
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) write:
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-11B is the checkpoint with 11 billion parameters.
- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo
- Model type: Language model
- Language(s) (NLP): English, French, Romanian, German
- License: Apache 2.0
- Related Models: All T5 Checkpoints
- Resources for more information:
- Research paper
- Google's T5 Blog Post
- GitHub Repo
- Hugging Face T5 Docs
# Uses
## Direct Use and Downstream Use
The developers write in a blog post that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the blog post and research paper for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.
The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
The following datasets were used for (1.) and (2.):
1. Datasets used for Unsupervised denoising objective:
- C4
- Wiki-DPR
2. Datasets used for Supervised text-to-text language modeling objective
- Sentence acceptability judgment
- CoLA Warstadt et al., 2018
- Sentiment analysis
- SST-2 Socher et al., 2013
- Paraphrasing/sentence similarity
- MRPC Dolan and Brockett, 2005
  - STS-B Cer et al., 2017
- QQP Iyer et al., 2017
- Natural language inference
- MNLI Williams et al., 2017
- QNLI Rajpurkar et al.,2016
- RTE Dagan et al., 2005
- CB De Marneff et al., 2019
- Sentence completion
- COPA Roemmele et al., 2011
- Word sense disambiguation
- WIC Pilehvar and Camacho-Collados, 2018
- Question answering
- MultiRC Khashabi et al., 2018
- ReCoRD Zhang et al., 2018
- BoolQ Clark et al., 2019
## Training Procedure
In their abstract, the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks, see the research paper for full details.
## Results
For full results for T5-11B, see the research paper, Table 14.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: Google Cloud TPU Pods
- Hours used: More information needed
- Cloud Provider: GCP
- Compute Region: More information needed
- Carbon Emitted: More information needed
BibTeX:
APA:
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
## Disclaimer
Before 'transformers' v3.5.0, due to its immense size, 't5-11b' required some special treatment.
If you're using transformers '<= v3.4.0', 't5-11b' should be loaded with flag 'use_cdn' set to 'False' as follows:
Secondly, a single GPU will most likely not have enough memory to even load the model into memory as the weights alone amount to over 40 GB.
- Model parallelism has to be used here to overcome this problem as is explained in this PR.
- DeepSpeed's ZeRO-Offload is another approach as explained in this post.
See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more context.
| [
"# Model Card for T5 11B\n\n!model image",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe developers of the Text-To-Text Transfer Transformer (T5) write: \n\n> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.\n\nT5-11B is the checkpoint with 11 billion parameters. \n\n- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo\n- Model type: Language model\n- Language(s) (NLP): English, French, Romanian, German\n- License: Apache 2.0\n- Related Models: All T5 Checkpoints\n- Resources for more information:\n - Research paper\n - Google's T5 Blog Post \n - GitHub Repo\n - Hugging Face T5 Docs",
"# Uses",
"## Direct Use and Downstream Use\n\nThe developers write in a blog post that the model: \n\n> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.\n\nSee the blog post and research paper for further details.",
"## Out-of-Scope Use\n\nMore information needed.",
"# Bias, Risks, and Limitations\n\nMore information needed.",
"## Recommendations\n\nMore information needed.",
"# Training Details",
"## Training Data\n\nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n\nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\nThereby, the following datasets were being used for (1.) and (2.):\n\n1. Datasets used for Unsupervised denoising objective:\n\n- C4\n- Wiki-DPR\n\n\n2. Datasets used for Supervised text-to-text language modeling objective\n\n- Sentence acceptability judgment\n - CoLA Warstadt et al., 2018\n- Sentiment analysis \n - SST-2 Socher et al., 2013\n- Paraphrasing/sentence similarity\n - MRPC Dolan and Brockett, 2005\n - STS-B Ceret al., 2017\n - QQP Iyer et al., 2017\n- Natural language inference\n - MNLI Williams et al., 2017\n - QNLI Rajpurkar et al.,2016\n - RTE Dagan et al., 2005 \n - CB De Marneff et al., 2019\n- Sentence completion\n - COPA Roemmele et al., 2011\n- Word sense disambiguation\n - WIC Pilehvar and Camacho-Collados, 2018\n- Question answering\n - MultiRC Khashabi et al., 2018\n - ReCoRD Zhang et al., 2018\n - BoolQ Clark et al., 2019",
"## Training Procedure\n\nIn their abstract, the model developers write: \n\n> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. \n\nThe framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe developers evaluated the model on 24 tasks, see the research paper for full details.",
"## Results \n\nFor full results for T5-11B, see the research paper, Table 14.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Google Cloud TPU Pods\n- Hours used: More information needed\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed\n\nBibTeX:\n\n\n\nAPA:\n- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model",
"## Disclaimer\n\nBefore 'transformers' v3.5.0, due do its immense size, 't5-11b' required some special treatment. \nIf you're using transformers '<= v3.4.0', 't5-11b' should be loaded with flag 'use_cdn' set to 'False' as follows:\n\n\n\nSecondly, a single GPU will most likely not have enough memory to even load the model into memory as the weights alone amount to over 40 GB.\n- Model parallelism has to be used here to overcome this problem as is explained in this PR.\n- DeepSpeed's ZeRO-Offload is another approach as explained in this post.\n\nSee the Hugging Face T5 docs and a Colab Notebook created by the model developers for more context."
] | [
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n",
"# Model Card for T5 11B\n\n!model image",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe developers of the Text-To-Text Transfer Transformer (T5) write: \n\n> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.\n\nT5-11B is the checkpoint with 11 billion parameters. \n\n- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo\n- Model type: Language model\n- Language(s) (NLP): English, French, Romanian, German\n- License: Apache 2.0\n- Related Models: All T5 Checkpoints\n- Resources for more information:\n - Research paper\n - Google's T5 Blog Post \n - GitHub Repo\n - Hugging Face T5 Docs",
"# Uses",
"## Direct Use and Downstream Use\n\nThe developers write in a blog post that the model: \n\n> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.\n\nSee the blog post and research paper for further details.",
"## Out-of-Scope Use\n\nMore information needed.",
"# Bias, Risks, and Limitations\n\nMore information needed.",
"## Recommendations\n\nMore information needed.",
"# Training Details",
"## Training Data\n\nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n\nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\nThereby, the following datasets were being used for (1.) and (2.):\n\n1. Datasets used for Unsupervised denoising objective:\n\n- C4\n- Wiki-DPR\n\n\n2. Datasets used for Supervised text-to-text language modeling objective\n\n- Sentence acceptability judgment\n - CoLA Warstadt et al., 2018\n- Sentiment analysis \n - SST-2 Socher et al., 2013\n- Paraphrasing/sentence similarity\n - MRPC Dolan and Brockett, 2005\n - STS-B Ceret al., 2017\n - QQP Iyer et al., 2017\n- Natural language inference\n - MNLI Williams et al., 2017\n - QNLI Rajpurkar et al.,2016\n - RTE Dagan et al., 2005 \n - CB De Marneff et al., 2019\n- Sentence completion\n - COPA Roemmele et al., 2011\n- Word sense disambiguation\n - WIC Pilehvar and Camacho-Collados, 2018\n- Question answering\n - MultiRC Khashabi et al., 2018\n - ReCoRD Zhang et al., 2018\n - BoolQ Clark et al., 2019",
"## Training Procedure\n\nIn their abstract, the model developers write: \n\n> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. \n\nThe framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe developers evaluated the model on 24 tasks, see the research paper for full details.",
"## Results \n\nFor full results for T5-11B, see the research paper, Table 14.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Google Cloud TPU Pods\n- Hours used: More information needed\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed\n\nBibTeX:\n\n\n\nAPA:\n- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model",
"## Disclaimer\n\nBefore 'transformers' v3.5.0, due do its immense size, 't5-11b' required some special treatment. \nIf you're using transformers '<= v3.4.0', 't5-11b' should be loaded with flag 'use_cdn' set to 'False' as follows:\n\n\n\nSecondly, a single GPU will most likely not have enough memory to even load the model into memory as the weights alone amount to over 40 GB.\n- Model parallelism has to be used here to overcome this problem as is explained in this PR.\n- DeepSpeed's ZeRO-Offload is another approach as explained in this post.\n\nSee the Hugging Face T5 docs and a Colab Notebook created by the model developers for more context."
] |
translation | transformers |
# Model Card for T5-3B
![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-3B is the checkpoint with 3 billion parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
- [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
- [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)
# Uses
## Direct Use and Downstream Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
The following datasets were used for (1.) and (2.):
1. **Datasets used for Unsupervised denoising objective**:
- [C4](https://huggingface.co/datasets/c4)
- [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)
2. **Datasets used for Supervised text-to-text language modeling objective**
- Sentence acceptability judgment
- CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
- Sentiment analysis
- SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- Paraphrasing/sentence similarity
- MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
  - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
- QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- Natural language inference
- MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
- QNLI [Rajpurkar et al.,2016](https://arxiv.org/abs/1606.05250)
- RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
- CB [De Marneff et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
- Sentence completion
- COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
- Word sense disambiguation
- WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
- Question answering
- MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
- ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
- BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
## Training Procedure
In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
## Results
For full results for T5-3B, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more context on how to get started with this checkpoint.
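A minimal, hedged getting-started sketch (the summarization prefix is just one example task; see the linked docs for the full set of prefixes and generation options):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-3b")
model = T5ForConditionalGeneration.from_pretrained("t5-3b")

text = "summarize: T5 casts every NLP problem as a text-to-text problem."
input_ids = tokenizer(text, return_tensors="pt").input_ids
summary_ids = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```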
| {"language": ["en", "fr", "ro", "de", "multilingual"], "license": "apache-2.0", "tags": ["summarization", "translation"], "datasets": ["c4"]} | google-t5/t5-3b | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:c4",
"arxiv:1805.12471",
"arxiv:1708.00055",
"arxiv:1704.05426",
"arxiv:1606.05250",
"arxiv:1808.09121",
"arxiv:1810.12885",
"arxiv:1905.10044",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1805.12471",
"1708.00055",
"1704.05426",
"1606.05250",
"1808.09121",
"1810.12885",
"1905.10044",
"1910.09700"
] | [
"en",
"fr",
"ro",
"de",
"multilingual"
] | TAGS
#transformers #pytorch #tf #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Model Card for T5-3B
!model image
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training Details
5. Evaluation
6. Environmental Impact
7. Citation
8. Model Card Authors
9. How To Get Started With the Model
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) write:
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-3B is the checkpoint with 3 billion parameters.
- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo
- Model type: Language model
- Language(s) (NLP): English, French, Romanian, German
- License: Apache 2.0
- Related Models: All T5 Checkpoints
- Resources for more information:
- Research paper
- Google's T5 Blog Post
- GitHub Repo
- Hugging Face T5 Docs
# Uses
## Direct Use and Downstream Use
The developers write in a blog post that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the blog post and research paper for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.
The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
The following datasets were used for (1.) and (2.):
1. Datasets used for Unsupervised denoising objective:
- C4
- Wiki-DPR
2. Datasets used for Supervised text-to-text language modeling objective
- Sentence acceptability judgment
- CoLA Warstadt et al., 2018
- Sentiment analysis
- SST-2 Socher et al., 2013
- Paraphrasing/sentence similarity
- MRPC Dolan and Brockett, 2005
  - STS-B Cer et al., 2017
- QQP Iyer et al., 2017
- Natural language inference
- MNLI Williams et al., 2017
- QNLI Rajpurkar et al.,2016
- RTE Dagan et al., 2005
- CB De Marneff et al., 2019
- Sentence completion
- COPA Roemmele et al., 2011
- Word sense disambiguation
- WIC Pilehvar and Camacho-Collados, 2018
- Question answering
- MultiRC Khashabi et al., 2018
- ReCoRD Zhang et al., 2018
- BoolQ Clark et al., 2019
## Training Procedure
In their abstract, the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks, see the research paper for full details.
## Results
For full results for T5-3B, see the research paper, Table 14.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: Google Cloud TPU Pods
- Hours used: More information needed
- Cloud Provider: GCP
- Compute Region: More information needed
- Carbon Emitted: More information needed
BibTeX:
APA:
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more context on how to get started with this checkpoint.
| [
"# Model Card for T5-3B\n\n!model image",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe developers of the Text-To-Text Transfer Transformer (T5) write: \n\n> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.\n\nT5-3B is the checkpoint with 3 billion parameters. \n\n- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo\n- Model type: Language model\n- Language(s) (NLP): English, French, Romanian, German\n- License: Apache 2.0\n- Related Models: All T5 Checkpoints\n- Resources for more information:\n - Research paper\n - Google's T5 Blog Post \n - GitHub Repo\n - Hugging Face T5 Docs",
"# Uses",
"## Direct Use and Downstream Use\n\nThe developers write in a blog post that the model: \n\n> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.\n\nSee the blog post and research paper for further details.",
"## Out-of-Scope Use\n\nMore information needed.",
"# Bias, Risks, and Limitations\n\nMore information needed.",
"## Recommendations\n\nMore information needed.",
"# Training Details",
"## Training Data\n\nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n\nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\nThereby, the following datasets were being used for (1.) and (2.):\n\n1. Datasets used for Unsupervised denoising objective:\n\n- C4\n- Wiki-DPR\n\n\n2. Datasets used for Supervised text-to-text language modeling objective\n\n- Sentence acceptability judgment\n - CoLA Warstadt et al., 2018\n- Sentiment analysis \n - SST-2 Socher et al., 2013\n- Paraphrasing/sentence similarity\n - MRPC Dolan and Brockett, 2005\n - STS-B Ceret al., 2017\n - QQP Iyer et al., 2017\n- Natural language inference\n - MNLI Williams et al., 2017\n - QNLI Rajpurkar et al.,2016\n - RTE Dagan et al., 2005 \n - CB De Marneff et al., 2019\n- Sentence completion\n - COPA Roemmele et al., 2011\n- Word sense disambiguation\n - WIC Pilehvar and Camacho-Collados, 2018\n- Question answering\n - MultiRC Khashabi et al., 2018\n - ReCoRD Zhang et al., 2018\n - BoolQ Clark et al., 2019",
"## Training Procedure\n\nIn their abstract, the model developers write: \n\n> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. \n\nThe framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe developers evaluated the model on 24 tasks, see the research paper for full details.",
"## Results \n\nFor full results for T5-3B, see the research paper, Table 14.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Google Cloud TPU Pods\n- Hours used: More information needed\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed\n\nBibTeX:\n\n\n\nAPA:\n- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nSee the Hugging Face T5 docs and a Colab Notebook created by the model developers for more context on how to get started with this checkpoint."
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Model Card for T5-3B\n\n!model image",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe developers of the Text-To-Text Transfer Transformer (T5) write: \n\n> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.\n\nT5-3B is the checkpoint with 3 billion parameters. \n\n- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo\n- Model type: Language model\n- Language(s) (NLP): English, French, Romanian, German\n- License: Apache 2.0\n- Related Models: All T5 Checkpoints\n- Resources for more information:\n - Research paper\n - Google's T5 Blog Post \n - GitHub Repo\n - Hugging Face T5 Docs",
"# Uses",
"## Direct Use and Downstream Use\n\nThe developers write in a blog post that the model: \n\n> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.\n\nSee the blog post and research paper for further details.",
"## Out-of-Scope Use\n\nMore information needed.",
"# Bias, Risks, and Limitations\n\nMore information needed.",
"## Recommendations\n\nMore information needed.",
"# Training Details",
"## Training Data\n\nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n\nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\nThereby, the following datasets were being used for (1.) and (2.):\n\n1. Datasets used for Unsupervised denoising objective:\n\n- C4\n- Wiki-DPR\n\n\n2. Datasets used for Supervised text-to-text language modeling objective\n\n- Sentence acceptability judgment\n - CoLA Warstadt et al., 2018\n- Sentiment analysis \n - SST-2 Socher et al., 2013\n- Paraphrasing/sentence similarity\n - MRPC Dolan and Brockett, 2005\n - STS-B Ceret al., 2017\n - QQP Iyer et al., 2017\n- Natural language inference\n - MNLI Williams et al., 2017\n - QNLI Rajpurkar et al.,2016\n - RTE Dagan et al., 2005 \n - CB De Marneff et al., 2019\n- Sentence completion\n - COPA Roemmele et al., 2011\n- Word sense disambiguation\n - WIC Pilehvar and Camacho-Collados, 2018\n- Question answering\n - MultiRC Khashabi et al., 2018\n - ReCoRD Zhang et al., 2018\n - BoolQ Clark et al., 2019",
"## Training Procedure\n\nIn their abstract, the model developers write: \n\n> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. \n\nThe framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe developers evaluated the model on 24 tasks, see the research paper for full details.",
"## Results \n\nFor full results for T5-3B, see the research paper, Table 14.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Google Cloud TPU Pods\n- Hours used: More information needed\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed\n\nBibTeX:\n\n\n\nAPA:\n- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nSee the Hugging Face T5 docs and a Colab Notebook created by the model developers for more context on how to get started with this checkpoint."
] |
translation | transformers |
# Model Card for T5 Base
![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Base is the checkpoint with 220 million parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
- [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
- [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)
# Uses
## Direct Use and Downstream Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
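As a minimal, hedged sketch of this text-to-text usage (the task prefix and generation settings below follow the conventions described in the T5 paper and the Hugging Face docs, and are illustrative rather than an official recipe):

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# The task is selected with a natural-language prefix on the input text.
input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="pt"
).input_ids

# Every task produces its answer as plain text.
output_ids = model.generate(input_ids, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The same pattern applies to summarization (`"summarize: ..."`) and the other supervised tasks in the training mixture.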
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
The following datasets were used for (1.) and (2.):
1. **Datasets used for Unsupervised denoising objective**:
- [C4](https://huggingface.co/datasets/c4)
- [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)
2. **Datasets used for Supervised text-to-text language modeling objective**
- Sentence acceptability judgment
- CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
- Sentiment analysis
- SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- Paraphrasing/sentence similarity
- MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
  - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
- QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- Natural language inference
- MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
  - QNLI [Rajpurkar et al., 2016](https://arxiv.org/abs/1606.05250)
- RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
  - CB [De Marneffe et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
- Sentence completion
- COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
- Word sense disambiguation
- WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
- Question answering
- MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
- ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
- BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
## Training Procedure
In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
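As an illustrative sketch of the unsupervised denoising (span-corruption) objective used during pre-training (the sentinel-token format follows the Hugging Face T5 docs; the single-example loss below is a simplification of the actual training setup):

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Dropped spans in the input are replaced by sentinel tokens (<extra_id_0>, <extra_id_1>, ...);
# the target reproduces each dropped span preceded by its sentinel.
input_ids = tokenizer(
    "The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt"
).input_ids
labels = tokenizer(
    "<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt"
).input_ids

# Standard cross-entropy loss over the target tokens.
loss = model(input_ids=input_ids, labels=labels).loss
print(loss.item())
```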
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks; see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
## Results
For full results for T5-Base, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5Model
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5Model.from_pretrained("t5-base")
input_ids = tokenizer(
"Studies have been shown that owning a dog is good for you", return_tensors="pt"
).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
```
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples.
</details> | {"language": ["en", "fr", "ro", "de"], "license": "apache-2.0", "tags": ["summarization", "translation"], "datasets": ["c4"], "pipeline_tag": "translation"} | google-t5/t5-base | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"dataset:c4",
"arxiv:1805.12471",
"arxiv:1708.00055",
"arxiv:1704.05426",
"arxiv:1606.05250",
"arxiv:1808.09121",
"arxiv:1810.12885",
"arxiv:1905.10044",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1805.12471",
"1708.00055",
"1704.05426",
"1606.05250",
"1808.09121",
"1810.12885",
"1905.10044",
"1910.09700"
] | [
"en",
"fr",
"ro",
"de"
] | TAGS
#transformers #pytorch #tf #jax #rust #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Model Card for T5 Base
!model image
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training Details
5. Evaluation
6. Environmental Impact
7. Citation
8. Model Card Authors
9. How To Get Started With the Model
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) write:
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Base is the checkpoint with 220 million parameters.
- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo
- Model type: Language model
- Language(s) (NLP): English, French, Romanian, German
- License: Apache 2.0
- Related Models: All T5 Checkpoints
- Resources for more information:
- Research paper
- Google's T5 Blog Post
- GitHub Repo
- Hugging Face T5 Docs
# Uses
## Direct Use and Downstream Use
The developers write in a blog post that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the blog post and research paper for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.
The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
The following datasets were used for (1.) and (2.):
1. Datasets used for Unsupervised denoising objective:
- C4
- Wiki-DPR
2. Datasets used for Supervised text-to-text language modeling objective
- Sentence acceptability judgment
- CoLA Warstadt et al., 2018
- Sentiment analysis
- SST-2 Socher et al., 2013
- Paraphrasing/sentence similarity
- MRPC Dolan and Brockett, 2005
 - STS-B Cer et al., 2017
- QQP Iyer et al., 2017
- Natural language inference
- MNLI Williams et al., 2017
 - QNLI Rajpurkar et al., 2016
- RTE Dagan et al., 2005
 - CB De Marneffe et al., 2019
- Sentence completion
- COPA Roemmele et al., 2011
- Word sense disambiguation
- WIC Pilehvar and Camacho-Collados, 2018
- Question answering
- MultiRC Khashabi et al., 2018
- ReCoRD Zhang et al., 2018
- BoolQ Clark et al., 2019
## Training Procedure
In their abstract, the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks, see the research paper for full details.
## Results
For full results for T5-Base, see the research paper, Table 14.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: Google Cloud TPU Pods
- Hours used: More information needed
- Cloud Provider: GCP
- Compute Region: More information needed
- Carbon Emitted: More information needed
BibTeX:
APA:
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.
</details> | [
"# Model Card for T5 Base\n\n!model image",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe developers of the Text-To-Text Transfer Transformer (T5) write: \n\n> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.\n\nT5-Base is the checkpoint with 220 million parameters. \n\n- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo\n- Model type: Language model\n- Language(s) (NLP): English, French, Romanian, German\n- License: Apache 2.0\n- Related Models: All T5 Checkpoints\n- Resources for more information:\n - Research paper\n - Google's T5 Blog Post \n - GitHub Repo\n - Hugging Face T5 Docs",
"# Uses",
"## Direct Use and Downstream Use\n\nThe developers write in a blog post that the model: \n\n> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.\n\nSee the blog post and research paper for further details.",
"## Out-of-Scope Use\n\nMore information needed.",
"# Bias, Risks, and Limitations\n\nMore information needed.",
"## Recommendations\n\nMore information needed.",
"# Training Details",
"## Training Data\n\nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n\nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\nThereby, the following datasets were being used for (1.) and (2.):\n\n1. Datasets used for Unsupervised denoising objective:\n\n- C4\n- Wiki-DPR\n\n\n2. Datasets used for Supervised text-to-text language modeling objective\n\n- Sentence acceptability judgment\n - CoLA Warstadt et al., 2018\n- Sentiment analysis \n - SST-2 Socher et al., 2013\n- Paraphrasing/sentence similarity\n - MRPC Dolan and Brockett, 2005\n - STS-B Ceret al., 2017\n - QQP Iyer et al., 2017\n- Natural language inference\n - MNLI Williams et al., 2017\n - QNLI Rajpurkar et al.,2016\n - RTE Dagan et al., 2005 \n - CB De Marneff et al., 2019\n- Sentence completion\n - COPA Roemmele et al., 2011\n- Word sense disambiguation\n - WIC Pilehvar and Camacho-Collados, 2018\n- Question answering\n - MultiRC Khashabi et al., 2018\n - ReCoRD Zhang et al., 2018\n - BoolQ Clark et al., 2019",
"## Training Procedure\n\nIn their abstract, the model developers write: \n\n> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. \n\nThe framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe developers evaluated the model on 24 tasks, see the research paper for full details.",
"## Results \n\nFor full results for T5-Base, see the research paper, Table 14.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Google Cloud TPU Pods\n- Hours used: More information needed\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed\n\nBibTeX:\n\n\n\nAPA:\n- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\nSee the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.\n</details>"
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Model Card for T5 Base\n\n!model image",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe developers of the Text-To-Text Transfer Transformer (T5) write: \n\n> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.\n\nT5-Base is the checkpoint with 220 million parameters. \n\n- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo\n- Model type: Language model\n- Language(s) (NLP): English, French, Romanian, German\n- License: Apache 2.0\n- Related Models: All T5 Checkpoints\n- Resources for more information:\n - Research paper\n - Google's T5 Blog Post \n - GitHub Repo\n - Hugging Face T5 Docs",
"# Uses",
"## Direct Use and Downstream Use\n\nThe developers write in a blog post that the model: \n\n> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.\n\nSee the blog post and research paper for further details.",
"## Out-of-Scope Use\n\nMore information needed.",
"# Bias, Risks, and Limitations\n\nMore information needed.",
"## Recommendations\n\nMore information needed.",
"# Training Details",
"## Training Data\n\nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n\nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\nThereby, the following datasets were being used for (1.) and (2.):\n\n1. Datasets used for Unsupervised denoising objective:\n\n- C4\n- Wiki-DPR\n\n\n2. Datasets used for Supervised text-to-text language modeling objective\n\n- Sentence acceptability judgment\n - CoLA Warstadt et al., 2018\n- Sentiment analysis \n - SST-2 Socher et al., 2013\n- Paraphrasing/sentence similarity\n - MRPC Dolan and Brockett, 2005\n - STS-B Ceret al., 2017\n - QQP Iyer et al., 2017\n- Natural language inference\n - MNLI Williams et al., 2017\n - QNLI Rajpurkar et al.,2016\n - RTE Dagan et al., 2005 \n - CB De Marneff et al., 2019\n- Sentence completion\n - COPA Roemmele et al., 2011\n- Word sense disambiguation\n - WIC Pilehvar and Camacho-Collados, 2018\n- Question answering\n - MultiRC Khashabi et al., 2018\n - ReCoRD Zhang et al., 2018\n - BoolQ Clark et al., 2019",
"## Training Procedure\n\nIn their abstract, the model developers write: \n\n> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. \n\nThe framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe developers evaluated the model on 24 tasks, see the research paper for full details.",
"## Results \n\nFor full results for T5-Base, see the research paper, Table 14.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Google Cloud TPU Pods\n- Hours used: More information needed\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed\n\nBibTeX:\n\n\n\nAPA:\n- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\nSee the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.\n</details>"
] |
translation | transformers |
# Model Card for T5 Large
![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Large is the checkpoint with 770 million parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
- [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
- [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)
# Uses
## Direct Use and Downstream Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
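As a minimal sketch of this text-to-text usage with this checkpoint (the `"summarize:"` prefix and generation settings are illustrative conventions from the T5 paper and the Hugging Face docs, not an official recipe; the example passage is made up):

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")

# The "summarize:" prefix selects the summarization task.
article = (
    "summarize: The Text-To-Text Transfer Transformer casts every NLP problem as "
    "mapping an input string to an output string, so translation, summarization, "
    "and classification all share one model and one training objective."
)
input_ids = tokenizer(article, return_tensors="pt").input_ids

# The summary is generated as ordinary text.
output_ids = model.generate(input_ids, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```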
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
The following datasets were used for (1.) and (2.):
1. **Datasets used for Unsupervised denoising objective**:
- [C4](https://huggingface.co/datasets/c4)
- [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)
2. **Datasets used for Supervised text-to-text language modeling objective**
- Sentence acceptability judgment
- CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
- Sentiment analysis
- SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- Paraphrasing/sentence similarity
- MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
  - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
- QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- Natural language inference
- MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
  - QNLI [Rajpurkar et al., 2016](https://arxiv.org/abs/1606.05250)
- RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
  - CB [De Marneffe et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
- Sentence completion
- COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
- Word sense disambiguation
- WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
- Question answering
- MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
- ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
- BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
## Training Procedure
In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks; see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
## Results
For full results for T5-Large, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5Model
tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5Model.from_pretrained("t5-large")
input_ids = tokenizer(
"Studies have been shown that owning a dog is good for you", return_tensors="pt"
).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
```
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples.
</details>
| {"language": ["en", "fr", "ro", "de", "multilingual"], "license": "apache-2.0", "tags": ["summarization", "translation"], "datasets": ["c4"]} | google-t5/t5-large | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:c4",
"arxiv:1805.12471",
"arxiv:1708.00055",
"arxiv:1704.05426",
"arxiv:1606.05250",
"arxiv:1808.09121",
"arxiv:1810.12885",
"arxiv:1905.10044",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1805.12471",
"1708.00055",
"1704.05426",
"1606.05250",
"1808.09121",
"1810.12885",
"1905.10044",
"1910.09700"
] | [
"en",
"fr",
"ro",
"de",
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Model Card for T5 Large
!model image
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training Details
5. Evaluation
6. Environmental Impact
7. Citation
8. Model Card Authors
9. How To Get Started With the Model
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) write:
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Large is the checkpoint with 770 million parameters.
- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo
- Model type: Language model
- Language(s) (NLP): English, French, Romanian, German
- License: Apache 2.0
- Related Models: All T5 Checkpoints
- Resources for more information:
- Research paper
- Google's T5 Blog Post
- GitHub Repo
- Hugging Face T5 Docs
# Uses
## Direct Use and Downstream Use
The developers write in a blog post that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the blog post and research paper for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.
The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
The following datasets were used for (1.) and (2.):
1. Datasets used for Unsupervised denoising objective:
- C4
- Wiki-DPR
2. Datasets used for Supervised text-to-text language modeling objective
- Sentence acceptability judgment
- CoLA Warstadt et al., 2018
- Sentiment analysis
- SST-2 Socher et al., 2013
- Paraphrasing/sentence similarity
- MRPC Dolan and Brockett, 2005
 - STS-B Cer et al., 2017
- QQP Iyer et al., 2017
- Natural language inference
- MNLI Williams et al., 2017
 - QNLI Rajpurkar et al., 2016
- RTE Dagan et al., 2005
 - CB De Marneffe et al., 2019
- Sentence completion
- COPA Roemmele et al., 2011
- Word sense disambiguation
- WIC Pilehvar and Camacho-Collados, 2018
- Question answering
- MultiRC Khashabi et al., 2018
- ReCoRD Zhang et al., 2018
- BoolQ Clark et al., 2019
## Training Procedure
In their abstract, the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks, see the research paper for full details.
## Results
For full results for T5-Large, see the research paper, Table 14.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: Google Cloud TPU Pods
- Hours used: More information needed
- Cloud Provider: GCP
- Compute Region: More information needed
- Carbon Emitted: More information needed
BibTeX:
APA:
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.
</details>
| [
"# Model Card for T5 Large\n\n!model image",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe developers of the Text-To-Text Transfer Transformer (T5) write: \n\n> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.\n\nT5-Large is the checkpoint with 770 million parameters. \n\n- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo\n- Model type: Language model\n- Language(s) (NLP): English, French, Romanian, German\n- License: Apache 2.0\n- Related Models: All T5 Checkpoints\n- Resources for more information:\n - Research paper\n - Google's T5 Blog Post \n - GitHub Repo\n - Hugging Face T5 Docs",
"# Uses",
"## Direct Use and Downstream Use\n\nThe developers write in a blog post that the model: \n\n> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.\n\nSee the blog post and research paper for further details.",
"## Out-of-Scope Use\n\nMore information needed.",
"# Bias, Risks, and Limitations\n\nMore information needed.",
"## Recommendations\n\nMore information needed.",
"# Training Details",
"## Training Data\n\nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n\nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\nThereby, the following datasets were being used for (1.) and (2.):\n\n1. Datasets used for Unsupervised denoising objective:\n\n- C4\n- Wiki-DPR\n\n\n2. Datasets used for Supervised text-to-text language modeling objective\n\n- Sentence acceptability judgment\n - CoLA Warstadt et al., 2018\n- Sentiment analysis \n - SST-2 Socher et al., 2013\n- Paraphrasing/sentence similarity\n - MRPC Dolan and Brockett, 2005\n - STS-B Ceret al., 2017\n - QQP Iyer et al., 2017\n- Natural language inference\n - MNLI Williams et al., 2017\n - QNLI Rajpurkar et al.,2016\n - RTE Dagan et al., 2005 \n - CB De Marneff et al., 2019\n- Sentence completion\n - COPA Roemmele et al., 2011\n- Word sense disambiguation\n - WIC Pilehvar and Camacho-Collados, 2018\n- Question answering\n - MultiRC Khashabi et al., 2018\n - ReCoRD Zhang et al., 2018\n - BoolQ Clark et al., 2019",
"## Training Procedure\n\nIn their abstract, the model developers write: \n\n> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. \n\nThe framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe developers evaluated the model on 24 tasks, see the research paper for full details.",
"## Results \n\nFor full results for T5-Large, see the research paper, Table 14.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Google Cloud TPU Pods\n- Hours used: More information needed\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed\n\nBibTeX:\n\n\n\nAPA:\n- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\nSee the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.\n</details>"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Model Card for T5 Large\n\n!model image",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe developers of the Text-To-Text Transfer Transformer (T5) write: \n\n> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.\n\nT5-Large is the checkpoint with 770 million parameters. \n\n- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo\n- Model type: Language model\n- Language(s) (NLP): English, French, Romanian, German\n- License: Apache 2.0\n- Related Models: All T5 Checkpoints\n- Resources for more information:\n - Research paper\n - Google's T5 Blog Post \n - GitHub Repo\n - Hugging Face T5 Docs",
"# Uses",
"## Direct Use and Downstream Use\n\nThe developers write in a blog post that the model: \n\n> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.\n\nSee the blog post and research paper for further details.",
"## Out-of-Scope Use\n\nMore information needed.",
"# Bias, Risks, and Limitations\n\nMore information needed.",
"## Recommendations\n\nMore information needed.",
"# Training Details",
"## Training Data\n\nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n\nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\nThereby, the following datasets were being used for (1.) and (2.):\n\n1. Datasets used for Unsupervised denoising objective:\n\n- C4\n- Wiki-DPR\n\n\n2. Datasets used for Supervised text-to-text language modeling objective\n\n- Sentence acceptability judgment\n - CoLA Warstadt et al., 2018\n- Sentiment analysis \n - SST-2 Socher et al., 2013\n- Paraphrasing/sentence similarity\n - MRPC Dolan and Brockett, 2005\n - STS-B Ceret al., 2017\n - QQP Iyer et al., 2017\n- Natural language inference\n - MNLI Williams et al., 2017\n - QNLI Rajpurkar et al.,2016\n - RTE Dagan et al., 2005 \n - CB De Marneff et al., 2019\n- Sentence completion\n - COPA Roemmele et al., 2011\n- Word sense disambiguation\n - WIC Pilehvar and Camacho-Collados, 2018\n- Question answering\n - MultiRC Khashabi et al., 2018\n - ReCoRD Zhang et al., 2018\n - BoolQ Clark et al., 2019",
"## Training Procedure\n\nIn their abstract, the model developers write: \n\n> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. \n\nThe framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe developers evaluated the model on 24 tasks, see the research paper for full details.",
"## Results \n\nFor full results for T5-Large, see the research paper, Table 14.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Google Cloud TPU Pods\n- Hours used: More information needed\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed\n\nBibTeX:\n\n\n\nAPA:\n- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\nSee the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.\n</details>"
] |
translation | transformers |
# Model Card for T5 Small
![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Small is the checkpoint with 60 million parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
- [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
- [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)
# Uses
## Direct Use and Downstream Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
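As a minimal sketch of this text-to-text usage with the small checkpoint (the task prefix and generation settings are illustrative, following the conventions described in the T5 paper and the Hugging Face docs):

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# The prefix selects the task; the model emits the answer as plain text.
input_ids = tokenizer(
    "translate English to French: The weather is nice today.", return_tensors="pt"
).input_ids

output_ids = model.generate(input_ids, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```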
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
The following datasets were used for (1.) and (2.):
1. **Datasets used for Unsupervised denoising objective**:
- [C4](https://huggingface.co/datasets/c4)
- [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)
2. **Datasets used for Supervised text-to-text language modeling objective**
- Sentence acceptability judgment
- CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
- Sentiment analysis
- SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- Paraphrasing/sentence similarity
- MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
  - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
- QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- Natural language inference
- MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
  - QNLI [Rajpurkar et al., 2016](https://arxiv.org/abs/1606.05250)
- RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
  - CB [De Marneffe et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
- Sentence completion
- COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
- Word sense disambiguation
- WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
- Question answering
- MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
- ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
- BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
## Training Procedure
In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks; see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
## Results
For full results for T5-small, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5Model
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5Model.from_pretrained("t5-small")
input_ids = tokenizer(
"Studies have been shown that owning a dog is good for you", return_tensors="pt"
).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
```
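
The official snippet above returns hidden states via `T5Model`. As a hedged illustration of the text-to-text usage described in this card (not part of the original example), the sketch below uses `T5ForConditionalGeneration` with one of the task prefixes used during pre-training; the prompt, prefix wording, and generation length are illustrative assumptions.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 frames every task as text-to-text; a task prefix such as
# "translate English to German: " selects the behavior learned during pre-training.
input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="pt"
).input_ids

# Greedy decoding; the 40-token limit is an arbitrary illustrative choice.
output_ids = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```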
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples.
</details>
| {"language": ["en", "fr", "ro", "de", "multilingual"], "license": "apache-2.0", "tags": ["summarization", "translation"], "datasets": ["c4"]} | google-t5/t5-small | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"onnx",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:c4",
"arxiv:1805.12471",
"arxiv:1708.00055",
"arxiv:1704.05426",
"arxiv:1606.05250",
"arxiv:1808.09121",
"arxiv:1810.12885",
"arxiv:1905.10044",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1805.12471",
"1708.00055",
"1704.05426",
"1606.05250",
"1808.09121",
"1810.12885",
"1905.10044",
"1910.09700"
] | [
"en",
"fr",
"ro",
"de",
"multilingual"
] | TAGS
#transformers #pytorch #tf #jax #rust #onnx #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Model Card for T5 Small
!model image
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training Details
5. Evaluation
6. Environmental Impact
7. Citation
8. Model Card Authors
9. How To Get Started With the Model
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) write:
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Small is the checkpoint with 60 million parameters.
- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo
- Model type: Language model
- Language(s) (NLP): English, French, Romanian, German
- License: Apache 2.0
- Related Models: All T5 Checkpoints
- Resources for more information:
- Research paper
- Google's T5 Blog Post
- GitHub Repo
- Hugging Face T5 Docs
# Uses
## Direct Use and Downstream Use
The developers write in a blog post that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the blog post and research paper for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.
The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
The following datasets were used for (1.) and (2.):
1. Datasets used for Unsupervised denoising objective:
- C4
- Wiki-DPR
2. Datasets used for Supervised text-to-text language modeling objective
- Sentence acceptability judgment
- CoLA Warstadt et al., 2018
- Sentiment analysis
- SST-2 Socher et al., 2013
- Paraphrasing/sentence similarity
- MRPC Dolan and Brockett, 2005
  - STS-B Cer et al., 2017
- QQP Iyer et al., 2017
- Natural language inference
- MNLI Williams et al., 2017
  - QNLI Rajpurkar et al., 2016
- RTE Dagan et al., 2005
  - CB De Marneffe et al., 2019
- Sentence completion
- COPA Roemmele et al., 2011
- Word sense disambiguation
- WIC Pilehvar and Camacho-Collados, 2018
- Question answering
- MultiRC Khashabi et al., 2018
- ReCoRD Zhang et al., 2018
- BoolQ Clark et al., 2019
## Training Procedure
In their abstract, the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks; see the research paper for full details.
## Results
For full results for T5-small, see the research paper, Table 14.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: Google Cloud TPU Pods
- Hours used: More information needed
- Cloud Provider: GCP
- Compute Region: More information needed
- Carbon Emitted: More information needed
BibTeX:
APA:
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.
</details>
| [
"# Model Card for T5 Small\n\n!model image",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe developers of the Text-To-Text Transfer Transformer (T5) write: \n\n> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.\n\nT5-Small is the checkpoint with 60 million parameters. \n\n- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo\n- Model type: Language model\n- Language(s) (NLP): English, French, Romanian, German\n- License: Apache 2.0\n- Related Models: All T5 Checkpoints\n- Resources for more information:\n - Research paper\n - Google's T5 Blog Post \n - GitHub Repo\n - Hugging Face T5 Docs",
"# Uses",
"## Direct Use and Downstream Use\n\nThe developers write in a blog post that the model: \n\n> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.\n\nSee the blog post and research paper for further details.",
"## Out-of-Scope Use\n\nMore information needed.",
"# Bias, Risks, and Limitations\n\nMore information needed.",
"## Recommendations\n\nMore information needed.",
"# Training Details",
"## Training Data\n\nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n\nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\nThereby, the following datasets were being used for (1.) and (2.):\n\n1. Datasets used for Unsupervised denoising objective:\n\n- C4\n- Wiki-DPR\n\n\n2. Datasets used for Supervised text-to-text language modeling objective\n\n- Sentence acceptability judgment\n - CoLA Warstadt et al., 2018\n- Sentiment analysis \n - SST-2 Socher et al., 2013\n- Paraphrasing/sentence similarity\n - MRPC Dolan and Brockett, 2005\n - STS-B Ceret al., 2017\n - QQP Iyer et al., 2017\n- Natural language inference\n - MNLI Williams et al., 2017\n - QNLI Rajpurkar et al.,2016\n - RTE Dagan et al., 2005 \n - CB De Marneff et al., 2019\n- Sentence completion\n - COPA Roemmele et al., 2011\n- Word sense disambiguation\n - WIC Pilehvar and Camacho-Collados, 2018\n- Question answering\n - MultiRC Khashabi et al., 2018\n - ReCoRD Zhang et al., 2018\n - BoolQ Clark et al., 2019",
"## Training Procedure\n\nIn their abstract, the model developers write: \n\n> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. \n\nThe framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe developers evaluated the model on 24 tasks, see the research paper for full details.",
"## Results \n\nFor full results for T5-small, see the research paper, Table 14.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Google Cloud TPU Pods\n- Hours used: More information needed\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed\n\nBibTeX:\n\n\n\nAPA:\n- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\nSee the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.\n</details>"
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #onnx #safetensors #t5 #text2text-generation #summarization #translation #en #fr #ro #de #multilingual #dataset-c4 #arxiv-1805.12471 #arxiv-1708.00055 #arxiv-1704.05426 #arxiv-1606.05250 #arxiv-1808.09121 #arxiv-1810.12885 #arxiv-1905.10044 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Model Card for T5 Small\n\n!model image",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training Details\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe developers of the Text-To-Text Transfer Transformer (T5) write: \n\n> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.\n\nT5-Small is the checkpoint with 60 million parameters. \n\n- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See associated paper and GitHub repo\n- Model type: Language model\n- Language(s) (NLP): English, French, Romanian, German\n- License: Apache 2.0\n- Related Models: All T5 Checkpoints\n- Resources for more information:\n - Research paper\n - Google's T5 Blog Post \n - GitHub Repo\n - Hugging Face T5 Docs",
"# Uses",
"## Direct Use and Downstream Use\n\nThe developers write in a blog post that the model: \n\n> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.\n\nSee the blog post and research paper for further details.",
"## Out-of-Scope Use\n\nMore information needed.",
"# Bias, Risks, and Limitations\n\nMore information needed.",
"## Recommendations\n\nMore information needed.",
"# Training Details",
"## Training Data\n\nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n\nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\nThereby, the following datasets were being used for (1.) and (2.):\n\n1. Datasets used for Unsupervised denoising objective:\n\n- C4\n- Wiki-DPR\n\n\n2. Datasets used for Supervised text-to-text language modeling objective\n\n- Sentence acceptability judgment\n - CoLA Warstadt et al., 2018\n- Sentiment analysis \n - SST-2 Socher et al., 2013\n- Paraphrasing/sentence similarity\n - MRPC Dolan and Brockett, 2005\n - STS-B Ceret al., 2017\n - QQP Iyer et al., 2017\n- Natural language inference\n - MNLI Williams et al., 2017\n - QNLI Rajpurkar et al.,2016\n - RTE Dagan et al., 2005 \n - CB De Marneff et al., 2019\n- Sentence completion\n - COPA Roemmele et al., 2011\n- Word sense disambiguation\n - WIC Pilehvar and Camacho-Collados, 2018\n- Question answering\n - MultiRC Khashabi et al., 2018\n - ReCoRD Zhang et al., 2018\n - BoolQ Clark et al., 2019",
"## Training Procedure\n\nIn their abstract, the model developers write: \n\n> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. \n\nThe framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe developers evaluated the model on 24 tasks, see the research paper for full details.",
"## Results \n\nFor full results for T5-small, see the research paper, Table 14.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: Google Cloud TPU Pods\n- Hours used: More information needed\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed\n\nBibTeX:\n\n\n\nAPA:\n- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\nSee the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.\n</details>"
] |
text-generation | transformers |
# Transfo-xl-wt103
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:**
The Transformer-XL model is a causal (uni-directional) transformer with relative positioning (sinusoïdal) embeddings which can reuse previously computed hidden-states to attend to longer context (memory). This model also uses adaptive softmax inputs and outputs (tied).
- **Developed by:** [Zihang Dai]([email protected]), [Zhilin Yang]([email protected]), [Yiming Yang]([email protected]), [Jaime Carbonell]([email protected]), [Quoc V. Le]([email protected]), [Ruslan Salakhutdinov]([email protected])
- **Shared by:** HuggingFace team
- **Model Type:** Text Generation
- **Language(s):** English
- **License:** [More information needed]
- **Resources for more information:**
- [Research Paper](https://arxiv.org/pdf/1901.02860.pdf)
- [GitHub Repo](https://github.com/kimiyoung/transformer-xl)
- [HuggingFace Documentation](https://huggingface.co/docs/transformers/model_doc/transfo-xl#transformers.TransfoXLModel)
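
The model description above notes that Transformer-XL can reuse previously computed hidden states as memory. As a hedged illustration (not part of the original card), the sketch below runs two consecutive segments through the model and passes the cached states (`mems`) of the first segment into the second; the example sentences are arbitrary.

```python
from transformers import TransfoXLTokenizer, TransfoXLModel

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLModel.from_pretrained("transfo-xl-wt103")

# Segment 1: the forward pass returns `mems`, the cached hidden states.
seg1 = tokenizer("The quick brown fox jumps over the lazy dog", return_tensors="pt")
out1 = model(seg1.input_ids)

# Segment 2: passing `mems` lets the model attend to the previous segment
# without recomputing it, which is how Transformer-XL extends its context.
seg2 = tokenizer("and then disappears into the forest", return_tensors="pt")
out2 = model(seg2.input_ids, mems=out1.mems)
```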
## Uses
#### Direct Use
This model can be used for text generation.
The authors note possible applications in the [associated paper](https://arxiv.org/pdf/1901.02860.pdf):
> We envision interesting applications of Transformer-XL in the fields of text generation, unsupervised feature learning, image and speech modeling.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to produce factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Training
#### Training Data
The authors describe the data and their text generation procedure in the [associated paper](https://arxiv.org/pdf/1901.02860.pdf):
> best model trained the Wikitext-103 dataset. We seed the our Transformer-XL with a context of at most 512 consecutive tokens randomly sampled from the test set of Wikitext-103. Then, we run Transformer-XL to generate a pre-defined number of tokens (500 or 1,000 in our case). For each generation step, we first find the top-40 probabilities of the next-step distribution and sample from top-40 tokens based on the re-normalized distribution. To help reading, we detokenize the context, the generated text and the reference text.
The authors use the following pretraining corpora for the model, described in the [associated paper](https://arxiv.org/pdf/1901.02860.pdf):
- WikiText-103 (Merity et al., 2016),
#### Training Procedure
##### Preprocessing
The authors provide additional notes about the training procedure in the [associated paper](https://arxiv.org/pdf/1901.02860.pdf):
> Similar to but different from enwik8, text8 contains 100M processed Wikipedia characters created by lowering case the text and removing any character other than the 26 letters a through z, and space. Due to the similarity, we simply adapt the best model and the same hyper-parameters on enwik8 to text8 without further tuning.
## Evaluation
#### Results
| Method | enwik8 | text8 | One Billion Word | WT-103 | PTB (w/o finetuning) |
|:--------------------:|:------:|:-----:|:----------------:|:------:|:--------------------:|
| Transformer-XL | 0.99 | 1.08 | 21.8 | 18.3 | 54.5 |
## Citation Information
```bibtex
@misc{https://doi.org/10.48550/arxiv.1901.02860,
doi = {10.48550/ARXIV.1901.02860},
url = {https://arxiv.org/abs/1901.02860},
author = {Dai, Zihang and Yang, Zhilin and Yang, Yiming and Carbonell, Jaime and Le, Quoc V. and Salakhutdinov, Ruslan},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context},
publisher = {arXiv},
year = {2019},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
## How to Get Started With the Model
```python
from transformers import TransfoXLTokenizer, TransfoXLModel
import torch
tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLModel.from_pretrained("transfo-xl-wt103")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
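
For the text generation use case described in this card, a hedged sketch with the language-modeling head is shown below (it is not part of the original card). `TransfoXLLMHeadModel` comes from the same library; the prompt is arbitrary, the top-40 sampling loosely mirrors the generation procedure quoted in the Training section, and the length limit and seed are illustrative choices.

```python
import torch
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")

# Arbitrary prompt; the paper seeds generation with WikiText-103 context instead.
input_ids = tokenizer("Wikipedia is a free online encyclopedia that", return_tensors="pt").input_ids

torch.manual_seed(0)  # for reproducible sampling
generated = model.generate(input_ids, do_sample=True, top_k=40, max_new_tokens=40)
print(tokenizer.decode(generated[0]))
```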
| {"language": "en", "tags": ["text-generation"], "datasets": ["wikitext-103"], "task": {"name": "Text Generation", "type": "text-generation"}, "model-index": [{"name": "transfo-xl-wt103", "results": []}]} | transfo-xl/transfo-xl-wt103 | null | [
"transformers",
"pytorch",
"tf",
"transfo-xl",
"text-generation",
"en",
"dataset:wikitext-103",
"arxiv:1901.02860",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1901.02860"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #transfo-xl #text-generation #en #dataset-wikitext-103 #arxiv-1901.02860 #autotrain_compatible #endpoints_compatible #region-us
| Transfo-xl-wt103
================
Table of Contents
-----------------
* Model Details
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Citation Information
* How to Get Started With the Model
Model Details
-------------
Model Description:
The Transformer-XL model is a causal (uni-directional) transformer with relative positioning (sinusoïdal) embeddings which can reuse previously computed hidden-states to attend to longer context (memory). This model also uses adaptive softmax inputs and outputs (tied).
* Developed by: Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov
* Shared by: HuggingFace team
* Model Type: Text Generation
* Language(s): English
* License: [More information needed]
* Resources for more information:
+ Research Paper
+ GitHub Repo
+ HuggingFace Documentation
Uses
----
#### Direct Use
This model can be used for text generation.
The authors note possible applications in the associated paper:
>
> We envision interesting applications of Transformer-XL in the fields of text generation, unsupervised feature learning, image and speech modeling.
>
>
>
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to produce factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
Training
--------
#### Training Data
The authors describe the data and their text generation procedure in the associated paper:
>
> best model trained the Wikitext-103 dataset. We seed the our Transformer-XL with a context of at most 512 consecutive tokens randomly sampled from the test set of Wikitext-103. Then, we run Transformer-XL to generate a pre-defined number of tokens (500 or 1,000 in our case). For each generation step, we first find the top-40 probabilities of the next-step distribution and sample from top-40 tokens based on the re-normalized distribution. To help reading, we detokenize the context, the generated text and the reference text.
>
>
>
The authors use the following pretraining corpora for the model, described in the associated paper:
* WikiText-103 (Merity et al., 2016),
#### Training Procedure
##### Preprocessing
The authors provide additional notes about the training procedure in the associated paper:
>
> Similar to but different from enwik8, text8 contains 100M processed Wikipedia characters created by lowering case the text and removing any character other than the 26 letters a through z, and space. Due to the similarity, we simply adapt the best model and the same hyper-parameters on enwik8 to text8 without further tuning.
>
>
>
Evaluation
----------
#### Results
How to Get Started With the Model
---------------------------------
| [
"#### Direct Use\n\n\nThis model can be used for text generation.\nThe authors provide additionally notes about the vocabulary used, in the associated paper:\n\n\n\n> \n> We envision interesting applications of Transformer-XL in the fields of text generation, unsupervised feature learning, image and speech modeling.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe authors provide additionally notes about the vocabulary used, in the associated paper:\n\n\n\n> \n> best model trained the Wikitext-103 dataset. We seed the our Transformer-XL with a context of at most 512 consecutive tokens randomly sampled from the test set of Wikitext-103. Then, we run Transformer-XL to generate a pre-defined number of tokens (500 or 1,000 in our case). For each generation step, we first find the top-40 probabilities of the next-step distribution and sample from top-40 tokens based on the re-normalized distribution. To help reading, we detokenize the context, the generated text and the reference text.\n> \n> \n> \n\n\nThe authors use the following pretraining corpora for the model, described in the associated paper:\n\n\n* WikiText-103 (Merity et al., 2016),",
"#### Training Procedure",
"##### Preprocessing\n\n\nThe authors provide additionally notes about the training procedure used, in the associated paper:\n\n\n\n> \n> Similar to but different from enwik8, text8 con- tains 100M processed Wikipedia characters cre- ated by lowering case the text and removing any character other than the 26 letters a through z, and space. Due to the similarity, we simply adapt the best model and the same hyper-parameters on en- wik8 to text8 without further tuning.\n> \n> \n> \n\n\nEvaluation\n----------",
"#### Results\n\n\n\nHow to Get Started With the Model\n---------------------------------"
] | [
"TAGS\n#transformers #pytorch #tf #transfo-xl #text-generation #en #dataset-wikitext-103 #arxiv-1901.02860 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Direct Use\n\n\nThis model can be used for text generation.\nThe authors provide additionally notes about the vocabulary used, in the associated paper:\n\n\n\n> \n> We envision interesting applications of Transformer-XL in the fields of text generation, unsupervised feature learning, image and speech modeling.\n> \n> \n>",
"#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe authors provide additionally notes about the vocabulary used, in the associated paper:\n\n\n\n> \n> best model trained the Wikitext-103 dataset. We seed the our Transformer-XL with a context of at most 512 consecutive tokens randomly sampled from the test set of Wikitext-103. Then, we run Transformer-XL to generate a pre-defined number of tokens (500 or 1,000 in our case). For each generation step, we first find the top-40 probabilities of the next-step distribution and sample from top-40 tokens based on the re-normalized distribution. To help reading, we detokenize the context, the generated text and the reference text.\n> \n> \n> \n\n\nThe authors use the following pretraining corpora for the model, described in the associated paper:\n\n\n* WikiText-103 (Merity et al., 2016),",
"#### Training Procedure",
"##### Preprocessing\n\n\nThe authors provide additionally notes about the training procedure used, in the associated paper:\n\n\n\n> \n> Similar to but different from enwik8, text8 con- tains 100M processed Wikipedia characters cre- ated by lowering case the text and removing any character other than the 26 letters a through z, and space. Due to the similarity, we simply adapt the best model and the same hyper-parameters on en- wik8 to text8 without further tuning.\n> \n> \n> \n\n\nEvaluation\n----------",
"#### Results\n\n\n\nHow to Get Started With the Model\n---------------------------------"
] |
fill-mask | transformers |
# xlm-clm-ende-1024
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-clm-ende-1024 is a transformer pretrained using a causal language modeling (CLM) objective (next token prediction) for English-German.
## Model Description
- **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291)
- **Model type:** Language model
- **Language(s) (NLP):** English-German
- **License:** Unknown
- **Related Models:** [xlm-clm-enfr-1024](https://huggingface.co/xlm-clm-enfr-1024), [xlm-mlm-ende-1024](https://huggingface.co/xlm-mlm-ende-1024), [xlm-mlm-enfr-1024](https://huggingface.co/xlm-mlm-enfr-1024), [xlm-mlm-enro-1024](https://huggingface.co/xlm-mlm-enro-1024)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for causal language modeling.
## Downstream Use
To learn more about this task and potential downstream uses, see the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for details on the training data and training procedure.
# Evaluation
## Testing Data, Factors & Metrics
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for details on the testing data, factors and metrics.
## Results
For xlm-clm-ende-1024 results, see Table 2 of the [associated paper](https://arxiv.org/pdf/1901.07291.pdf).
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel
tokenizer = XLMTokenizer.from_pretrained("xlm-clm-ende-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-clm-ende-1024")
input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1
language_id = tokenizer.lang2id["en"] # 0
langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0])
# We reshape it to be of size (batch_size, sequence_length)
langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1)
outputs = model(input_ids, langs=langs)
```
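
As a hedged follow-up to the snippet above (not part of the original card), the prediction scores returned by the language-modeling head can be turned into a next-token guess; the indexing below assumes no labels were passed, so the first output element is the score tensor.

```python
# Prediction scores over the vocabulary, shape (batch_size, sequence_length, vocab_size).
logits = outputs[0]

# The highest-scoring token at the last position is the model's next-token prediction.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token_id]))
```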
</details> | {"language": ["multilingual", "en", "de"]} | FacebookAI/xlm-clm-ende-1024 | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm",
"fill-mask",
"multilingual",
"en",
"de",
"arxiv:1901.07291",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1901.07291",
"1910.09700"
] | [
"multilingual",
"en",
"de"
] | TAGS
#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #de #arxiv-1901.07291 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# xlm-clm-ende-1024
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
# Model Details
The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-clm-ende-1024 is a transformer pretrained using a causal language modeling (CLM) objective (next token prediction) for English-German.
## Model Description
- Developed by: Guillaume Lample, Alexis Conneau, see associated paper
- Model type: Language model
- Language(s) (NLP): English-German
- License: Unknown
- Related Models: xlm-clm-enfr-1024, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024
- Resources for more information:
- Associated paper
- GitHub Repo
- Hugging Face Multilingual Models for Inference docs
# Uses
## Direct Use
The model is a language model. The model can be used for causal language modeling.
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face Multilingual Models for Inference docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the associated paper for details on the training data and training procedure.
# Evaluation
## Testing Data, Factors & Metrics
See the associated paper for details on the testing data, factors and metrics.
## Results
For xlm-clm-ende-1024 results, see Table 2 of the associated paper.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: More information needed
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the associated paper for further details.
BibTeX:
APA:
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
</details> | [
"# xlm-clm-ende-1024",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details\n\nThe XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-clm-ende-1024 is a transformer pretrained using a causal language modeling (CLM) objective (next token prediction) for English-German.",
"## Model Description\n\n- Developed by: Guillaume Lample, Alexis Conneau, see associated paper\n- Model type: Language model\n- Language(s) (NLP): English-German\n- License: Unknown\n- Related Models: xlm-clm-enfr-1024, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024\n- Resources for more information: \n - Associated paper\n - GitHub Repo\n - Hugging Face Multilingual Models for Inference docs",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for causal language modeling.",
"## Downstream Use\n\nTo learn more about this task and potential downstream uses, see the Hugging Face Multilingual Models for Inference docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nSee the associated paper for details on the training data and training procedure.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nSee the associated paper for details on the testing data, factors and metrics.",
"## Results \n\nFor xlm-clm-ende-1024 results, see Table 2 of the associated paper.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nThe model developers write: \n\n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"# Model Card Authors \n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #de #arxiv-1901.07291 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# xlm-clm-ende-1024",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details\n\nThe XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-clm-ende-1024 is a transformer pretrained using a causal language modeling (CLM) objective (next token prediction) for English-German.",
"## Model Description\n\n- Developed by: Guillaume Lample, Alexis Conneau, see associated paper\n- Model type: Language model\n- Language(s) (NLP): English-German\n- License: Unknown\n- Related Models: xlm-clm-enfr-1024, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024\n- Resources for more information: \n - Associated paper\n - GitHub Repo\n - Hugging Face Multilingual Models for Inference docs",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for causal language modeling.",
"## Downstream Use\n\nTo learn more about this task and potential downstream uses, see the Hugging Face Multilingual Models for Inference docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nSee the associated paper for details on the training data and training procedure.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nSee the associated paper for details on the testing data, factors and metrics.",
"## Results \n\nFor xlm-clm-ende-1024 results, see Table 2 of the associated paper.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nThe model developers write: \n\n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"# Model Card Authors \n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] |
fill-mask | transformers |
# xlm-clm-enfr-1024
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-clm-enfr-1024 is a transformer pretrained using a causal language modeling (CLM) objective (next token prediction) for English-French.
## Model Description
- **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291)
- **Model type:** Language model
- **Language(s) (NLP):** English-French
- **License:** Unknown
- **Related Models:** [xlm-clm-ende-1024](https://huggingface.co/xlm-clm-ende-1024), [xlm-mlm-ende-1024](https://huggingface.co/xlm-mlm-ende-1024), [xlm-mlm-enfr-1024](https://huggingface.co/xlm-mlm-enfr-1024), [xlm-mlm-enro-1024](https://huggingface.co/xlm-mlm-enro-1024)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for causal language modeling (next token prediction).
## Downstream Use
To learn more about this task and potential downstream uses, see the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for details on the training data and training procedure.
# Evaluation
## Testing Data, Factors & Metrics
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for details on the testing data, factors and metrics.
## Results
For xlm-clm-enfr-1024 results, see Table 2 of the [associated paper](https://arxiv.org/pdf/1901.07291.pdf).
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel
tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")
input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1
language_id = tokenizer.lang2id["en"] # 0
langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0])
# We reshape it to be of size (batch_size, sequence_length)
langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1)
outputs = model(input_ids, langs=langs)
```
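
As a hedged extension of the snippet above (not part of the original card), the same forward pass can be run with French language embeddings. The `"fr"` entry in `tokenizer.lang2id` is assumed to exist for this English-French checkpoint; check the mapping at runtime before relying on it.

```python
# French inputs use the French language embedding; lang2id maps language codes
# to the ids used for this checkpoint's language embeddings.
fr_input_ids = torch.tensor([tokenizer.encode("Wikipédia a été utilisée pour")])
fr_language_id = tokenizer.lang2id["fr"]
fr_langs = torch.full_like(fr_input_ids, fr_language_id)  # one language id per token

fr_outputs = model(fr_input_ids, langs=fr_langs)
```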
</details> | {"language": ["multilingual", "en", "fr"]} | FacebookAI/xlm-clm-enfr-1024 | null | [
"transformers",
"pytorch",
"tf",
"xlm",
"fill-mask",
"multilingual",
"en",
"fr",
"arxiv:1901.07291",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1901.07291",
"1910.09700"
] | [
"multilingual",
"en",
"fr"
] | TAGS
#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #arxiv-1901.07291 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# xlm-clm-enfr-1024
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
# Model Details
The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-clm-enfr-1024 is a transformer pretrained using a causal language modeling (CLM) objective (next token prediction) for English-French.
## Model Description
- Developed by: Guillaume Lample, Alexis Conneau, see associated paper
- Model type: Language model
- Language(s) (NLP): English-French
- License: Unknown
- Related Models: xlm-clm-ende-1024, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024
- Resources for more information:
- Associated paper
- GitHub Repo
- Hugging Face Multilingual Models for Inference docs
# Uses
## Direct Use
The model is a language model. The model can be used for causal language modeling (next token prediction).
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face Multilingual Models for Inference docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the associated paper for details on the training data and training procedure.
# Evaluation
## Testing Data, Factors & Metrics
See the associated paper for details on the testing data, factors and metrics.
## Results
For xlm-clm-enfr-1024 results, see Table 2 of the associated paper.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: More information needed
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the associated paper for further details.
BibTeX:
APA:
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
</details> | [
"# xlm-clm-enfr-1024",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details\n\nThe XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-clm-enfr-1024 is a transformer pretrained using a causal language modeling (CLM) objective (next token prediction) for English-French.",
"## Model Description\n\n- Developed by: Guillaume Lample, Alexis Conneau, see associated paper\n- Model type: Language model\n- Language(s) (NLP): English-French\n- License: Unknown\n- Related Models: xlm-clm-ende-1024, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024\n- Resources for more information: \n - Associated paper\n - GitHub Repo\n - Hugging Face Multilingual Models for Inference docs",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for causal language modeling (next token prediction).",
"## Downstream Use\n\nTo learn more about this task and potential downstream uses, see the Hugging Face Multilingual Models for Inference docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nSee the associated paper for details on the training data and training procedure.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nSee the associated paper for details on the testing data, factors and metrics.",
"## Results \n\nFor xlm-clm-enfr-1024 results, see Table 2 of the associated paper.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nThe model developers write: \n\n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"# Model Card Authors \n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] | [
"TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #arxiv-1901.07291 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# xlm-clm-enfr-1024",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details\n\nThe XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-clm-enfr-1024 is a transformer pretrained using a causal language modeling (CLM) objective (next token prediction) for English-French.",
"## Model Description\n\n- Developed by: Guillaume Lample, Alexis Conneau, see associated paper\n- Model type: Language model\n- Language(s) (NLP): English-French\n- License: Unknown\n- Related Models: xlm-clm-ende-1024, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024\n- Resources for more information: \n - Associated paper\n - GitHub Repo\n - Hugging Face Multilingual Models for Inference docs",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for causal language modeling (next token prediction).",
"## Downstream Use\n\nTo learn more about this task and potential downstream uses, see the Hugging Face Multilingual Models for Inference docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nSee the associated paper for details on the training data and training procedure.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nSee the associated paper for details on the testing data, factors and metrics.",
"## Results \n\nFor xlm-clm-enfr-1024 results, see Table 2 of the associated paper.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nThe model developers write: \n\n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"# Model Card Authors \n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] |
fill-mask | transformers |
# xlm-mlm-100-1280
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
xlm-mlm-100-1280 is the XLM model, which was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau, trained on Wikipedia text in 100 languages. The model is a transformer pretrained using a masked language modeling (MLM) objective.
## Model Description
- **Developed by:** See [associated paper](https://arxiv.org/abs/1901.07291) and [GitHub Repo](https://github.com/facebookresearch/XLM)
- **Model type:** Language model
- **Language(s) (NLP):** 100 languages, see [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for full list.
- **License:** CC-BY-NC-4.0
- **Related Models:** [xlm-mlm-17-1280](https://huggingface.co/xlm-mlm-17-1280)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
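As an illustrative sketch (not part of the original model card), mask filling with this checkpoint could look like the following; the example sentence and `top_k` value are assumptions chosen for the example.

```python
from transformers import pipeline

# Hypothetical usage sketch: fill a masked token with xlm-mlm-100-1280
fill_mask = pipeline("fill-mask", model="xlm-mlm-100-1280")

masked_sentence = f"Paris is the capital of {fill_mask.tokenizer.mask_token}."
for prediction in fill_mask(masked_sentence, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```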
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs. Also see the [associated paper](https://arxiv.org/abs/1901.07291).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
This model is the XLM model trained on Wikipedia text in 100 languages. The preprocessing included tokenization with byte-pair-encoding. See the [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) and the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details on the training data and training procedure.
[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
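As a quick sanity check against these reported numbers, the checkpoint's configuration can be inspected; the sketch below assumes the standard `transformers` `XLMConfig` attribute names (`n_layers`, `emb_dim`, `n_heads`, `vocab_size`).

```python
from transformers import AutoConfig

# Load the configuration shipped with the checkpoint and print the
# architecture hyperparameters discussed above.
config = AutoConfig.from_pretrained("xlm-mlm-100-1280")
print(config.n_layers)    # number of transformer layers
print(config.emb_dim)     # hidden/embedding dimension
print(config.n_heads)     # attention heads
print(config.vocab_size)  # vocabulary size
```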
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the XNLI cross-lingual classification task (see the [XNLI data card](https://huggingface.co/datasets/xnli) for more details on XNLI) using the metric of test accuracy. See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-100-1280, the test accuracies on the XNLI cross-lingual classification task in English (en), Spanish (es), German (de), Arabic (ar), Chinese (zh) and Urdu (ur) are:
|Language| en | es | de | ar | zh | ur |
|:------:|:--:|:---:|:--:|:--:|:--:|:--:|
|Accuracy|83.7|76.6 |73.6|67.4|71.7|62.9|
See the [GitHub repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
More information needed. See the [ipython notebook](https://github.com/facebookresearch/XLM/blob/main/generate-embeddings.ipynb) in the associated [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for examples. | {"language": ["multilingual", "en", "es", "fr", "de", "zh", "ru", "pt", "it", "ar", "ja", "id", "tr", "nl", "pl", "fa", "vi", "sv", "ko", "he", "ro", false, "hi", "uk", "cs", "fi", "hu", "th", "da", "ca", "el", "bg", "sr", "ms", "bn", "hr", "sl", "az", "sk", "eo", "ta", "sh", "lt", "et", "ml", "la", "bs", "sq", "arz", "af", "ka", "mr", "eu", "tl", "ang", "gl", "nn", "ur", "kk", "be", "hy", "te", "lv", "mk", "als", "is", "wuu", "my", "sco", "mn", "ceb", "ast", "cy", "kn", "br", "an", "gu", "bar", "uz", "lb", "ne", "si", "war", "jv", "ga", "oc", "ku", "sw", "nds", "ckb", "ia", "yi", "fy", "scn", "gan", "tt", "am"], "license": "cc-by-nc-4.0"} | FacebookAI/xlm-mlm-100-1280 | null | [
"transformers",
"pytorch",
"tf",
"xlm",
"fill-mask",
"multilingual",
"en",
"es",
"fr",
"de",
"zh",
"ru",
"pt",
"it",
"ar",
"ja",
"id",
"tr",
"nl",
"pl",
"fa",
"vi",
"sv",
"ko",
"he",
"ro",
"no",
"hi",
"uk",
"cs",
"fi",
"hu",
"th",
"da",
"ca",
"el",
"bg",
"sr",
"ms",
"bn",
"hr",
"sl",
"az",
"sk",
"eo",
"ta",
"sh",
"lt",
"et",
"ml",
"la",
"bs",
"sq",
"arz",
"af",
"ka",
"mr",
"eu",
"tl",
"ang",
"gl",
"nn",
"ur",
"kk",
"be",
"hy",
"te",
"lv",
"mk",
"als",
"is",
"wuu",
"my",
"sco",
"mn",
"ceb",
"ast",
"cy",
"kn",
"br",
"an",
"gu",
"bar",
"uz",
"lb",
"ne",
"si",
"war",
"jv",
"ga",
"oc",
"ku",
"sw",
"nds",
"ckb",
"ia",
"yi",
"fy",
"scn",
"gan",
"tt",
"am",
"arxiv:1901.07291",
"arxiv:1911.02116",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1901.07291",
"1911.02116",
"1910.09700"
] | [
"multilingual",
"en",
"es",
"fr",
"de",
"zh",
"ru",
"pt",
"it",
"ar",
"ja",
"id",
"tr",
"nl",
"pl",
"fa",
"vi",
"sv",
"ko",
"he",
"ro",
"no",
"hi",
"uk",
"cs",
"fi",
"hu",
"th",
"da",
"ca",
"el",
"bg",
"sr",
"ms",
"bn",
"hr",
"sl",
"az",
"sk",
"eo",
"ta",
"sh",
"lt",
"et",
"ml",
"la",
"bs",
"sq",
"arz",
"af",
"ka",
"mr",
"eu",
"tl",
"ang",
"gl",
"nn",
"ur",
"kk",
"be",
"hy",
"te",
"lv",
"mk",
"als",
"is",
"wuu",
"my",
"sco",
"mn",
"ceb",
"ast",
"cy",
"kn",
"br",
"an",
"gu",
"bar",
"uz",
"lb",
"ne",
"si",
"war",
"jv",
"ga",
"oc",
"ku",
"sw",
"nds",
"ckb",
"ia",
"yi",
"fy",
"scn",
"gan",
"tt",
"am"
] | TAGS
#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #es #fr #de #zh #ru #pt #it #ar #ja #id #tr #nl #pl #fa #vi #sv #ko #he #ro #no #hi #uk #cs #fi #hu #th #da #ca #el #bg #sr #ms #bn #hr #sl #az #sk #eo #ta #sh #lt #et #ml #la #bs #sq #arz #af #ka #mr #eu #tl #ang #gl #nn #ur #kk #be #hy #te #lv #mk #als #is #wuu #my #sco #mn #ceb #ast #cy #kn #br #an #gu #bar #uz #lb #ne #si #war #jv #ga #oc #ku #sw #nds #ckb #ia #yi #fy #scn #gan #tt #am #arxiv-1901.07291 #arxiv-1911.02116 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
| xlm-mlm-100-1280
================
Table of Contents
=================
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
Model Details
=============
xlm-mlm-100-1280 is the XLM model, which was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau, trained on Wikipedia text in 100 languages. The model is a transformer pretrained using a masked language modeling (MLM) objective.
Model Description
-----------------
* Developed by: See associated paper and GitHub Repo
* Model type: Language model
* Language(s) (NLP): 100 languages, see GitHub Repo for full list.
* License: CC-BY-NC-4.0
* Related Models: xlm-mlm-17-1280
* Resources for more information:
+ Associated paper
+ GitHub Repo
+ Hugging Face Multilingual Models for Inference docs
Uses
====
Direct Use
----------
The model is a language model. The model can be used for masked language modeling.
Downstream Use
--------------
To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. Also see the associated paper.
Out-of-Scope Use
----------------
The model should not be used to intentionally create hostile or alienating environments for people.
Bias, Risks, and Limitations
============================
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
Recommendations
---------------
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training
========
This model is the XLM model trained on Wikipedia text in 100 languages. The preprocessing included tokenization with byte-pair-encoding. See the GitHub repo and the associated paper for further details on the training data and training procedure.
Conneau et al. (2020) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
Evaluation
==========
Testing Data, Factors & Metrics
-------------------------------
The model developers evaluated the model on the XNLI cross-lingual classification task (see the XNLI data card for more details on XNLI) using the metric of test accuracy. See the GitHub Repo for further details on the testing data, factors and metrics.
Results
-------
For xlm-mlm-100-1280, the test accuracy on the XNLI cross-lingual classification task in English (en), Spanish (es), German (de), Arabic (ar), Chinese (zh) and Urdu (ur) are:
See the GitHub repo for further details.
Environmental Impact
====================
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: More information needed
* Hours used: More information needed
* Cloud Provider: More information needed
* Compute Region: More information needed
* Carbon Emitted: More information needed
Technical Specifications
========================
Conneau et al. (2020) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
BibTeX:
APA:
* Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
Model Card Authors
==================
This model card was written by the team at Hugging Face.
How to Get Started with the Model
=================================
More information needed. See the ipython notebook in the associated GitHub repo for examples.
| [] | [
"TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #es #fr #de #zh #ru #pt #it #ar #ja #id #tr #nl #pl #fa #vi #sv #ko #he #ro #no #hi #uk #cs #fi #hu #th #da #ca #el #bg #sr #ms #bn #hr #sl #az #sk #eo #ta #sh #lt #et #ml #la #bs #sq #arz #af #ka #mr #eu #tl #ang #gl #nn #ur #kk #be #hy #te #lv #mk #als #is #wuu #my #sco #mn #ceb #ast #cy #kn #br #an #gu #bar #uz #lb #ne #si #war #jv #ga #oc #ku #sw #nds #ckb #ia #yi #fy #scn #gan #tt #am #arxiv-1901.07291 #arxiv-1911.02116 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers |
# xlm-mlm-17-1280
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
xlm-mlm-17-1280 is the XLM model, which was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau, trained on text in 17 languages. The model is a transformer pretrained using a masked language modeling (MLM) objective.
## Model Description
- **Developed by:** See [associated paper](https://arxiv.org/abs/1901.07291) and [GitHub Repo](https://github.com/facebookresearch/XLM)
- **Model type:** Language model
- **Language(s) (NLP):** 17 languages, see [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for full list.
- **License:** CC-BY-NC-4.0
- **Related Models:** [xlm-mlm-100-1280](https://huggingface.co/xlm-mlm-100-1280)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
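For illustration only (not from the original card), masked-token prediction with the LM head could be sketched as follows; the example sentence and the top-5 cutoff are assumptions.

```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

# Hypothetical sketch: score candidate tokens for a masked position
tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-17-1280")
model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-17-1280")

text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and print the five highest-scoring tokens
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_index].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```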
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs. Also see the [associated paper](https://arxiv.org/abs/1901.07291).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
This model is the XLM model trained on text in 17 languages. The preprocessing included tokenization and byte-pair-encoding. See the [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) and the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details on the training data and training procedure.
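As an aside (an illustrative sketch, not from the card), the byte-pair-encoded segmentation applied by the released tokenizer can be inspected directly; the example sentence is arbitrary.

```python
from transformers import XLMTokenizer

# Show the BPE segmentation produced by the released tokenizer
tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-17-1280")
print(tokenizer.tokenize("Transformers are remarkably effective."))
```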
[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the XNLI cross-lingual classification task (see the [XNLI data card](https://huggingface.co/datasets/xnli) for more details on XNLI) using the metric of test accuracy. See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-17-1280, the test accuracies on the XNLI cross-lingual classification task in English (en), Spanish (es), German (de), Arabic (ar), and Chinese (zh) are:
|Language| en | es | de | ar | zh |
|:------:|:--:|:---:|:--:|:--:|:--:|
|Accuracy|84.8|79.4 |76.2|71.5|75 |
See the [GitHub repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
More information needed. See the [ipython notebook](https://github.com/facebookresearch/XLM/blob/main/generate-embeddings.ipynb) in the associated [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for examples. | {"language": ["multilingual", "en", "fr", "es", "de", "it", "pt", "nl", "sv", "pl", "ru", "ar", "tr", "zh", "ja", "ko", "hi", "vi"], "license": "cc-by-nc-4.0"} | FacebookAI/xlm-mlm-17-1280 | null | [
"transformers",
"pytorch",
"tf",
"xlm",
"fill-mask",
"multilingual",
"en",
"fr",
"es",
"de",
"it",
"pt",
"nl",
"sv",
"pl",
"ru",
"ar",
"tr",
"zh",
"ja",
"ko",
"hi",
"vi",
"arxiv:1901.07291",
"arxiv:1911.02116",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1901.07291",
"1911.02116",
"1910.09700"
] | [
"multilingual",
"en",
"fr",
"es",
"de",
"it",
"pt",
"nl",
"sv",
"pl",
"ru",
"ar",
"tr",
"zh",
"ja",
"ko",
"hi",
"vi"
] | TAGS
#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #es #de #it #pt #nl #sv #pl #ru #ar #tr #zh #ja #ko #hi #vi #arxiv-1901.07291 #arxiv-1911.02116 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
| xlm-mlm-17-1280
===============
Table of Contents
=================
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
Model Details
=============
xlm-mlm-17-1280 is the XLM model, which was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau, trained on text in 17 languages. The model is a transformer pretrained using a masked language modeling (MLM) objective.
Model Description
-----------------
* Developed by: See associated paper and GitHub Repo
* Model type: Language model
* Language(s) (NLP): 17 languages, see GitHub Repo for full list.
* License: CC-BY-NC-4.0
* Related Models: xlm-mlm-100-1280
* Resources for more information:
+ Associated paper
+ GitHub Repo
+ Hugging Face Multilingual Models for Inference docs
Uses
====
Direct Use
----------
The model is a language model. The model can be used for masked language modeling.
Downstream Use
--------------
To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. Also see the associated paper.
Out-of-Scope Use
----------------
The model should not be used to intentionally create hostile or alienating environments for people.
Bias, Risks, and Limitations
============================
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
Recommendations
---------------
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training
========
This model is the XLM model trained on text in 17 languages. The preprocessing included tokenization and byte-pair-encoding. See the GitHub repo and the associated paper for further details on the training data and training procedure.
Conneau et al. (2020) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
Evaluation
==========
Testing Data, Factors & Metrics
-------------------------------
The model developers evaluated the model on the XNLI cross-lingual classification task (see the XNLI data card for more details on XNLI) using the metric of test accuracy. See the GitHub Repo for further details on the testing data, factors and metrics.
Results
-------
For xlm-mlm-17-1280, the test accuracy on the XNLI cross-lingual classification task in English (en), Spanish (es), German (de), Arabic (ar), and Chinese (zh):
See the GitHub repo for further details.
Environmental Impact
====================
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: More information needed
* Hours used: More information needed
* Cloud Provider: More information needed
* Compute Region: More information needed
* Carbon Emitted: More information needed
Technical Specifications
========================
Conneau et al. (2020) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
BibTeX:
APA:
* Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
Model Card Authors
==================
This model card was written by the team at Hugging Face.
How to Get Started with the Model
=================================
More information needed. See the ipython notebook in the associated GitHub repo for examples.
| [] | [
"TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #es #de #it #pt #nl #sv #pl #ru #ar #tr #zh #ja #ko #hi #vi #arxiv-1901.07291 #arxiv-1911.02116 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers |
# xlm-mlm-en-2048
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. It’s a transformer pretrained with either a causal language modeling (CLM) objective (next token prediction), a masked language modeling (MLM) objective (BERT-like), or
a Translation Language Modeling (TLM) objective (an extension of BERT’s MLM to multiple language inputs). This model is trained with a masked language modeling objective on English text.
## Model Description
- **Developed by:** Researchers affiliated with Facebook AI, see [associated paper](https://arxiv.org/abs/1901.07291) and [GitHub Repo](https://github.com/facebookresearch/XLM)
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** CC-BY-NC-4.0
- **Related Models:** Other [XLM models](https://huggingface.co/models?sort=downloads&search=xlm)
- **Resources for more information:**
- [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau (2019)
- [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/pdf/1911.02116.pdf) by Conneau et al. (2020)
- [GitHub Repo](https://github.com/facebookresearch/XLM)
- [Hugging Face XLM docs](https://huggingface.co/docs/transformers/model_doc/xlm)
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs. Also see the [associated paper](https://arxiv.org/abs/1901.07291).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
More information needed. See the [associated GitHub Repo](https://github.com/facebookresearch/XLM).
# Evaluation
More information needed. See the [associated GitHub Repo](https://github.com/facebookresearch/XLM).
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model. See the [Hugging Face XLM docs](https://huggingface.co/docs/transformers/model_doc/xlm) for more examples.
```python
from transformers import XLMTokenizer, XLMModel
import torch

# Load the tokenizer and the English MLM-pretrained XLM encoder
tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMModel.from_pretrained("xlm-mlm-en-2048")

# Tokenize a sentence and run a forward pass
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

# Final-layer hidden states, shape [batch_size, sequence_length, hidden_size]
last_hidden_states = outputs.last_hidden_state
```
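Since the checkpoint is tagged for fill-mask, a complementary masked-token example (an assumption, not part of the original card) could use the `fill-mask` pipeline; the example sentence is arbitrary.

```python
from transformers import pipeline

# Hypothetical sketch: predict the masked word with the fill-mask pipeline
fill_mask = pipeline("fill-mask", model="xlm-mlm-en-2048")
print(fill_mask(f"Hello, my dog is {fill_mask.tokenizer.mask_token}."))
```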
<a href="https://huggingface.co/exbert/?model=xlm-mlm-en-2048">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "cc-by-nc-4.0", "tags": ["exbert"]} | FacebookAI/xlm-mlm-en-2048 | null | [
"transformers",
"pytorch",
"tf",
"xlm",
"fill-mask",
"exbert",
"en",
"arxiv:1901.07291",
"arxiv:1911.02116",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1901.07291",
"1911.02116",
"1910.09700"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #xlm #fill-mask #exbert #en #arxiv-1901.07291 #arxiv-1911.02116 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# xlm-mlm-en-2048
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training
5. Evaluation
6. Environmental Impact
7. Citation
8. Model Card Authors
9. How To Get Started With the Model
# Model Details
The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau. It’s a transformer pretrained with either a causal language modeling (CLM) objective (next token prediction), a masked language modeling (MLM) objective (BERT-like), or
a Translation Language Modeling (TLM) objective (an extension of BERT’s MLM to multiple language inputs). This model is trained with a masked language modeling objective on English text.
## Model Description
- Developed by: Researchers affiliated with Facebook AI, see associated paper and GitHub Repo
- Model type: Language model
- Language(s) (NLP): English
- License: CC-BY-NC-4.0
- Related Models: Other XLM models
- Resources for more information:
- Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau (2019)
- Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. (2020)
- GitHub Repo
- Hugging Face XLM docs
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. Also see the associated paper.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
More information needed. See the associated GitHub Repo.
# Evaluation
More information needed. See the associated GitHub Repo.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: More information needed
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
BibTeX:
APA:
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model. See the Hugging Face XLM docs for more examples.
<a href="URL
<img width="300px" src="URL
</a>
| [
"# xlm-mlm-en-2048",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model",
"# Model Details\n\nThe XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau. It’s a transformer pretrained with either a causal language modeling (CLM) objective (next token prediction), a masked language modeling (MLM) objective (BERT-like), or\na Translation Language Modeling (TLM) object (extension of BERT’s MLM to multiple language inputs). This model is trained with a masked language modeling objective on English text.",
"## Model Description \n\n- Developed by: Researchers affiliated with Facebook AI, see associated paper and GitHub Repo\n- Model type: Language model\n- Language(s) (NLP): English\n- License: CC-BY-NC-4.0\n- Related Models: Other XLM models\n- Resources for more information: \n - Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau (2019)\n - Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. (2020)\n - GitHub Repo\n - Hugging Face XLM docs",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for masked language modeling.",
"## Downstream Use\n\nTo learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. Also see the associated paper.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training \n\nMore information needed. See the associated GitHub Repo.",
"# Evaluation \n\nMore information needed. See the associated GitHub Repo.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed\n\nBibTeX:\n\n\n\nAPA:\n- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"# Model Card Authors \n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model. See the Hugging Face XLM docs for more examples.\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #tf #xlm #fill-mask #exbert #en #arxiv-1901.07291 #arxiv-1911.02116 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# xlm-mlm-en-2048",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Citation\n8. Model Card Authors\n9. How To Get Started With the Model",
"# Model Details\n\nThe XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau. It’s a transformer pretrained with either a causal language modeling (CLM) objective (next token prediction), a masked language modeling (MLM) objective (BERT-like), or\na Translation Language Modeling (TLM) object (extension of BERT’s MLM to multiple language inputs). This model is trained with a masked language modeling objective on English text.",
"## Model Description \n\n- Developed by: Researchers affiliated with Facebook AI, see associated paper and GitHub Repo\n- Model type: Language model\n- Language(s) (NLP): English\n- License: CC-BY-NC-4.0\n- Related Models: Other XLM models\n- Resources for more information: \n - Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau (2019)\n - Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. (2020)\n - GitHub Repo\n - Hugging Face XLM docs",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for masked language modeling.",
"## Downstream Use\n\nTo learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. Also see the associated paper.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training \n\nMore information needed. See the associated GitHub Repo.",
"# Evaluation \n\nMore information needed. See the associated GitHub Repo.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed\n\nBibTeX:\n\n\n\nAPA:\n- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"# Model Card Authors \n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model. See the Hugging Face XLM docs for more examples.\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
fill-mask | transformers |
# xlm-mlm-ende-1024
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-mlm-ende-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-German. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details.
## Model Description
- **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291)
- **Model type:** Language model
- **Language(s) (NLP):** English-German
- **License:** CC-BY-NC-4.0
- **Related Models:** [xlm-clm-enfr-1024](https://huggingface.co/xlm-clm-enfr-1024), [xlm-clm-ende-1024](https://huggingface.co/xlm-clm-ende-1024), [xlm-mlm-enfr-1024](https://huggingface.co/xlm-mlm-enfr-1024), [xlm-mlm-enro-1024](https://huggingface.co/xlm-mlm-enro-1024)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
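Because this checkpoint expects language embeddings, a usage sketch (an assumption adapted from the general `transformers` multilingual documentation, not code from this card) might pass the `langs` tensor explicitly; the example sentence and chosen language are arbitrary.

```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-ende-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-ende-1024")

# The language-to-id mapping ships with the tokenizer, e.g. for English/German
print(tokenizer.lang2id)

text = f"Berlin is the capital of {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

# One language id per token, shaped [batch_size, sequence_length]
langs = torch.full_like(inputs["input_ids"], tokenizer.lang2id["en"])

with torch.no_grad():
    logits = model(input_ids=inputs["input_ids"], langs=langs).logits
```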
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
The model developers write:
> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5×10^−4.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for links, citations, and further details on the training data and training procedure.
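To make the quoted optimizer setup concrete, here is a minimal, hypothetical PyTorch sketch of Adam with a linear warm-up; the model, warm-up length, and peak learning rate are placeholder assumptions, not values taken from the paper.

```python
import torch

# Placeholder model and schedule settings (illustrative assumptions only)
model = torch.nn.Linear(1024, 1024)
warmup_steps, peak_lr = 4000, 1e-4

optimizer = torch.optim.Adam(model.parameters(), lr=peak_lr)

# Scale the learning rate linearly from 0 up to peak_lr over warmup_steps
def linear_warmup(step: int) -> float:
    return min(1.0, (step + 1) / warmup_steps)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=linear_warmup)

# Inside a training loop one would call optimizer.step() then scheduler.step()
```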
The model developers also write that:
> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.
See the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the [WMT'16 English-German](https://huggingface.co/datasets/wmt16) dataset using the [BLEU metric](https://huggingface.co/spaces/evaluate-metric/bleu). See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-ende-1024 results, see Table 1 and Table 2 of the [associated paper](https://arxiv.org/pdf/1901.07291.pdf).
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
More information needed. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details. | {"language": ["multilingual", "en", "de"], "license": "cc-by-nc-4.0"} | FacebookAI/xlm-mlm-ende-1024 | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm",
"fill-mask",
"multilingual",
"en",
"de",
"arxiv:1901.07291",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1901.07291",
"1910.09700"
] | [
"multilingual",
"en",
"de"
] | TAGS
#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #de #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# xlm-mlm-ende-1024
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
# Model Details
The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-ende-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-German. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details.
## Model Description
- Developed by: Guillaume Lample, Alexis Conneau, see associated paper
- Model type: Language model
- Language(s) (NLP): English-German
- License: CC-BY-NC-4.0
- Related Models: xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024
- Resources for more information:
- Associated paper
- GitHub Repo
- Hugging Face Multilingual Models for Inference docs
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
The model developers write:
> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5×10^−4.
See the associated paper for links, citations, and further details on the training data and training procedure.
The model developers also write that:
> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.
See the associated GitHub Repo for further details.
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the WMT'16 English-German dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-ende-1024 results, see Table 1 and Table 2 of the associated paper.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: More information needed
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the associated paper for further details.
BibTeX:
APA:
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
More information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. | [
"# xlm-mlm-ende-1024",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details\n\nThe XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-ende-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-German. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details.",
"## Model Description\n\n- Developed by: Guillaume Lample, Alexis Conneau, see associated paper\n- Model type: Language model\n- Language(s) (NLP): English-German\n- License: CC-BY-NC-4.0\n- Related Models: xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024\n- Resources for more information: \n - Associated paper\n - GitHub Repo\n - Hugging Face Multilingual Models for Inference docs",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for masked language modeling.",
"## Downstream Use\n\nTo learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nThe model developers write: \n\n> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam op- timizer (Kingma and Ba, 2014), a linear warm- up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.\n\nSee the associated paper for links, citations, and further details on the training data and training procedure.\n\nThe model developers also write that: \n\n> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.\n\nSee the associated GitHub Repo for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe model developers evaluated the model on the WMT'16 English-German dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics.",
"## Results \n\nFor xlm-mlm-ende-1024 results, see Table 1 and Table 2 of the associated paper.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nThe model developers write: \n\n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"# Model Card Authors \n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nMore information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details."
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #de #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# xlm-mlm-ende-1024",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details\n\nThe XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-ende-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-German. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details.",
"## Model Description\n\n- Developed by: Guillaume Lample, Alexis Conneau, see associated paper\n- Model type: Language model\n- Language(s) (NLP): English-German\n- License: CC-BY-NC-4.0\n- Related Models: xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024\n- Resources for more information: \n - Associated paper\n - GitHub Repo\n - Hugging Face Multilingual Models for Inference docs",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for masked language modeling.",
"## Downstream Use\n\nTo learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nThe model developers write: \n\n> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam op- timizer (Kingma and Ba, 2014), a linear warm- up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.\n\nSee the associated paper for links, citations, and further details on the training data and training procedure.\n\nThe model developers also write that: \n\n> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.\n\nSee the associated GitHub Repo for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe model developers evaluated the model on the WMT'16 English-German dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics.",
"## Results \n\nFor xlm-mlm-ende-1024 results, see Table 1 and Table 2 of the associated paper.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nThe model developers write: \n\n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"# Model Card Authors \n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nMore information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details."
] |
fill-mask | transformers |
# xlm-mlm-enfr-1024
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-mlm-enfr-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-French. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details.
## Model Description
- **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291)
- **Model type:** Language model
- **Language(s) (NLP):** English-French
- **License:** CC-BY-NC-4.0
- **Related Models:** [xlm-clm-enfr-1024](https://huggingface.co/xlm-clm-enfr-1024), [xlm-clm-ende-1024](https://huggingface.co/xlm-clm-ende-1024), [xlm-mlm-ende-1024](https://huggingface.co/xlm-mlm-ende-1024), [xlm-mlm-enro-1024](https://huggingface.co/xlm-mlm-enro-1024)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
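As a minimal sketch (not an official recipe), the checkpoint can be queried for a masked token with the Transformers XLM classes; the `langs` tensor supplies the language embedding for each position, following the pattern in the Hugging Face Multilingual Models for Inference docs. The example sentence is illustrative only.
```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-enfr-1024")

# Mask one token in an English sentence; `langs` carries the language
# embedding id ("en" or "fr") for every position.
text = f"Paris is the {tokenizer.mask_token} of France."
inputs = tokenizer(text, return_tensors="pt")
langs = torch.full_like(inputs["input_ids"], tokenizer.lang2id["en"])

with torch.no_grad():
    prediction_scores = model(**inputs, langs=langs)[0]  # (batch, seq_len, vocab)

mask_position = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(prediction_scores[0, mask_position].argmax(dim=-1)))
```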
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
The model developers write:
> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for links, citations, and further details on the training data and training procedure.
The model developers also write that:
> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.
See the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the [WMT'14 English-French](https://huggingface.co/datasets/wmt14) dataset using the [BLEU metric](https://huggingface.co/spaces/evaluate-metric/bleu). See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details on the testing data, factors and metrics.
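The snippet below is only an illustrative sketch of computing BLEU with the Hugging Face `evaluate` library on toy strings; it does not reproduce the paper's WMT'14 evaluation setup or tokenization.
```python
import evaluate

# Illustrative only: score a toy prediction against a toy reference with BLEU.
bleu = evaluate.load("bleu")
predictions = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]
print(bleu.compute(predictions=predictions, references=references)["bleu"])
```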
## Results
For xlm-mlm-enfr-1024 results, see Table 1 and Table 2 of the [associated paper](https://arxiv.org/pdf/1901.07291.pdf).
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
More information needed. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details. | {"language": ["multilingual", "en", "fr"], "license": "cc-by-nc-4.0"} | FacebookAI/xlm-mlm-enfr-1024 | null | [
"transformers",
"pytorch",
"tf",
"xlm",
"fill-mask",
"multilingual",
"en",
"fr",
"arxiv:1901.07291",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1901.07291",
"1910.09700"
] | [
"multilingual",
"en",
"fr"
] | TAGS
#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# xlm-mlm-enfr-1024
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
# Model Details
The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-enfr-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-French. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details.
## Model Description
- Developed by: Guillaume Lample, Alexis Conneau, see associated paper
- Model type: Language model
- Language(s) (NLP): English-French
- License: CC-BY-NC-4.0
- Related Models: xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-ende-1024, xlm-mlm-enro-1024
- Resources for more information:
- Associated paper
- GitHub Repo
- Hugging Face Multilingual Models for Inference docs
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
The model developers write:
> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.
See the associated paper for links, citations, and further details on the training data and training procedure.
The model developers also write that:
> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.
See the associated GitHub Repo for further details.
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the WMT'14 English-French dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-enfr-1024 results, see Table 1 and Table 2 of the associated paper.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: More information needed
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the associated paper for further details.
BibTeX:
APA:
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
More information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. | [
"# xlm-mlm-enfr-1024",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details\n\nThe XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-enfr-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-French. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details.",
"## Model Description\n\n- Developed by: Guillaume Lample, Alexis Conneau, see associated paper\n- Model type: Language model\n- Language(s) (NLP): English-French\n- License: CC-BY-NC-4.0\n- Related Models: xlm-clm-ende-1024, xlm-clm-ende-1024, xlm-mlm-ende-1024, xlm-mlm-enro-1024\n- Resources for more information: \n - Associated paper\n - GitHub Repo\n - Hugging Face Multilingual Models for Inference docs",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for masked language modeling.",
"## Downstream Use\n\nTo learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nThe model developers write: \n\n> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam op- timizer (Kingma and Ba, 2014), a linear warm- up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.\n\nSee the associated paper for links, citations, and further details on the training data and training procedure.\n\nThe model developers also write that: \n\n> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.\n\nSee the associated GitHub Repo for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe model developers evaluated the model on the WMT'14 English-French dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics.",
"## Results \n\nFor xlm-mlm-enfr-1024 results, see Table 1 and Table 2 of the associated paper.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nThe model developers write: \n\n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"# Model Card Authors \n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nMore information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details."
] | [
"TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #fr #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# xlm-mlm-enfr-1024",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details\n\nThe XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-enfr-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-French. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details.",
"## Model Description\n\n- Developed by: Guillaume Lample, Alexis Conneau, see associated paper\n- Model type: Language model\n- Language(s) (NLP): English-French\n- License: CC-BY-NC-4.0\n- Related Models: xlm-clm-ende-1024, xlm-clm-ende-1024, xlm-mlm-ende-1024, xlm-mlm-enro-1024\n- Resources for more information: \n - Associated paper\n - GitHub Repo\n - Hugging Face Multilingual Models for Inference docs",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for masked language modeling.",
"## Downstream Use\n\nTo learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nThe model developers write: \n\n> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam op- timizer (Kingma and Ba, 2014), a linear warm- up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.\n\nSee the associated paper for links, citations, and further details on the training data and training procedure.\n\nThe model developers also write that: \n\n> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.\n\nSee the associated GitHub Repo for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe model developers evaluated the model on the WMT'14 English-French dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics.",
"## Results \n\nFor xlm-mlm-enfr-1024 results, see Table 1 and Table 2 of the associated paper.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nThe model developers write: \n\n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"# Model Card Authors \n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nMore information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details."
] |
fill-mask | transformers |
# xlm-mlm-enro-1024
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-mlm-enro-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-Romanian. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details.
## Model Description
- **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291)
- **Model type:** Language model
- **Language(s) (NLP):** English-Romanian
- **License:** CC-BY-NC-4.0
- **Related Models:** [xlm-clm-enfr-1024](https://huggingface.co/xlm-clm-enfr-1024), [xlm-clm-ende-1024](https://huggingface.co/xlm-clm-ende-1024), [xlm-mlm-enfr-1024](https://huggingface.co/xlm-mlm-enfr-1024), [xlm-mlm-ende-1024](https://huggingface.co/xlm-mlm-ende-1024)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
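As a quick, hedged illustration (the example sentence and `top_k` value are arbitrary), the checkpoint can also be loaded through the Transformers fill-mask pipeline; note that the pipeline does not set language embeddings, so for explicit control over the language id see the Multilingual Models for Inference docs linked above.
```python
from transformers import pipeline

# XLM's mask token is tokenizer-specific, so read it from the pipeline
# rather than hard-coding it.
fill_mask = pipeline("fill-mask", model="xlm-mlm-enro-1024")
mask = fill_mask.tokenizer.mask_token
for prediction in fill_mask(f"Bucharest is the capital of {mask}.", top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```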
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
The model developers write:
> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for links, citations, and further details on the training data and training procedure.
The model developers also write that:
> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.
See the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the [WMT'16 English-Romanian](https://huggingface.co/datasets/wmt16) dataset using the [BLEU metric](https://huggingface.co/spaces/evaluate-metric/bleu). See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-enro-1024 results, see Tables 1-3 of the [associated paper](https://arxiv.org/pdf/1901.07291.pdf).
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
More information needed. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details. | {"language": ["multilingual", "en", "ro"], "license": "cc-by-nc-4.0"} | FacebookAI/xlm-mlm-enro-1024 | null | [
"transformers",
"pytorch",
"tf",
"xlm",
"fill-mask",
"multilingual",
"en",
"ro",
"arxiv:1901.07291",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1901.07291",
"1910.09700"
] | [
"multilingual",
"en",
"ro"
] | TAGS
#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #ro #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# xlm-mlm-enro-1024
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
# Model Details
The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-enro-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-Romanian. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details.
## Model Description
- Developed by: Guillaume Lample, Alexis Conneau, see associated paper
- Model type: Language model
- Language(s) (NLP): English-Romanian
- License: CC-BY-NC-4.0
- Related Models: xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-ende-1024
- Resources for more information:
- Associated paper
- GitHub Repo
- Hugging Face Multilingual Models for Inference docs
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
The model developers write:
> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.
See the associated paper for links, citations, and further details on the training data and training procedure.
The model developers also write that:
> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.
See the associated GitHub Repo for further details.
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the WMT'16 English-Romanian dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-enro-1024 results, see Tables 1-3 of the associated paper.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: More information needed
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the associated paper for further details.
BibTeX:
APA:
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
More information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. | [
"# xlm-mlm-enro-1024",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details\n\nThe XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-enro-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-Romanian. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details.",
"## Model Description\n\n- Developed by: Guillaume Lample, Alexis Conneau, see associated paper\n- Model type: Language model\n- Language(s) (NLP): English-Romanian\n- License: license: cc-by-nc-4.0\n- Related Models: xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-ende-1024\n- Resources for more information: \n - Associated paper\n - GitHub Repo\n - Hugging Face Multilingual Models for Inference docs",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for masked language modeling.",
"## Downstream Use\n\nTo learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nThe model developers write: \n\n> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam op- timizer (Kingma and Ba, 2014), a linear warm- up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.\n\nSee the associated paper for links, citations, and further details on the training data and training procedure.\n\nThe model developers also write that: \n\n> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.\n\nSee the associated GitHub Repo for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe model developers evaluated the model on the WMT'16 English-Romanian dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics.",
"## Results \n\nFor xlm-mlm-enro-1024 results, see Tables 1-3 of the associated paper.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nThe model developers write: \n\n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"# Model Card Authors \n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nMore information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details."
] | [
"TAGS\n#transformers #pytorch #tf #xlm #fill-mask #multilingual #en #ro #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# xlm-mlm-enro-1024",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details\n\nThe XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-enro-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-Romanian. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details.",
"## Model Description\n\n- Developed by: Guillaume Lample, Alexis Conneau, see associated paper\n- Model type: Language model\n- Language(s) (NLP): English-Romanian\n- License: license: cc-by-nc-4.0\n- Related Models: xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-ende-1024\n- Resources for more information: \n - Associated paper\n - GitHub Repo\n - Hugging Face Multilingual Models for Inference docs",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for masked language modeling.",
"## Downstream Use\n\nTo learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nThe model developers write: \n\n> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam op- timizer (Kingma and Ba, 2014), a linear warm- up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.\n\nSee the associated paper for links, citations, and further details on the training data and training procedure.\n\nThe model developers also write that: \n\n> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.\n\nSee the associated GitHub Repo for further details.",
"# Evaluation",
"## Testing Data, Factors & Metrics\n\nThe model developers evaluated the model on the WMT'16 English-Romanian dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics.",
"## Results \n\nFor xlm-mlm-enro-1024 results, see Tables 1-3 of the associated paper.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nThe model developers write: \n\n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"# Model Card Authors \n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nMore information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details."
] |
fill-mask | transformers |
# xlm-mlm-tlm-xnli15-1024
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-mlm-tlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective in combination with a translation language modeling (TLM) objective and then fine-tuned on the English NLI dataset. The model developers evaluated the capacity of the model to make correct predictions in all 15 XNLI languages (see the [XNLI data card](https://huggingface.co/datasets/xnli) for further information on XNLI).
## Model Description
- **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291)
- **Model type:** Language model
- **Language(s) (NLP):** English; evaluated in 15 languages (see [XNLI data card](https://huggingface.co/datasets/xnli))
- **License:** CC-BY-NC-4.0
- **Related Models:** [XLM models](https://huggingface.co/models?sort=downloads&search=xlm)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo for XLM](https://github.com/facebookresearch/XLM)
- [GitHub Repo for XNLI](https://github.com/facebookresearch/XNLI)
- [XNLI data card](https://huggingface.co/datasets/xnli)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for cross-lingual text classification. Though the model is fine-tuned based on English text data, the model's ability to classify sentences in 14 other languages has been evaluated (see [Evaluation](#evaluation)).
## Downstream Use
This model can be used for downstream tasks related to natural language inference in different languages. For more information, see the [associated paper](https://arxiv.org/abs/1901.07291).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training Details
Training details are culled from the [associated paper](https://arxiv.org/pdf/1901.07291.pdf). See the paper for links, citations, and further details. Also see the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
## Training Data
The model developers write:
> We use WikiExtractor2 to extract raw sentences from Wikipedia dumps and use them as mono-lingual data for the CLM and MLM objectives. For the TLM objective, we only use parallel data that involves English, similar to Conneau et al. (2018b).
> - Precisely, we use MultiUN (Ziemski et al., 2016) for French, Spanish, Russian, Arabic and Chinese, and the IIT Bombay corpus (Anoop et al., 2018) for Hindi.
> - We extract the following corpora from the OPUS 3 website Tiedemann (2012): the EUbookshop corpus for German, Greek and Bulgarian, OpenSubtitles 2018 for Turkish, Vietnamese and Thai, Tanzil for both Urdu and Swahili and GlobalVoices for Swahili.
> - For Chinese, Japanese and Thai we use the tokenizer of Chang et al. (2008), the Kytea4 tokenizer, and the PyThaiNLP5 tokenizer respectively.
> - For all other languages, we use the tokenizer provided by Moses (Koehn et al., 2007), falling back on the default English tokenizer when necessary.
For fine-tuning, the developers used the English NLI dataset (see the [XNLI data card](https://huggingface.co/datasets/xnli)).
## Training Procedure
### Preprocessing
The model developers write:
> We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1.
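In practice, the BPE codes and vocabulary referenced above ship with the hosted tokenizer, so loading it reproduces the same subword preprocessing; the snippet below is a small sketch assuming the Transformers `XLMTokenizer` (the example sentence is arbitrary).
```python
from transformers import XLMTokenizer

# The hosted files include the learned BPE codes and vocabulary, so the
# tokenizer splits words into the same subword units used during pretraining.
tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-tlm-xnli15-1024")
print(tokenizer.tokenize("Cross-lingual pretraining works surprisingly well."))
```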
### Speeds, Sizes, Times
The model developers write:
> We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.
>
> For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens.
>
> When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10^−4 to 2.10^−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work better than using the first hidden state.
>
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
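To make the quoted recipe concrete, the following is a minimal, hypothetical sketch (not the authors' code) of the classification setup it describes: the first hidden state of the last transformer layer feeds a randomly initialized linear classifier, and both are fine-tuned together. The premise/hypothesis pair and the three-way label space are assumptions for illustration.
```python
import torch
from torch import nn
from transformers import XLMModel, XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-tlm-xnli15-1024")
encoder = XLMModel.from_pretrained("xlm-mlm-tlm-xnli15-1024")
classifier = nn.Linear(encoder.config.emb_dim, 3)  # entailment / neutral / contradiction

premise = "The cat sat on the mat."
hypothesis = "A cat is on the mat."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

last_hidden_state = encoder(**inputs)[0]  # (batch, seq_len, hidden)
first_hidden = last_hidden_state[:, 0]    # first hidden state of the last layer
logits = classifier(first_hidden)         # fine-tune encoder + classifier end to end
print(logits.shape)                       # torch.Size([1, 3])
```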
# Evaluation
## Testing Data, Factors & Metrics
After fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy. See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details on the testing data, factors and metrics.
## Results
|Language| en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur |
|:------:|:--:|:---:|:--:|:--:|:--:|:--:|:---:|:--:|:--:|:--:|:--:|:---:|:--:|:--:|:--:|
|Accuracy|85.0|78.7 |78.9|77.8|76.6|77.4|75.3 |72.5|73.1|76.1|73.2|76.5 |69.6|68.4|67.3|
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 64 Volta GPUs
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
Details are culled from the [associated paper](https://arxiv.org/pdf/1901.07291.pdf). See the paper for links, citations, and further details. Also see the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
## Model Architecture and Objective
xlm-mlm-tlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective in combination with a translation language modeling (TLM) objective and then fine-tuned on the English NLI dataset. About the TLM objective, the developers write:
> We introduce a new translation language modeling (TLM) objective for improving cross-lingual pretraining. Our TLM objective is an extension of MLM, where instead of considering monolingual text streams, we concatenate parallel sentences as illustrated in Figure 1. We randomly mask words in both the source and target sentences. To predict a word masked in an English sentence, the model can either attend to surrounding English words or to the French translation, encouraging the model to align the English and French representations.
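A rough, hypothetical sketch of that idea (not the authors' training code): concatenate an English sentence with its French translation, give each segment its own language id, and mask roughly 15% of positions on both sides. The sentence pair, masking rate, and label convention are assumptions for illustration; a real implementation would also avoid masking special tokens.
```python
import torch
from transformers import XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-tlm-xnli15-1024")

en = "The cat sits on the mat."
fr = "Le chat est assis sur le tapis."

# Encode the parallel pair as one sequence: <s> en </s> fr </s>
enc = tokenizer(en, fr, return_tensors="pt")
input_ids = enc["input_ids"]

# Language embeddings: English ids for the first segment, French for the rest.
en_len = len(tokenizer(en)["input_ids"])  # length of "<s> en </s>"
fr_len = input_ids.shape[1] - en_len
langs = torch.tensor([[tokenizer.lang2id["en"]] * en_len +
                      [tokenizer.lang2id["fr"]] * fr_len])

# Randomly mask ~15% of positions in both segments; unmasked positions get
# label -100 so a standard masked-LM loss ignores them.
labels = input_ids.clone()
mask = torch.rand(input_ids.shape) < 0.15
input_ids = input_ids.masked_fill(mask, tokenizer.mask_token_id)
labels = labels.masked_fill(~mask, -100)
```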
## Compute Infrastructure
### Hardware and Software
The developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details. | {"language": ["multilingual", "en", "fr", "es", "de", "el", "bg", "ru", "tr", "ar", "vi", "th", "zh", "hi", "sw", "ur"], "license": "cc-by-nc-4.0"} | FacebookAI/xlm-mlm-tlm-xnli15-1024 | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm",
"fill-mask",
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur",
"arxiv:1901.07291",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1901.07291",
"1910.09700"
] | [
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur"
] | TAGS
#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
| xlm-mlm-tlm-xnli15-1024
=======================
Table of Contents
=================
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training Details
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
Model Details
=============
The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-tlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective in combination with a translation language modeling (TLM) objective and then fine-tuned on the English NLI dataset. The model developers evaluated the capacity of the model to make correct predictions in all 15 XNLI languages (see the XNLI data card for further information on XNLI).
Model Description
-----------------
* Developed by: Guillaume Lample, Alexis Conneau, see associated paper
* Model type: Language model
* Language(s) (NLP): English; evaluated in 15 languages (see XNLI data card)
* License: CC-BY-NC-4.0
* Related Models: XLM models
* Resources for more information:
+ Associated paper
+ GitHub Repo for XLM
+ GitHub Repo for XNLI
+ XNLI data card
+ Hugging Face Multilingual Models for Inference docs
Uses
====
Direct Use
----------
The model is a language model. The model can be used for cross-lingual text classification. Though the model is fine-tuned based on English text data, the model's ability to classify sentences in 14 other languages has been evaluated (see Evaluation).
Downstream Use
--------------
This model can be used for downstream tasks related to natural language inference in different languages. For more information, see the associated paper.
Out-of-Scope Use
----------------
The model should not be used to intentionally create hostile or alienating environments for people.
Bias, Risks, and Limitations
============================
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
Recommendations
---------------
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training Details
================
Training details are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details.
Training Data
-------------
The model developers write:
>
> We use WikiExtractor2 to extract raw sentences from Wikipedia dumps and use them as mono-lingual data for the CLM and MLM objectives. For the TLM objective, we only use parallel data that involves English, similar to Conneau et al. (2018b).
>
>
> * Precisely, we use MultiUN (Ziemski et al., 2016) for French, Spanish, Russian, Arabic and Chinese, and the IIT Bombay corpus (Anoop et al., 2018) for Hindi.
> * We extract the following corpora from the OPUS 3 website Tiedemann (2012): the EUbookshop corpus for German, Greek and Bulgarian, OpenSubtitles 2018 for Turkish, Vietnamese and Thai, Tanzil for both Urdu and Swahili and GlobalVoices for Swahili.
> * For Chinese, Japanese and Thai we use the tokenizer of Chang et al. (2008), the Kytea4 tokenizer, and the PyThaiNLP5 tokenizer respectively.
> * For all other languages, we use the tokenizer provided by Moses (Koehn et al., 2007), falling back on the default English tokenizer when necessary.
>
>
>
For fine-tuning, the developers used the English NLI dataset (see the XNLI data card).
Training Procedure
------------------
### Preprocessing
The model developers write:
>
> We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1.
>
>
>
### Speeds, Sizes, Times
The model developers write:
>
> We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.
>
>
> For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens.
>
>
> When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10^−4 to 2.10^−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work better than using the first hidden state.
>
>
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
>
>
>
Evaluation
==========
Testing Data, Factors & Metrics
-------------------------------
After fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy. See the associated paper for further details.
Results
-------
Environmental Impact
====================
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: 64 Volta GPUs
* Hours used: More information needed
* Cloud Provider: More information needed
* Compute Region: More information needed
* Carbon Emitted: More information needed
Technical Specifications
========================
Details are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details.
Model Architecture and Objective
--------------------------------
xlm-mlm-tlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective in combination with a translation language modeling (TLM) objective and then fine-tuned on the English NLI dataset. About the TLM objective, the developers write:
>
> We introduce a new translation language modeling (TLM) objective for improving cross-lingual pretraining. Our TLM objective is an extension of MLM, where instead of considering monolingual text streams, we concatenate parallel sentences as illustrated in Figure 1. We randomly mask words in both the source and target sentences. To predict a word masked in an English sentence, the model can either attend to surrounding English words or to the French translation, encouraging the model to align the English and French representations.
>
>
>
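
As a rough illustration of this objective (not the authors' code), the sketch below concatenates a parallel English/French pair into a single stream and masks tokens on both sides; the separator token and masking rate are assumptions made for the example.

```python
# Toy sketch of TLM-style input construction: a parallel sentence pair is joined
# into one stream and tokens are masked on both the source and target side, so
# the model can use the translation as extra context when predicting them.
import random

MASK = "[MASK]"

def make_tlm_example(src_tokens, tgt_tokens, mask_prob=0.15, seed=0):
    """Concatenate a parallel pair into one stream and mask tokens on both sides."""
    rng = random.Random(seed)
    stream = src_tokens + ["</s>"] + tgt_tokens      # assumed separator token
    masked, labels = [], []
    for tok in stream:
        if tok != "</s>" and rng.random() < mask_prob:
            masked.append(MASK)
            labels.append(tok)                       # the model must recover this token
        else:
            masked.append(tok)
            labels.append(None)                      # no prediction at this position
    return masked, labels

en = "the cat sat on the mat".split()
fr = "le chat était assis sur le tapis".split()
inputs, targets = make_tlm_example(en, fr)
print(inputs)
print(targets)
```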
Compute Infrastructure
----------------------
### Hardware and Software
The developers write:
>
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
>
>
>
BibTeX:
APA:
* Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
Model Card Authors
==================
This model card was written by the team at Hugging Face.
How to Get Started with the Model
=================================
This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details.
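A minimal sketch of supplying language ids at inference time, following the pattern shown in the transformers multilingual documentation; the checkpoint identifier and the "en" language code are assumptions made for this example.

```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

# Checkpoint identifier assumed; adjust to the repository name you are using.
tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-tlm-xnli15-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-tlm-xnli15-1024")

input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])  # batch of 1

# Build a parallel tensor of language ids so the model adds the matching
# language embedding at every position.
language_id = tokenizer.lang2id["en"]
langs = torch.full_like(input_ids, language_id)

outputs = model(input_ids, langs=langs)
```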
| [
"### Preprocessing\n\n\nThe model developers write:\n\n\n\n> \n> We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1.\n> \n> \n>",
"### Speeds, Sizes, Times\n\n\nThe model developers write:\n\n\n\n> \n> We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.\n> \n> \n> For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens.\n> \n> \n> When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10−4 to 2.10−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work bet- ter than using the first hidden state.\n> \n> \n> We implement all our models in Py-Torch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n> \n> \n> \n\n\nEvaluation\n==========\n\n\nTesting Data, Factors & Metrics\n-------------------------------\n\n\nAfter fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy.See the associated paper for further details.\n\n\nResults\n-------\n\n\n\nEnvironmental Impact\n====================\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: 64 Volta GPUs\n* Hours used: More information needed\n* Cloud Provider: More information needed\n* Compute Region: More information needed\n* Carbon Emitted: More information needed\n\n\nTechnical Specifications\n========================\n\n\nDetails are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details.\n\n\nModel Architecture and Objective\n--------------------------------\n\n\nxlm-mlm-tlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective in combination with a translation language modeling (TLM) objective and then fine-tuned on the English NLI dataset. About the TLM objective, the developers write:\n\n\n\n> \n> We introduce a new translation language modeling (TLM) objective for improving cross-lingual pretraining. Our TLM objective is an extension of MLM, where instead of considering monolingual text streams, we concatenate parallel sentences as illustrated in Figure 1. We randomly mask words in both the source and target sentences. 
To predict a word masked in an English sentence, the model can either attend to surrounding English words or to the French translation, encouraging the model to align the English and French representations.\n> \n> \n> \n\n\nCompute Infrastructure\n----------------------",
"### Hardware and Software\n\n\nThe developers write:\n\n\n\n> \n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n> \n> \n> \n\n\nBibTeX:\n\n\nAPA:\n\n\n* Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.\n\n\nModel Card Authors\n==================\n\n\nThis model card was written by the team at Hugging Face.\n\n\nHow to Get Started with the Model\n=================================\n\n\nThis model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details."
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Preprocessing\n\n\nThe model developers write:\n\n\n\n> \n> We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1.\n> \n> \n>",
"### Speeds, Sizes, Times\n\n\nThe model developers write:\n\n\n\n> \n> We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.\n> \n> \n> For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens.\n> \n> \n> When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10−4 to 2.10−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work bet- ter than using the first hidden state.\n> \n> \n> We implement all our models in Py-Torch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n> \n> \n> \n\n\nEvaluation\n==========\n\n\nTesting Data, Factors & Metrics\n-------------------------------\n\n\nAfter fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy.See the associated paper for further details.\n\n\nResults\n-------\n\n\n\nEnvironmental Impact\n====================\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: 64 Volta GPUs\n* Hours used: More information needed\n* Cloud Provider: More information needed\n* Compute Region: More information needed\n* Carbon Emitted: More information needed\n\n\nTechnical Specifications\n========================\n\n\nDetails are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details.\n\n\nModel Architecture and Objective\n--------------------------------\n\n\nxlm-mlm-tlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective in combination with a translation language modeling (TLM) objective and then fine-tuned on the English NLI dataset. About the TLM objective, the developers write:\n\n\n\n> \n> We introduce a new translation language modeling (TLM) objective for improving cross-lingual pretraining. Our TLM objective is an extension of MLM, where instead of considering monolingual text streams, we concatenate parallel sentences as illustrated in Figure 1. We randomly mask words in both the source and target sentences. 
To predict a word masked in an English sentence, the model can either attend to surrounding English words or to the French translation, encouraging the model to align the English and French representations.\n> \n> \n> \n\n\nCompute Infrastructure\n----------------------",
"### Hardware and Software\n\n\nThe developers write:\n\n\n\n> \n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n> \n> \n> \n\n\nBibTeX:\n\n\nAPA:\n\n\n* Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.\n\n\nModel Card Authors\n==================\n\n\nThis model card was written by the team at Hugging Face.\n\n\nHow to Get Started with the Model\n=================================\n\n\nThis model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details."
] |
fill-mask | transformers |
# xlm-mlm-xnli15-1024
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-mlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective fine-tuned on the English NLI dataset. The model developers evaluated the capacity of the model to make correct predictions in all 15 XNLI languages (see the [XNLI data card](https://huggingface.co/datasets/xnli) for further information on XNLI).
## Model Description
- **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291)
- **Model type:** Language model
- **Language(s) (NLP):** English; evaluated in 15 languages (see [XNLI data card](https://huggingface.co/datasets/xnli))
- **License:** CC-BY-NC-4.0
- **Related Models:** [XLM models](https://huggingface.co/models?sort=downloads&search=xlm)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo for XLM](https://github.com/facebookresearch/XLM)
- [GitHub Repo for XNLI](https://github.com/facebookresearch/XNLI)
- [XNLI data card](https://huggingface.co/datasets/xnli)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for cross-lingual text classification. Though the model is fine-tuned based on English text data, the model's ability to classify sentences in 14 other languages has been evaluated (see [Evaluation](#evaluation)).
## Downstream Use
This model can be used for downstream tasks related to natural language inference in different languages. For more information, see the [associated paper](https://arxiv.org/abs/1901.07291).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training Details
Training details are culled from the [associated paper](https://arxiv.org/pdf/1901.07291.pdf). See the paper for links, citations, and further details. Also see the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
## Training Data
The model developers write:
> We use WikiExtractor to extract raw sentences from Wikipedia dumps and use them as monolingual data for the CLM and MLM objectives. For the TLM objective, we only use parallel data that involves English, similar to Conneau et al. (2018b).
> - Precisely, we use MultiUN (Ziemski et al., 2016) for French, Spanish, Russian, Arabic and Chinese, and the IIT Bombay corpus (Anoop et al., 2018) for Hindi.
> - We extract the following corpora from the OPUS website (Tiedemann, 2012): the EUbookshop corpus for German, Greek and Bulgarian, OpenSubtitles 2018 for Turkish, Vietnamese and Thai, Tanzil for both Urdu and Swahili and GlobalVoices for Swahili.
> - For Chinese, Japanese and Thai we use the tokenizer of Chang et al. (2008), the Kytea tokenizer, and the PyThaiNLP tokenizer respectively.
> - For all other languages, we use the tokenizer provided by Moses (Koehn et al., 2007), falling back on the default English tokenizer when necessary.
For fine-tuning, the developers used the English NLI dataset (see the [XNLI data card](https://huggingface.co/datasets/xnli)).
## Training Procedure
### Preprocessing
The model developers write:
> We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1.
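
As a rough sketch of learning a joint subword vocabulary, the example below uses the Hugging Face `tokenizers` library as a stand-in for fastBPE; the file paths are hypothetical and the 95k vocabulary size is taken from the fine-tuning details quoted below.

```python
# Illustrative substitute for fastBPE: train one joint BPE model over text
# sampled from several languages, then save it for later tokenization.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="<unk>"))
tokenizer.pre_tokenizer = Whitespace()

trainer = BpeTrainer(
    vocab_size=95000,  # the card later mentions a 95k vocabulary for the XNLI-15 setup
    special_tokens=["<s>", "</s>", "<pad>", "<unk>", "<mask>"],
)

# Hypothetical per-language files, sampled as described in Section 3.1 of the paper.
files = [f"sampled.{lang}.txt" for lang in ["en", "fr", "es", "de", "ru"]]
tokenizer.train(files, trainer)
tokenizer.save("joint-bpe.json")
```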
### Speeds, Sizes, Times
The model developers write:
> We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.
>
> For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens.
>
> When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10^−4 to 2.10^−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work better than using the first hidden state.
>
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
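
A minimal sketch of the optimizer setup described above (Adam with a linear warm-up), written in plain PyTorch; the warm-up length and the tiny placeholder module are assumptions, not values from the released training code.

```python
import torch

model = torch.nn.Linear(1024, 1024)          # placeholder for the 1024-dim Transformer
peak_lr = 5e-4                               # upper end of the quoted learning-rate range
warmup_steps = 4000                          # assumed warm-up length

optimizer = torch.optim.Adam(model.parameters(), lr=peak_lr)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: min(1.0, (step + 1) / warmup_steps)
)

for step in range(10):                       # placeholder training loop
    # ... forward pass and loss.backward() would go here ...
    optimizer.step()
    scheduler.step()
```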
# Evaluation
## Testing Data, Factors & Metrics
After fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy. See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details.
## Results
|Language| en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur |
|:------:|:--:|:---:|:--:|:--:|:--:|:--:|:---:|:--:|:--:|:--:|:--:|:---:|:--:|:--:|:--:|
|Accuracy|83.2|76.5 |76.3|74.2|73.1|74.0|73.1 |67.8|68.5|71.2|69.2|71.9 |65.7|64.6|63.4|
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 64 Volta GPUs
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
Details are culled from the [associated paper](https://arxiv.org/pdf/1901.07291.pdf). See the paper for links, citations, and further details. Also see the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
## Model Architecture and Objective
xlm-mlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective fine-tuned on the English NLI dataset. About the MLM objective, the developers write:
> We also consider the masked language modeling (MLM) objective of Devlin et al. (2018), also known as the Cloze task (Taylor, 1953). Following Devlin et al. (2018), we sample randomly 15% of the BPE tokens from the text streams, replace them by a [MASK] token 80% of the time, by a random token 10% of the time, and we keep them unchanged 10% of the time. Differences between our approach and the MLM of Devlin et al. (2018) include the use of text streams of an arbitrary number of sentences (truncated at 256 tokens) instead of pairs of sentences. To counter the imbalance between rare and frequent tokens (e.g. punctuations or stop words), we also subsample the frequent outputs using an approach similar to Mikolov et al. (2013b): tokens in a text stream are sampled according to a multinomial distribution, whose weights are proportional to the square root of their invert frequencies. Our MLM objective is illustrated in Figure 1.
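
The 80%/10%/10% masking rule can be sketched as follows; this is a simplified PyTorch illustration that omits the frequency-based subsampling step, and the mask id and vocabulary size are placeholders.

```python
import torch

def mask_tokens(input_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Apply the 80/10/10 masking rule; labels are -100 where no loss is taken."""
    labels = input_ids.clone()
    selected = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~selected] = -100

    inputs = input_ids.clone()
    # 80% of the selected positions become the [MASK] token
    masked = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & selected
    inputs[masked] = mask_id

    # half of the remaining selected positions (10% overall) become a random token
    random_pos = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & selected & ~masked
    inputs[random_pos] = torch.randint(vocab_size, input_ids.shape)[random_pos]

    # the last 10% of selected positions are left unchanged
    return inputs, labels

stream = torch.randint(5, 95000, (1, 256))   # placeholder token ids for a 256-token stream
inputs, labels = mask_tokens(stream, mask_id=4, vocab_size=95000)
```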
## Compute Infrastructure
### Hardware and Software
The developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
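
A minimal sketch of float16 mixed-precision training with `torch.cuda.amp`; the tiny model, data, and loop are placeholders rather than the authors' pipeline.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(1024, 2).to(device)              # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for _ in range(3):                                       # placeholder mini-batches
    inputs = torch.randn(8, 1024, device=device)
    targets = torch.randint(0, 2, (8,), device=device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()                        # scale loss to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```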
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details. | {"language": ["multilingual", "en", "fr", "es", "de", "el", "bg", "ru", "tr", "ar", "vi", "th", "zh", "hi", "sw", "ur"], "license": "cc-by-nc-4.0"} | FacebookAI/xlm-mlm-xnli15-1024 | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm",
"fill-mask",
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur",
"arxiv:1901.07291",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1901.07291",
"1910.09700"
] | [
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur"
] | TAGS
#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
| xlm-mlm-xnli15-1024
===================
Table of Contents
=================
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training Details
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
Model Details
=============
The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective fine-tuned on the English NLI dataset. The model developers evaluated the capacity of the model to make correct predictions in all 15 XNLI languages (see the XNLI data card for further information on XNLI).
Model Description
-----------------
* Developed by: Guillaume Lample, Alexis Conneau, see associated paper
* Model type: Language model
* Language(s) (NLP): English; evaluated in 15 languages (see XNLI data card)
* License: CC-BY-NC-4.0
* Related Models: XLM models
* Resources for more information:
+ Associated paper
+ GitHub Repo for XLM
+ GitHub Repo for XNLI
+ XNLI data card
+ Hugging Face Multilingual Models for Inference docs
Uses
====
Direct Use
----------
The model is a language model. The model can be used for cross-lingual text classification. Though the model is fine-tuned based on English text data, the model's ability to classify sentences in 14 other languages has been evaluated (see Evaluation).
Downstream Use
--------------
This model can be used for downstream tasks related to natural language inference in different languages. For more information, see the associated paper.
Out-of-Scope Use
----------------
The model should not be used to intentionally create hostile or alienating environments for people.
Bias, Risks, and Limitations
============================
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
Recommendations
---------------
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Training Details
================
Training details are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details.
Training Data
-------------
The model developers write:
>
> We use WikiExtractor to extract raw sentences from Wikipedia dumps and use them as monolingual data for the CLM and MLM objectives. For the TLM objective, we only use parallel data that involves English, similar to Conneau et al. (2018b).
>
>
> * Precisely, we use MultiUN (Ziemski et al., 2016) for French, Spanish, Russian, Arabic and Chinese, and the IIT Bombay corpus (Anoop et al., 2018) for Hindi.
> * We extract the following corpora from the OPUS website (Tiedemann, 2012): the EUbookshop corpus for German, Greek and Bulgarian, OpenSubtitles 2018 for Turkish, Vietnamese and Thai, Tanzil for both Urdu and Swahili and GlobalVoices for Swahili.
> * For Chinese, Japanese and Thai we use the tokenizer of Chang et al. (2008), the Kytea tokenizer, and the PyThaiNLP tokenizer respectively.
> * For all other languages, we use the tokenizer provided by Moses (Koehn et al., 2007), falling back on the default English tokenizer when necessary.
>
>
>
For fine-tuning, the developers used the English NLI dataset (see the XNLI data card).
Training Procedure
------------------
### Preprocessing
The model developers write:
>
> We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1.
>
>
>
### Speeds, Sizes, Times
The model developers write:
>
> We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.
>
>
> For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens.
>
>
> When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10^−4 to 2.10^−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work better than using the first hidden state.
>
>
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
>
>
>
Evaluation
==========
Testing Data, Factors & Metrics
-------------------------------
After fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy. See the associated paper for further details.
Results
-------
Environmental Impact
====================
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type: 64 Volta GPUs
* Hours used: More information needed
* Cloud Provider: More information needed
* Compute Region: More information needed
* Carbon Emitted: More information needed
Technical Specifications
========================
Details are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details.
Model Architecture and Objective
--------------------------------
xlm-mlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective fine-tuned on the English NLI dataset. About the MLM objective, the developers write:
>
> We also consider the masked language modeling (MLM) objective of Devlin et al. (2018), also known as the Cloze task (Taylor, 1953). Following Devlin et al. (2018), we sample randomly 15% of the BPE tokens from the text streams, replace them by a [MASK] token 80% of the time, by a random token 10% of the time, and we keep them unchanged 10% of the time. Differences between our approach and the MLM of Devlin et al. (2018) include the use of text streams of an arbitrary number of sentences (truncated at 256 tokens) instead of pairs of sentences. To counter the imbalance between rare and frequent tokens (e.g. punctuations or stop words), we also subsample the frequent outputs using an approach similar to Mikolov et al. (2013b): tokens in a text stream are sampled according to a multinomial distribution, whose weights are proportional to the square root of their invert frequencies. Our MLM objective is illustrated in Figure 1.
>
>
>
Compute Infrastructure
----------------------
### Hardware and Software
The developers write:
>
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
>
>
>
BibTeX:
APA:
* Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
Model Card Authors
==================
This model card was written by the team at Hugging Face.
How to Get Started with the Model
=================================
This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details.
| [
"### Preprocessing\n\n\nThe model developers write:\n\n\n\n> \n> We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1.\n> \n> \n>",
"### Speeds, Sizes, Times\n\n\nThe model developers write:\n\n\n\n> \n> We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.\n> \n> \n> For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens.\n> \n> \n> When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10−4 to 2.10−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work bet- ter than using the first hidden state.\n> \n> \n> We implement all our models in Py-Torch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n> \n> \n> \n\n\nEvaluation\n==========\n\n\nTesting Data, Factors & Metrics\n-------------------------------\n\n\nAfter fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy.See the associated paper for further details.\n\n\nResults\n-------\n\n\n\nEnvironmental Impact\n====================\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: 64 Volta GPUs\n* Hours used: More information needed\n* Cloud Provider: More information needed\n* Compute Region: More information needed\n* Carbon Emitted: More information needed\n\n\nTechnical Specifications\n========================\n\n\nDetails are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details.\n\n\nModel Architecture and Objective\n--------------------------------\n\n\nxlm-mlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective fine-tuned on the English NLI dataset. About the MLM objective, the developers write:\n\n\n\n> \n> We also consider the masked language model- ing (MLM) objective of Devlin et al. (2018), also known as the Cloze task (Taylor, 1953). Follow- ing Devlin et al. (2018), we sample randomly 15% of the BPE tokens from the text streams, replace them by a [MASK] token 80% of the time, by a random token 10% of the time, and we keep them unchanged 10% of the time. Differences be- tween our approach and the MLM of Devlin et al. 
(2018) include the use of text streams of an ar- bitrary number of sentences (truncated at 256 to- kens) instead of pairs of sentences. To counter the imbalance between rare and frequent tokens (e.g. punctuations or stop words), we also subsample the frequent outputs using an approach similar to Mikolov et al. (2013b): tokens in a text stream are sampled according to a multinomial distribution, whose weights are proportional to the square root of their invert frequencies. Our MLM objective is illustrated in Figure 1.\n> \n> \n> \n\n\nCompute Infrastructure\n----------------------",
"### Hardware and Software\n\n\nThe developers write:\n\n\n\n> \n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n> \n> \n> \n\n\nBibTeX:\n\n\nAPA:\n\n\n* Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.\n\n\nModel Card Authors\n==================\n\n\nThis model card was written by the team at Hugging Face.\n\n\nHow to Get Started with the Model\n=================================\n\n\nThis model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details."
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #xlm #fill-mask #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #arxiv-1901.07291 #arxiv-1910.09700 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Preprocessing\n\n\nThe model developers write:\n\n\n\n> \n> We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1.\n> \n> \n>",
"### Speeds, Sizes, Times\n\n\nThe model developers write:\n\n\n\n> \n> We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.\n> \n> \n> For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens.\n> \n> \n> When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10−4 to 2.10−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work bet- ter than using the first hidden state.\n> \n> \n> We implement all our models in Py-Torch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n> \n> \n> \n\n\nEvaluation\n==========\n\n\nTesting Data, Factors & Metrics\n-------------------------------\n\n\nAfter fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy.See the associated paper for further details.\n\n\nResults\n-------\n\n\n\nEnvironmental Impact\n====================\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type: 64 Volta GPUs\n* Hours used: More information needed\n* Cloud Provider: More information needed\n* Compute Region: More information needed\n* Carbon Emitted: More information needed\n\n\nTechnical Specifications\n========================\n\n\nDetails are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details.\n\n\nModel Architecture and Objective\n--------------------------------\n\n\nxlm-mlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective fine-tuned on the English NLI dataset. About the MLM objective, the developers write:\n\n\n\n> \n> We also consider the masked language model- ing (MLM) objective of Devlin et al. (2018), also known as the Cloze task (Taylor, 1953). Follow- ing Devlin et al. (2018), we sample randomly 15% of the BPE tokens from the text streams, replace them by a [MASK] token 80% of the time, by a random token 10% of the time, and we keep them unchanged 10% of the time. Differences be- tween our approach and the MLM of Devlin et al. 
(2018) include the use of text streams of an ar- bitrary number of sentences (truncated at 256 to- kens) instead of pairs of sentences. To counter the imbalance between rare and frequent tokens (e.g. punctuations or stop words), we also subsample the frequent outputs using an approach similar to Mikolov et al. (2013b): tokens in a text stream are sampled according to a multinomial distribution, whose weights are proportional to the square root of their invert frequencies. Our MLM objective is illustrated in Figure 1.\n> \n> \n> \n\n\nCompute Infrastructure\n----------------------",
"### Hardware and Software\n\n\nThe developers write:\n\n\n\n> \n> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.\n> \n> \n> \n\n\nBibTeX:\n\n\nAPA:\n\n\n* Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.\n\n\nModel Card Authors\n==================\n\n\nThis model card was written by the team at Hugging Face.\n\n\nHow to Get Started with the Model\n=================================\n\n\nThis model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details."
] |
fill-mask | transformers |
# XLM-RoBERTa (base-sized model)
XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/xlmr).
Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlm-roberta) to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
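For example, a sequence classification head can be attached and fine-tuned along the following lines; this is a sketch only, and the dataset choice and hyperparameters are illustrative assumptions.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)

# Any labeled sentence dataset works; IMDB is used here purely as an example.
dataset = load_dataset("imdb")
encoded = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

args = TrainingArguments(output_dir="xlmr-finetuned", per_device_train_batch_size=8, num_train_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=encoded["train"], tokenizer=tokenizer)
trainer.train()
```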
## Usage
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='xlm-roberta-base')
>>> unmasker("Hello I'm a <mask> model.")
[{'score': 0.10563907772302628,
'sequence': "Hello I'm a fashion model.",
'token': 54543,
'token_str': 'fashion'},
{'score': 0.08015287667512894,
'sequence': "Hello I'm a new model.",
'token': 3525,
'token_str': 'new'},
{'score': 0.033413201570510864,
'sequence': "Hello I'm a model model.",
'token': 3299,
'token_str': 'model'},
{'score': 0.030217764899134636,
'sequence': "Hello I'm a French model.",
'token': 92265,
'token_str': 'French'},
{'score': 0.026436051353812218,
'sequence': "Hello I'm a sexy model.",
'token': 17473,
'token_str': 'sexy'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
# prepare input
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
# forward pass
output = model(**encoded_input)
```
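The repository tags also list TensorFlow weights, so the same outputs can likely be obtained in TensorFlow along these lines (untested sketch):

```python
from transformers import AutoTokenizer, TFAutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = TFAutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# prepare input
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="tf")

# forward pass
output = model(encoded_input)
```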
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1911-02116,
author = {Alexis Conneau and
Kartikay Khandelwal and
Naman Goyal and
Vishrav Chaudhary and
Guillaume Wenzek and
Francisco Guzm{\'{a}}n and
Edouard Grave and
Myle Ott and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {Unsupervised Cross-lingual Representation Learning at Scale},
journal = {CoRR},
volume = {abs/1911.02116},
year = {2019},
url = {http://arxiv.org/abs/1911.02116},
eprinttype = {arXiv},
eprint = {1911.02116},
timestamp = {Mon, 11 Nov 2019 18:38:09 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=xlm-roberta-base">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", false, "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh"], "license": "mit", "tags": ["exbert"]} | FacebookAI/xlm-roberta-base | null | [
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"xlm-roberta",
"fill-mask",
"exbert",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:1911.02116",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1911.02116"
] | [
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh"
] | TAGS
#transformers #pytorch #tf #jax #onnx #safetensors #xlm-roberta #fill-mask #exbert #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #om #or #pa #pl #ps #pt #ro #ru #sa #sd #si #sk #sl #so #sq #sr #su #sv #sw #ta #te #th #tl #tr #ug #uk #ur #uz #vi #xh #yi #zh #arxiv-1911.02116 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# XLM-RoBERTa (base-sized model)
XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository.
Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
## Usage
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# XLM-RoBERTa (base-sized model) \n\nXLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository. \n\nDisclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nXLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. \n\nRoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.\n\nMore precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.\n\nThis way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.",
"## Usage\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #xlm-roberta #fill-mask #exbert #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #om #or #pa #pl #ps #pt #ro #ru #sa #sd #si #sk #sl #so #sq #sr #su #sv #sw #ta #te #th #tl #tr #ug #uk #ur #uz #vi #xh #yi #zh #arxiv-1911.02116 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# XLM-RoBERTa (base-sized model) \n\nXLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository. \n\nDisclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nXLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. \n\nRoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.\n\nMore precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.\n\nThis way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.",
"## Usage\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
fill-mask | transformers |
# xlm-roberta-large-finetuned-conll02-dutch
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) fine-tuned with the [CoNLL-2002](https://huggingface.co/datasets/conll2002) dataset in Dutch.
- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for full list; model is fine-tuned on a dataset in Dutch
- **License:** More information needed
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large)
- **Resources for more information:**
  - [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr)
  - [Associated Paper](https://arxiv.org/abs/1911.02116)
  - [CoNLL-2002 data card](https://huggingface.co/datasets/conll2002)
# Uses
## Direct Use
The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.
## Downstream Use
Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face [token classification docs](https://huggingface.co/tasks/token-classification).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
**CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the following resources for training data and training procedure details:
- [XLM-RoBERTa-large model card](https://huggingface.co/xlm-roberta-large)
- [CoNLL-2002 data card](https://huggingface.co/datasets/conll2002)
- [Associated paper](https://arxiv.org/pdf/1911.02116.pdf)
# Evaluation
See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for evaluation details.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 500 32GB Nvidia V100 GPUs (from the [associated paper](https://arxiv.org/pdf/1911.02116.pdf))
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{conneau2019unsupervised,
title={Unsupervised Cross-lingual Representation Learning at Scale},
author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1911.02116},
year={2019}
}
```
**APA:**
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly within a pipeline for NER.
<details>
<summary> Click to expand </summary>
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> from transformers import pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll02-dutch")
>>> model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll02-dutch")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Mijn naam is Emma en ik woon in Londen.")
[{'end': 17,
'entity': 'B-PER',
'index': 4,
'score': 0.9999807,
'start': 13,
'word': '▁Emma'},
{'end': 36,
'entity': 'B-LOC',
'index': 9,
'score': 0.9999871,
'start': 32,
'word': '▁Lond'}]
```
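
In the raw output above, SentencePiece tokenization leaves entities split into pieces such as `▁Lond` for "Londen". If whole-entity spans are preferred, the pipeline can group the pieces for you. The following is a minimal sketch, assuming a reasonably recent `transformers` release in which the `aggregation_strategy` argument is available:

```python
>>> from transformers import pipeline
>>> # Merge SentencePiece pieces (e.g. "▁Lond" + "en") into whole entities such as "Londen".
>>> grouped = pipeline("ner",
...                    model="xlm-roberta-large-finetuned-conll02-dutch",
...                    aggregation_strategy="simple")
>>> grouped("Mijn naam is Emma en ik woon in Londen.")  # dicts keyed by 'entity_group', 'word', 'score', 'start', 'end'
```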
</details>
| {"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", false, "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh"]} | FacebookAI/xlm-roberta-large-finetuned-conll02-dutch | null | [
"transformers",
"pytorch",
"rust",
"xlm-roberta",
"fill-mask",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:1911.02116",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1911.02116",
"1910.09700"
] | [
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh"
] | TAGS
#transformers #pytorch #rust #xlm-roberta #fill-mask #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #om #or #pa #pl #ps #pt #ro #ru #sa #sd #si #sk #sl #so #sq #sr #su #sv #sw #ta #te #th #tl #tr #ug #uk #ur #uz #vi #xh #yi #zh #arxiv-1911.02116 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# xlm-roberta-large-finetuned-conll02-dutch
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
# Model Details
## Model Description
The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the CoNLL-2002 dataset in Dutch.
- Developed by: See associated paper
- Model type: Multi-lingual language model
- Language(s) (NLP): XLM-RoBERTa is a multilingual model trained on 100 different languages; see GitHub Repo for full list; model is fine-tuned on a dataset in Dutch
- License: More information needed
- Related Models: RoBERTa, XLM
- Parent Model: XLM-RoBERTa-large
- Resources for more information:
  - GitHub Repo
  - Associated Paper
  - CoNLL-2002 data card
# Uses
## Direct Use
The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.
## Downstream Use
Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the following resources for training data and training procedure details:
- XLM-RoBERTa-large model card
- CoNLL-2002 data card
- Associated paper
# Evaluation
See the associated paper for evaluation details.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: 500 32GB Nvidia V100 GPUs (from the associated paper)
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications
See the associated paper for further details.
BibTeX:
APA:
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly within a pipeline for NER.
<details>
<summary> Click to expand </summary>
</details>
| [
"# xlm-roberta-large-finetuned-conll02-dutch",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the CoNLL-2002 dataset in Dutch.\n\n- Developed by: See associated paper\n- Model type: Multi-lingual language model\n- Language(s) (NLP): XLM-RoBERTa is a multilingual model trained on 100 different languages; see GitHub Repo for full list; model is fine-tuned on a dataset in Dutch\n- License: More information needed\n- Related Models: RoBERTa, XLM\n - Parent Model: XLM-RoBERTa-large\n- Resources for more information: \n -GitHub Repo\n -Associated Paper\n -CoNLL-2002 data card",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.",
"## Downstream Use\n\nPotential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nCONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nSee the following resources for training data and training procedure details: \n- XLM-RoBERTa-large model card\n- CoNLL-2002 data card\n- Associated paper",
"# Evaluation\n\nSee the associated paper for evaluation details.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: 500 32GB Nvidia V100 GPUs (from the associated paper)\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model. You can use this model directly within a pipeline for NER.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] | [
"TAGS\n#transformers #pytorch #rust #xlm-roberta #fill-mask #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #om #or #pa #pl #ps #pt #ro #ru #sa #sd #si #sk #sl #so #sq #sr #su #sv #sw #ta #te #th #tl #tr #ug #uk #ur #uz #vi #xh #yi #zh #arxiv-1911.02116 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# xlm-roberta-large-finetuned-conll02-dutch",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the CoNLL-2002 dataset in Dutch.\n\n- Developed by: See associated paper\n- Model type: Multi-lingual language model\n- Language(s) (NLP): XLM-RoBERTa is a multilingual model trained on 100 different languages; see GitHub Repo for full list; model is fine-tuned on a dataset in Dutch\n- License: More information needed\n- Related Models: RoBERTa, XLM\n - Parent Model: XLM-RoBERTa-large\n- Resources for more information: \n -GitHub Repo\n -Associated Paper\n -CoNLL-2002 data card",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.",
"## Downstream Use\n\nPotential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nCONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nSee the following resources for training data and training procedure details: \n- XLM-RoBERTa-large model card\n- CoNLL-2002 data card\n- Associated paper",
"# Evaluation\n\nSee the associated paper for evaluation details.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: 500 32GB Nvidia V100 GPUs (from the associated paper)\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model. You can use this model directly within a pipeline for NER.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] |
fill-mask | transformers |
# xlm-roberta-large-finetuned-conll02-spanish
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) fine-tuned with the [CoNLL-2002](https://huggingface.co/datasets/conll2002) dataset in Spanish.
- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for full list; model is fine-tuned on a dataset in Spanish.
- **License:** More information needed
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large)
- **Resources for more information:**
  - [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr)
  - [Associated Paper](https://arxiv.org/abs/1911.02116)
  - [CoNLL-2002 data card](https://huggingface.co/datasets/conll2002)
# Uses
## Direct Use
The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.
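
As a minimal sketch of what this per-token label assignment looks like in practice (assuming PyTorch and a recent `transformers` release; the pipeline example further down wraps the same steps), one label can be read off for each sub-token by taking the argmax over the model's logits and using the model's built-in `id2label` mapping:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "xlm-roberta-large-finetuned-conll02-spanish"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

inputs = tokenizer("Efectuaba un vuelo entre bombay y nueva york.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, sequence_length, num_labels)

predicted_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label_id in zip(tokens, predicted_ids):
    # Special tokens (<s>, </s>) also receive a label; they are usually ignored downstream.
    print(token, model.config.id2label[int(label_id)])
```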
## Downstream Use
Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face [token classification docs](https://huggingface.co/tasks/token-classification).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
**CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the following resources for training data and training procedure details:
- [XLM-RoBERTa-large model card](https://huggingface.co/xlm-roberta-large)
- [CoNLL-2002 data card](https://huggingface.co/datasets/conll2002)
- [Associated paper](https://arxiv.org/pdf/1911.02116.pdf)
# Evaluation
See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for evaluation details.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 500 32GB Nvidia V100 GPUs (from the [associated paper](https://arxiv.org/pdf/1911.02116.pdf))
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{conneau2019unsupervised,
title={Unsupervised Cross-lingual Representation Learning at Scale},
author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1911.02116},
year={2019}
}
```
**APA:**
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly within a pipeline for NER.
<details>
<summary> Click to expand </summary>
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> from transformers import pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll02-spanish")
>>> model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll02-spanish")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Efectuaba un vuelo entre bombay y nueva york.")
[{'end': 30,
'entity': 'B-LOC',
'index': 7,
'score': 0.95703226,
'start': 25,
'word': '▁bomba'},
{'end': 39,
'entity': 'B-LOC',
'index': 10,
'score': 0.9771854,
'start': 34,
'word': '▁nueva'},
{'end': 43,
'entity': 'I-LOC',
'index': 11,
'score': 0.9914097,
'start': 40,
'word': '▁yor'}]
```
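
The `start` and `end` fields above are character offsets into the input string. Reusing the `classifier` defined above and the example output shown, the short sketch below maps each prediction back to the exact span it covers; note the spans are still sub-token level (e.g. "bomba", "nueva", "yor") unless the pieces are grouped.

```python
>>> # 'start'/'end' are character offsets into the input string, so each prediction can be
>>> # mapped back to the exact span it covers (still sub-token pieces here: 'bomba', 'nueva', 'yor').
>>> text = "Efectuaba un vuelo entre bombay y nueva york."
>>> [(e["entity"], text[e["start"]:e["end"]]) for e in classifier(text)]
```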
</details>
| {"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", false, "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh"]} | FacebookAI/xlm-roberta-large-finetuned-conll02-spanish | null | [
"transformers",
"pytorch",
"rust",
"safetensors",
"xlm-roberta",
"fill-mask",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:1911.02116",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1911.02116",
"1910.09700"
] | [
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh"
] | TAGS
#transformers #pytorch #rust #safetensors #xlm-roberta #fill-mask #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #om #or #pa #pl #ps #pt #ro #ru #sa #sd #si #sk #sl #so #sq #sr #su #sv #sw #ta #te #th #tl #tr #ug #uk #ur #uz #vi #xh #yi #zh #arxiv-1911.02116 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# xlm-roberta-large-finetuned-conll02-spanish
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
# Model Details
## Model Description
The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the CoNLL-2002 dataset in Spanish.
- Developed by: See associated paper
- Model type: Multi-lingual language model
- Language(s) (NLP): XLM-RoBERTa is a multilingual model trained on 100 different languages; see GitHub Repo for full list; model is fine-tuned on a dataset in Spanish.
- License: More information needed
- Related Models: RoBERTa, XLM
- Parent Model: XLM-RoBERTa-large
- Resources for more information:
  - GitHub Repo
  - Associated Paper
  - CoNLL-2002 data card
# Uses
## Direct Use
The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.
## Downstream Use
Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the following resources for training data and training procedure details:
- XLM-RoBERTa-large model card
- CoNLL-2002 data card
- Associated paper
# Evaluation
See the associated paper for evaluation details.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: 500 32GB Nvidia V100 GPUs (from the associated paper)
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications
See the associated paper for further details.
BibTeX:
APA:
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly within a pipeline for NER.
<details>
<summary> Click to expand </summary>
</details>
| [
"# xlm-roberta-large-finetuned-conll02-spanish",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the CoNLL-2002 dataset in Spanish.\n\n- Developed by: See associated paper\n- Model type: Multi-lingual language model\n- Language(s) (NLP): XLM-RoBERTa is a multilingual model trained on 100 different languages; see GitHub Repo for full list; model is fine-tuned on a dataset in Spanish.\n- License: More information needed\n- Related Models: RoBERTa, XLM\n - Parent Model: XLM-RoBERTa-large\n- Resources for more information: \n -GitHub Repo\n -Associated Paper\n -CoNLL-2002 data card",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.",
"## Downstream Use\n\nPotential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nCONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nSee the following resources for training data and training procedure details: \n- XLM-RoBERTa-large model card\n- CoNLL-2002 data card\n- Associated paper",
"# Evaluation\n\nSee the associated paper for evaluation details.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: 500 32GB Nvidia V100 GPUs (from the associated paper)\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model. You can use this model directly within a pipeline for NER.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] | [
"TAGS\n#transformers #pytorch #rust #safetensors #xlm-roberta #fill-mask #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #om #or #pa #pl #ps #pt #ro #ru #sa #sd #si #sk #sl #so #sq #sr #su #sv #sw #ta #te #th #tl #tr #ug #uk #ur #uz #vi #xh #yi #zh #arxiv-1911.02116 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# xlm-roberta-large-finetuned-conll02-spanish",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the CoNLL-2002 dataset in Spanish.\n\n- Developed by: See associated paper\n- Model type: Multi-lingual language model\n- Language(s) (NLP): XLM-RoBERTa is a multilingual model trained on 100 different languages; see GitHub Repo for full list; model is fine-tuned on a dataset in Spanish.\n- License: More information needed\n- Related Models: RoBERTa, XLM\n - Parent Model: XLM-RoBERTa-large\n- Resources for more information: \n -GitHub Repo\n -Associated Paper\n -CoNLL-2002 data card",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.",
"## Downstream Use\n\nPotential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nCONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nSee the following resources for training data and training procedure details: \n- XLM-RoBERTa-large model card\n- CoNLL-2002 data card\n- Associated paper",
"# Evaluation\n\nSee the associated paper for evaluation details.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: 500 32GB Nvidia V100 GPUs (from the associated paper)\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model. You can use this model directly within a pipeline for NER.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] |
token-classification | transformers |
# xlm-roberta-large-finetuned-conll03-english
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) fine-tuned with the [conll2003](https://huggingface.co/datasets/conll2003) dataset in English.
- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for full list; model is fine-tuned on a dataset in English
- **License:** More information needed
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large)
- **Resources for more information:**
  - [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr)
  - [Associated Paper](https://arxiv.org/abs/1911.02116)
# Uses
## Direct Use
The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.
## Downstream Use
Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face [token classification docs](https://huggingface.co/tasks/token-classification).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
**CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). In the context of tasks relevant to this model, [Mishra et al. (2020)](https://arxiv.org/pdf/2008.03415.pdf) explore social biases in NER systems for English and find that there is systematic bias in existing NER systems in that they fail to identify named entities from different demographic groups (though this paper did not look at BERT). For example, using a sample sentence from [Mishra et al. (2020)](https://arxiv.org/pdf/2008.03415.pdf):
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll03-english")
>>> model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll03-english")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Alya told Jasmine that Andrew could pay with cash..")
[{'end': 2,
'entity': 'I-PER',
'index': 1,
'score': 0.9997861,
'start': 0,
'word': '▁Al'},
{'end': 4,
'entity': 'I-PER',
'index': 2,
'score': 0.9998591,
'start': 2,
'word': 'ya'},
{'end': 16,
'entity': 'I-PER',
'index': 4,
'score': 0.99995816,
'start': 10,
'word': '▁Jasmin'},
{'end': 17,
'entity': 'I-PER',
'index': 5,
'score': 0.9999584,
'start': 16,
'word': 'e'},
{'end': 29,
'entity': 'I-PER',
'index': 7,
'score': 0.99998057,
'start': 23,
'word': '▁Andrew'}]
```
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the following resources for training data and training procedure details:
- [XLM-RoBERTa-large model card](https://huggingface.co/xlm-roberta-large)
- [CoNLL-2003 data card](https://huggingface.co/datasets/conll2003)
- [Associated paper](https://arxiv.org/pdf/1911.02116.pdf)
# Evaluation
See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for evaluation details.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 500 32GB Nvidia V100 GPUs (from the [associated paper](https://arxiv.org/pdf/1911.02116.pdf))
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
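
As a rough illustration of how such an estimate is assembled (energy consumed multiplied by grid carbon intensity, the approach behind the calculator linked above), the sketch below takes the GPU count from the hardware line together with placeholder values for power draw, hours, PUE and carbon intensity. The actual hours and compute region are unknown, so the printed figure is illustrative only, not a reported result.

```python
# Back-of-the-envelope CO2 estimate: energy used (kWh) x grid carbon intensity (kgCO2eq/kWh).
# Every numeric value below except the GPU count is a placeholder assumption, not a reported figure.
num_gpus = 500          # hardware line above: 500 x 32GB Nvidia V100
gpu_power_kw = 0.3      # ~300 W per V100 (assumed TDP)
hours = 24 * 7          # hypothetical duration; the real "Hours used" is unknown
pue = 1.1               # hypothetical datacenter power usage effectiveness
carbon_intensity = 0.4  # hypothetical grid average, kgCO2eq per kWh

energy_kwh = num_gpus * gpu_power_kw * hours * pue
co2_kg = energy_kwh * carbon_intensity
print(f"~{energy_kwh:,.0f} kWh -> ~{co2_kg / 1000:.1f} tCO2eq (illustrative only)")
```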
# Technical Specifications
See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{conneau2019unsupervised,
title={Unsupervised Cross-lingual Representation Learning at Scale},
author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1911.02116},
year={2019}
}
```
**APA:**
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly within a pipeline for NER.
<details>
<summary> Click to expand </summary>
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> from transformers import pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll03-english")
>>> model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll03-english")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Hello I'm Omar and I live in Zürich.")
[{'end': 14,
'entity': 'I-PER',
'index': 5,
'score': 0.9999175,
'start': 10,
'word': '▁Omar'},
{'end': 35,
'entity': 'I-LOC',
'index': 10,
'score': 0.9999906,
'start': 29,
'word': '▁Zürich'}]
```
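
If a GPU is available, the same pipeline can be placed on it and fed several sentences at once. This sketch assumes a CUDA device and a recent `transformers` release; it returns one list of entity dictionaries per input sentence.

```python
>>> from transformers import pipeline
>>> # Optional: place the pipeline on the first CUDA device and process a small batch.
>>> # device=0 assumes a GPU is available; omit it to stay on CPU.
>>> gpu_classifier = pipeline("ner",
...                           model="xlm-roberta-large-finetuned-conll03-english",
...                           device=0)
>>> gpu_classifier(["Hello I'm Omar and I live in Zürich.",
...                 "My name is Sarah and I work in London."])  # one list of entity dicts per input
```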
</details> | {"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", false, "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh"]} | FacebookAI/xlm-roberta-large-finetuned-conll03-english | null | [
"transformers",
"pytorch",
"rust",
"onnx",
"safetensors",
"xlm-roberta",
"token-classification",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:1911.02116",
"arxiv:2008.03415",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1911.02116",
"2008.03415",
"1910.09700"
] | [
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh"
] | TAGS
#transformers #pytorch #rust #onnx #safetensors #xlm-roberta #token-classification #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #om #or #pa #pl #ps #pt #ro #ru #sa #sd #si #sk #sl #so #sq #sr #su #sv #sw #ta #te #th #tl #tr #ug #uk #ur #uz #vi #xh #yi #zh #arxiv-1911.02116 #arxiv-2008.03415 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# xlm-roberta-large-finetuned-conll03-english
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
# Model Details
## Model Description
The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the conll2003 dataset in English.
- Developed by: See associated paper
- Model type: Multi-lingual language model
- Language(s) (NLP) or Countries (images): XLM-RoBERTa is a multilingual model trained on 100 different languages; see GitHub Repo for full list; model is fine-tuned on a dataset in English
- License: More information needed
- Related Models: RoBERTa, XLM
- Parent Model: XLM-RoBERTa-large
- Resources for more information:
  - GitHub Repo
  - Associated Paper
# Uses
## Direct Use
The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.
## Downstream Use
Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). In the context of tasks relevant to this model, Mishra et al. (2020) explore social biases in NER systems for English and find that there is systematic bias in existing NER systems in that they fail to identify named entities from different demographic groups (though this paper did not look at BERT). For example, using a sample sentence from Mishra et al. (2020):
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the following resources for training data and training procedure details:
- XLM-RoBERTa-large model card
- CoNLL-2003 data card
- Associated paper
# Evaluation
See the associated paper for evaluation details.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: 500 32GB Nvidia V100 GPUs (from the associated paper)
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications
See the associated paper for further details.
BibTeX:
APA:
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly within a pipeline for NER.
<details>
<summary> Click to expand </summary>
</details> | [
"# xlm-roberta-large-finetuned-conll03-english",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the conll2003 dataset in English.\n\n- Developed by: See associated paper\n- Model type: Multi-lingual language model\n- Language(s) (NLP) or Countries (images): XLM-RoBERTa is a multilingual model trained on 100 different languages; see GitHub Repo for full list; model is fine-tuned on a dataset in English\n- License: More information needed\n- Related Models: RoBERTa, XLM\n - Parent Model: XLM-RoBERTa-large\n- Resources for more information: \n -GitHub Repo\n -Associated Paper",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.",
"## Downstream Use\n\nPotential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nCONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). In the context of tasks relevant to this model, Mishra et al. (2020) explore social biases in NER systems for English and find that there is systematic bias in existing NER systems in that they fail to identify named entities from different demographic groups (though this paper did not look at BERT). For example, using a sample sentence from Mishra et al. (2020):",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nSee the following resources for training data and training procedure details: \n- XLM-RoBERTa-large model card\n- CoNLL-2003 data card\n- Associated paper",
"# Evaluation\n\nSee the associated paper for evaluation details.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: 500 32GB Nvidia V100 GPUs (from the associated paper)\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model. You can use this model directly within a pipeline for NER.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] | [
"TAGS\n#transformers #pytorch #rust #onnx #safetensors #xlm-roberta #token-classification #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #om #or #pa #pl #ps #pt #ro #ru #sa #sd #si #sk #sl #so #sq #sr #su #sv #sw #ta #te #th #tl #tr #ug #uk #ur #uz #vi #xh #yi #zh #arxiv-1911.02116 #arxiv-2008.03415 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# xlm-roberta-large-finetuned-conll03-english",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the conll2003 dataset in English.\n\n- Developed by: See associated paper\n- Model type: Multi-lingual language model\n- Language(s) (NLP) or Countries (images): XLM-RoBERTa is a multilingual model trained on 100 different languages; see GitHub Repo for full list; model is fine-tuned on a dataset in English\n- License: More information needed\n- Related Models: RoBERTa, XLM\n - Parent Model: XLM-RoBERTa-large\n- Resources for more information: \n -GitHub Repo\n -Associated Paper",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.",
"## Downstream Use\n\nPotential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nCONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). In the context of tasks relevant to this model, Mishra et al. (2020) explore social biases in NER systems for English and find that there is systematic bias in existing NER systems in that they fail to identify named entities from different demographic groups (though this paper did not look at BERT). For example, using a sample sentence from Mishra et al. (2020):",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nSee the following resources for training data and training procedure details: \n- XLM-RoBERTa-large model card\n- CoNLL-2003 data card\n- Associated paper",
"# Evaluation\n\nSee the associated paper for evaluation details.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: 500 32GB Nvidia V100 GPUs (from the associated paper)\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model. You can use this model directly within a pipeline for NER.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] |
token-classification | transformers |
# xlm-roberta-large-finetuned-conll03-german
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) fine-tuned with the [conll2003](https://huggingface.co/datasets/conll2003) dataset in German.
- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for full list; model is fine-tuned on a dataset in German
- **License:** More information needed
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large)
- **Resources for more information:**
  - [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr)
  - [Associated Paper](https://arxiv.org/abs/1911.02116)
# Uses
## Direct Use
The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.
## Downstream Use
Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face [token classification docs](https://huggingface.co/tasks/token-classification).
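As a purely illustrative sketch of such downstream adaptation, the snippet below re-uses this checkpoint as the starting point for a different token-classification label set (a small PoS tag inventory); the tag list, the need for your own labelled data, and the replaced classification head are all assumptions rather than part of this release. The ready-to-use NER pipeline example appears in the How to Get Started section below.

```python
# Hypothetical adaptation sketch: swapping the NER head for a new token-classification
# label set (here an assumed PoS tag inventory). Fine-tuning data is not provided here.
from transformers import AutoTokenizer, AutoModelForTokenClassification

pos_labels = ["NOUN", "VERB", "ADJ", "ADV", "PRON", "DET", "ADP", "PUNCT"]  # assumed tags

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll03-german")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-large-finetuned-conll03-german",
    num_labels=len(pos_labels),
    id2label=dict(enumerate(pos_labels)),
    label2id={label: i for i, label in enumerate(pos_labels)},
    ignore_mismatched_sizes=True,  # discard the CoNLL NER head and initialise a new one
)
# From here, fine-tune on your own PoS-tagged corpus, e.g. with the Trainer API.
```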
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
**CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the following resources for training data and training procedure details:
- [XLM-RoBERTa-large model card](https://huggingface.co/xlm-roberta-large)
- [CoNLL-2003 data card](https://huggingface.co/datasets/conll2003)
- [Associated paper](https://arxiv.org/pdf/1911.02116.pdf)
# Evaluation
See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for evaluation details.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 500 32GB Nvidia V100 GPUs (from the [associated paper](https://arxiv.org/pdf/1911.02116.pdf))
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
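As a rough, hedged illustration of how such an estimate is usually formed (following the formula popularised by Lacoste et al. (2019)), the sketch below multiplies hardware power draw by training time, data-centre overhead, and grid carbon intensity. Apart from the GPU count taken from the paper, every number is a placeholder assumption, and the training hours are deliberately left unset because they are not reported.

```python
# Back-of-envelope CO2e estimate: emissions = GPUs x power(kW) x hours x PUE x grid intensity.
num_gpus = 500            # from the associated paper
gpu_power_kw = 0.3        # assumed ~300 W draw per 32GB V100 under load
pue = 1.58                # assumed data-centre power usage effectiveness
carbon_intensity = 0.432  # assumed kg CO2e per kWh (placeholder grid average)
training_hours = None     # not reported ("More information needed")

if training_hours is not None:
    emissions_kg = num_gpus * gpu_power_kw * training_hours * pue * carbon_intensity
    print(f"Estimated emissions: {emissions_kg:.0f} kg CO2e")
```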
# Technical Specifications
See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{conneau2019unsupervised,
title={Unsupervised Cross-lingual Representation Learning at Scale},
author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1911.02116},
year={2019}
}
```
**APA:**
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly within a pipeline for NER.
<details>
<summary> Click to expand </summary>
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> from transformers import pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll03-german")
>>> model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll03-german")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Bayern München ist wieder alleiniger Top-Favorit auf den Gewinn der deutschen Fußball-Meisterschaft.")
[{'end': 6,
'entity': 'I-ORG',
'index': 1,
'score': 0.99999166,
'start': 0,
'word': '▁Bayern'},
{'end': 14,
'entity': 'I-ORG',
'index': 2,
'score': 0.999987,
'start': 7,
'word': '▁München'},
{'end': 77,
'entity': 'I-MISC',
'index': 16,
'score': 0.9999728,
'start': 68,
'word': '▁deutschen'}]
```
</details>
| {"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", false, "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh"]} | FacebookAI/xlm-roberta-large-finetuned-conll03-german | null | [
"transformers",
"pytorch",
"rust",
"onnx",
"xlm-roberta",
"token-classification",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:1911.02116",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1911.02116",
"1910.09700"
] | [
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh"
] | TAGS
#transformers #pytorch #rust #onnx #xlm-roberta #token-classification #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #om #or #pa #pl #ps #pt #ro #ru #sa #sd #si #sk #sl #so #sq #sr #su #sv #sw #ta #te #th #tl #tr #ug #uk #ur #uz #vi #xh #yi #zh #arxiv-1911.02116 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# xlm-roberta-large-finetuned-conll03-german
# Table of Contents
1. Model Details
2. Uses
3. Bias, Risks, and Limitations
4. Training
5. Evaluation
6. Environmental Impact
7. Technical Specifications
8. Citation
9. Model Card Authors
10. How To Get Started With the Model
# Model Details
## Model Description
The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the conll2003 dataset in German.
- Developed by: See associated paper
- Model type: Multi-lingual language model
- Language(s) (NLP): XLM-RoBERTa is a multilingual model trained on 100 different languages; see GitHub Repo for full list; model is fine-tuned on a dataset in German
- License: More information needed
- Related Models: RoBERTa, XLM
- Parent Model: XLM-RoBERTa-large
- Resources for more information:
  - GitHub Repo
  - Associated Paper
# Uses
## Direct Use
The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.
## Downstream Use
Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the following resources for training data and training procedure details:
- XLM-RoBERTa-large model card
- CoNLL-2003 data card
- Associated paper
# Evaluation
See the associated paper for evaluation details.
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: 500 32GB Nvidia V100 GPUs (from the associated paper)
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications
See the associated paper for further details.
BibTeX:
APA:
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly within a pipeline for NER.
<details>
<summary> Click to expand </summary>
</details>
| [
"# xlm-roberta-large-finetuned-conll03-german",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the conll2003 dataset in German.\n\n- Developed by: See associated paper\n- Model type: Multi-lingual language model\n- Language(s) (NLP): XLM-RoBERTa is a multilingual model trained on 100 different languages; see GitHub Repo for full list; model is fine-tuned on a dataset in German\n- License: More information needed\n- Related Models: RoBERTa, XLM\n - Parent Model: XLM-RoBERTa-large\n- Resources for more information: \n -GitHub Repo\n -Associated Paper",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.",
"## Downstream Use\n\nPotential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nCONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nSee the following resources for training data and training procedure details: \n- XLM-RoBERTa-large model card\n- CoNLL-2003 data card\n- Associated paper",
"# Evaluation\n\nSee the associated paper for evaluation details.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: 500 32GB Nvidia V100 GPUs (from the associated paper)\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model. You can use this model directly within a pipeline for NER.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] | [
"TAGS\n#transformers #pytorch #rust #onnx #xlm-roberta #token-classification #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #om #or #pa #pl #ps #pt #ro #ru #sa #sd #si #sk #sl #so #sq #sr #su #sv #sw #ta #te #th #tl #tr #ug #uk #ur #uz #vi #xh #yi #zh #arxiv-1911.02116 #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# xlm-roberta-large-finetuned-conll03-german",
"# Table of Contents\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Training\n5. Evaluation\n6. Environmental Impact\n7. Technical Specifications\n8. Citation\n9. Model Card Authors\n10. How To Get Started With the Model",
"# Model Details",
"## Model Description\n\nThe XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the conll2003 dataset in German.\n\n- Developed by: See associated paper\n- Model type: Multi-lingual language model\n- Language(s) (NLP): XLM-RoBERTa is a multilingual model trained on 100 different languages; see GitHub Repo for full list; model is fine-tuned on a dataset in German\n- License: More information needed\n- Related Models: RoBERTa, XLM\n - Parent Model: XLM-RoBERTa-large\n- Resources for more information: \n -GitHub Repo\n -Associated Paper",
"# Uses",
"## Direct Use\n\nThe model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.",
"## Downstream Use\n\nPotential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs.",
"## Out-of-Scope Use\n\nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n\nCONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).",
"## Recommendations\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"# Training\n\nSee the following resources for training data and training procedure details: \n- XLM-RoBERTa-large model card\n- CoNLL-2003 data card\n- Associated paper",
"# Evaluation\n\nSee the associated paper for evaluation details.",
"# Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: 500 32GB Nvidia V100 GPUs (from the associated paper)\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications\n\nSee the associated paper for further details.\n\nBibTeX:\n\n\n\nAPA:\n- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"# Model Card Authors\n\nThis model card was written by the team at Hugging Face.",
"# How to Get Started with the Model\n\nUse the code below to get started with the model. You can use this model directly within a pipeline for NER.\n\n<details>\n<summary> Click to expand </summary>\n\n\n\n</details>"
] |
fill-mask | transformers |
# XLM-RoBERTa (large-sized model)
XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/xlmr).
Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.
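A minimal sketch of that feature-based recipe is shown below; the pooling choice (the `<s>` token), the linear head, and the binary label count are illustrative assumptions and do not ship with the model.

```python
# Illustrative sketch: frozen XLM-RoBERTa features feeding a small, trainable classifier head.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
encoder = AutoModel.from_pretrained("xlm-roberta-large")
encoder.eval()  # encoder kept frozen here; only the head below would be trained

classifier = torch.nn.Linear(encoder.config.hidden_size, 2)  # assumed binary task

texts = ["Ein Beispielsatz.", "Another example sentence."]
batch = tokenizer(texts, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, hidden_size)
features = hidden[:, 0]        # representation of the <s> token as the sentence feature
logits = classifier(features)  # train this head on your labelled sentences
```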
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlm-roberta) to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
## Usage
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='xlm-roberta-large')
>>> unmasker("Hello I'm a <mask> model.")
[{'score': 0.10563907772302628,
'sequence': "Hello I'm a fashion model.",
'token': 54543,
'token_str': 'fashion'},
{'score': 0.08015287667512894,
'sequence': "Hello I'm a new model.",
'token': 3525,
'token_str': 'new'},
{'score': 0.033413201570510864,
'sequence': "Hello I'm a model model.",
'token': 3299,
'token_str': 'model'},
{'score': 0.030217764899134636,
'sequence': "Hello I'm a French model.",
'token': 92265,
'token_str': 'French'},
{'score': 0.026436051353812218,
'sequence': "Hello I'm a sexy model.",
'token': 17473,
'token_str': 'sexy'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-large")
# prepare input
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
# forward pass
output = model(**encoded_input)
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1911-02116,
author = {Alexis Conneau and
Kartikay Khandelwal and
Naman Goyal and
Vishrav Chaudhary and
Guillaume Wenzek and
Francisco Guzm{\'{a}}n and
Edouard Grave and
Myle Ott and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {Unsupervised Cross-lingual Representation Learning at Scale},
journal = {CoRR},
volume = {abs/1911.02116},
year = {2019},
url = {http://arxiv.org/abs/1911.02116},
eprinttype = {arXiv},
eprint = {1911.02116},
timestamp = {Mon, 11 Nov 2019 18:38:09 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=xlm-roberta-base">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", false, "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh"], "license": "mit", "tags": ["exbert"]} | FacebookAI/xlm-roberta-large | null | [
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"xlm-roberta",
"fill-mask",
"exbert",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:1911.02116",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1911.02116"
] | [
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh"
] | TAGS
#transformers #pytorch #tf #jax #onnx #safetensors #xlm-roberta #fill-mask #exbert #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #om #or #pa #pl #ps #pt #ro #ru #sa #sd #si #sk #sl #so #sq #sr #su #sv #sw #ta #te #th #tl #tr #ug #uk #ur #uz #vi #xh #yi #zh #arxiv-1911.02116 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# XLM-RoBERTa (large-sized model)
XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository.
Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
## Usage
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# XLM-RoBERTa (large-sized model) \n\nXLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository. \n\nDisclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nXLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. \n\nRoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.\n\nMore precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.\n\nThis way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.",
"## Usage\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #xlm-roberta #fill-mask #exbert #multilingual #af #am #ar #as #az #be #bg #bn #br #bs #ca #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #hu #hy #id #is #it #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #om #or #pa #pl #ps #pt #ro #ru #sa #sd #si #sk #sl #so #sq #sr #su #sv #sw #ta #te #th #tl #tr #ug #uk #ur #uz #vi #xh #yi #zh #arxiv-1911.02116 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# XLM-RoBERTa (large-sized model) \n\nXLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository. \n\nDisclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nXLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. \n\nRoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.\n\nMore precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.\n\nThis way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.",
"## Usage\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] |
text-generation | transformers |
# XLNet (base-sized model)
XLNet model pre-trained on English language. It was introduced in the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Yang et al. and first released in [this repository](https://github.com/zihangdai/xlnet/).
Disclaimer: The team releasing XLNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.
## Intended uses & limitations
The model is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlnet) to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
## Usage
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import XLNetTokenizer, XLNetModel
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetModel.from_pretrained('xlnet-base-cased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
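Because the checkpoint is mostly intended to be fine-tuned (see Intended uses above), a minimal sketch of attaching a task head is shown below; the two-label setup is an assumption and the classification head is randomly initialised, so it must be fine-tuned before its outputs mean anything.

```python
# Sketch only: XLNet with a fresh sequence-classification head (assumed 2 labels).
from transformers import XLNetTokenizer, XLNetForSequenceClassification

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetForSequenceClassification.from_pretrained('xlnet-base-cased', num_labels=2)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # from an untrained head: fine-tune before trusting these scores
```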
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1906-08237,
author = {Zhilin Yang and
Zihang Dai and
Yiming Yang and
Jaime G. Carbonell and
Ruslan Salakhutdinov and
Quoc V. Le},
title = {XLNet: Generalized Autoregressive Pretraining for Language Understanding},
journal = {CoRR},
volume = {abs/1906.08237},
year = {2019},
url = {http://arxiv.org/abs/1906.08237},
eprinttype = {arXiv},
eprint = {1906.08237},
timestamp = {Mon, 24 Jun 2019 17:28:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1906-08237.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| {"language": "en", "license": "mit", "datasets": ["bookcorpus", "wikipedia"]} | xlnet/xlnet-base-cased | null | [
"transformers",
"pytorch",
"tf",
"rust",
"xlnet",
"text-generation",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1906.08237",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1906.08237"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #rust #xlnet #text-generation #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1906.08237 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# XLNet (base-sized model)
XLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository.
Disclaimer: The team releasing XLNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.
## Intended uses & limitations
The model is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
## Usage
Here is how to use this model to get the features of a given text in PyTorch:
### BibTeX entry and citation info
| [
"# XLNet (base-sized model) \n\nXLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository. \n\nDisclaimer: The team releasing XLNet did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nXLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.",
"## Intended uses & limitations\n\nThe model is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.",
"## Usage\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #rust #xlnet #text-generation #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1906.08237 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# XLNet (base-sized model) \n\nXLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository. \n\nDisclaimer: The team releasing XLNet did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nXLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.",
"## Intended uses & limitations\n\nThe model is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.",
"## Usage\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### BibTeX entry and citation info"
] |
text-generation | transformers |
# XLNet (large-sized model)
XLNet model pre-trained on English language. It was introduced in the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Yang et al. and first released in [this repository](https://github.com/zihangdai/xlnet/).
Disclaimer: The team releasing XLNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.
## Intended uses & limitations
The model is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlnet) to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
## Usage
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import XLNetTokenizer, XLNetModel
tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetModel.from_pretrained('xlnet-large-cased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1906-08237,
author = {Zhilin Yang and
Zihang Dai and
Yiming Yang and
Jaime G. Carbonell and
Ruslan Salakhutdinov and
Quoc V. Le},
title = {XLNet: Generalized Autoregressive Pretraining for Language Understanding},
journal = {CoRR},
volume = {abs/1906.08237},
year = {2019},
url = {http://arxiv.org/abs/1906.08237},
eprinttype = {arXiv},
eprint = {1906.08237},
timestamp = {Mon, 24 Jun 2019 17:28:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1906-08237.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| {"language": "en", "license": "mit", "datasets": ["bookcorpus", "wikipedia"]} | xlnet/xlnet-large-cased | null | [
"transformers",
"pytorch",
"tf",
"xlnet",
"text-generation",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1906.08237",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1906.08237"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #xlnet #text-generation #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1906.08237 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# XLNet (large-sized model)
XLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository.
Disclaimer: The team releasing XLNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.
## Intended uses & limitations
The model is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
## Usage
Here is how to use this model to get the features of a given text in PyTorch:
### BibTeX entry and citation info
| [
"# XLNet (large-sized model) \n\nXLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository. \n\nDisclaimer: The team releasing XLNet did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nXLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.",
"## Intended uses & limitations\n\nThe model is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.",
"## Usage\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #xlnet #text-generation #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1906.08237 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# XLNet (large-sized model) \n\nXLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository. \n\nDisclaimer: The team releasing XLNet did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nXLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.",
"## Intended uses & limitations\n\nThe model is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.",
"## Usage\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### BibTeX entry and citation info"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set (see the metric sketch after this list):
- Loss: 0.7580
- Matthews Correlation: 0.5406
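For reference, the Matthews correlation coefficient reported above can be computed from predictions and gold labels with standard tooling; the toy arrays below are placeholders, not actual CoLA outputs.

```python
# Illustration of the reported metric (Matthews correlation coefficient, range -1 to 1).
from sklearn.metrics import matthews_corrcoef

y_true = [1, 0, 1, 1, 0, 1]  # placeholder gold labels
y_pred = [1, 0, 0, 1, 0, 1]  # placeholder model predictions
print(matthews_corrcoef(y_true, y_pred))  # 0 = chance level, 1 = perfect agreement
```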
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
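A minimal sketch of how these settings map onto `TrainingArguments` is given below; the output directory and evaluation strategy are assumptions, and the dataset/`Trainer` wiring is omitted.

```python
# Hedged sketch: the hyperparameters listed above expressed as transformers TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumed, matching the per-epoch validation table below
)
```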
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5307 | 1.0 | 535 | 0.5094 | 0.4152 |
| 0.3545 | 2.0 | 1070 | 0.5230 | 0.4940 |
| 0.2371 | 3.0 | 1605 | 0.6412 | 0.5087 |
| 0.1777 | 4.0 | 2140 | 0.7580 | 0.5406 |
| 0.1288 | 5.0 | 2675 | 0.8494 | 0.5396 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5406394412669151, "name": "Matthews Correlation"}]}]}]} | 09panesara/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7580
* Matthews Correlation: 0.5406
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.14.1
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text2text-generation | transformers | ## keyT5. Base (small) version
[![0x7o - text2keywords](https://img.shields.io/static/v1?label=0x7o&message=text2keywords&color=blue&logo=github)](https://github.com/0x7o/text2keywords "Go to GitHub repo")
[![stars - text2keywords](https://img.shields.io/github/stars/0x7o/text2keywords?style=social)](https://github.com/0x7o/text2keywords)
[![forks - text2keywords](https://img.shields.io/github/forks/0x7o/text2keywords?style=social)](https://github.com/0x7o/text2keywords)
Supported languages: ru
Github - [text2keywords](https://github.com/0x7o/text2keywords)
[Pretraining Large version](https://huggingface.co/0x7194633/keyt5-large)
|
[Pretraining Base version](https://huggingface.co/0x7194633/keyt5-base)
# Usage
Example usage (the code returns a list of keywords; duplicates are possible):
[![Try Model Training In Colab!](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/0x7o/text2keywords/blob/main/example/keyT5_use.ipynb)
```
pip install transformers sentencepiece
```
```python
from itertools import groupby
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
model_name = "0x7194633/keyt5-large" # or 0x7194633/keyt5-base
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def generate(text, **kwargs):
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(**inputs, num_beams=5, **kwargs)
s = tokenizer.decode(hypotheses[0], skip_special_tokens=True)
    # Normalise separators, lowercase, and split into keywords (drop the empty tail after the final ';').
    s = s.replace('; ', ';').replace(' ;', ';').lower().split(';')[:-1]
    # Collapse consecutive duplicates only, so non-adjacent duplicates may remain.
    s = [el for el, _ in groupby(s)]
return s
article = """Reuters сообщил об отмене 3,6 тыс. авиарейсов из-за «омикрона» и погоды
Наибольшее число отмен авиарейсов 2 января пришлось на американские авиакомпании
SkyWest и Southwest, у каждой — более 400 отмененных рейсов. При этом среди
отмененных 2 января авиарейсов — более 2,1 тыс. рейсов в США. Также свыше 6400
рейсов были задержаны."""
print(generate(article, top_p=1.0, max_length=64))
# ['авиаперевозки', 'отмена авиарейсов', 'отмена рейсов', 'отмена авиарейсов', 'отмена рейсов', 'отмена авиарейсов']
```
# Training
Go to the training notebook and learn more about it:
[![Try Model Training In Colab!](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/0x7o/text2keywords/blob/main/example/keyT5_train.ipynb)
| {"language": ["ru"], "license": "mit", "inference": {"parameters": {"top_p": 0.9}}, "widget": [{"text": "\u0412 \u0420\u043e\u0441\u0441\u0438\u0438 \u043c\u043e\u0436\u0435\u0442 \u043f\u043e\u044f\u0432\u0438\u0442\u044c\u0441\u044f \u043d\u043e\u0432\u044b\u0439 \u0448\u0442\u0430\u043c\u043c \u043a\u043e\u0440\u043e\u043d\u0430\u0432\u0438\u0440\u0443\u0441\u0430 \u00ab\u043e\u043c\u0438\u043a\u0440\u043e\u043d\u00bb, \u0447\u0442\u043e \u043c\u043e\u0436\u0435\u0442 \u043f\u0440\u0438\u0432\u0435\u0441\u0442\u0438 \u043a \u043f\u043e\u0434\u044a\u0435\u043c\u0443 \u0437\u0430\u0431\u043e\u043b\u0435\u0432\u0430\u0435\u043c\u043e\u0441\u0442\u0438 \u0432 \u044f\u043d\u0432\u0430\u0440\u0435, \u0437\u0430\u044f\u0432\u0438\u043b \u0434\u043e\u0446\u0435\u043d\u0442 \u043a\u0430\u0444\u0435\u0434\u0440\u044b \u0438\u043d\u0444\u0435\u043a\u0446\u0438\u043e\u043d\u043d\u044b\u0445 \u0431\u043e\u043b\u0435\u0437\u043d\u0435\u0439 \u0420\u0423\u0414\u041d \u0421\u0435\u0440\u0433\u0435\u0439 \u0412\u043e\u0437\u043d\u0435\u0441\u0435\u043d\u0441\u043a\u0438\u0439. \u041e\u043d \u043e\u0442\u043c\u0435\u0442\u0438\u043b, \u0447\u0442\u043e \u0432\u0430\u0440\u0438\u0430\u043d\u0442 \u00ab\u0434\u0435\u043b\u044c\u0442\u0430\u00bb \u0432\u044b\u0437\u044b\u0432\u0430\u043b \u0431\u043e\u043b\u044c\u0448\u0435 \u043b\u0435\u0442\u0430\u043b\u044c\u043d\u044b\u0445 \u0441\u043b\u0443\u0447\u0430\u0435\u0432, \u0447\u0435\u043c \u043e\u043c\u0438\u043a\u0440\u043e\u043d, \u0438\u043c\u0435\u043d\u043d\u043e \u043d\u0430 \u0444\u043e\u043d\u0435 \u00ab\u0434\u0435\u043b\u044c\u0442\u044b\u00bb \u0431\u044b\u043b\u0430 \u043c\u0430\u043a\u0441\u0438\u043c\u0430\u043b\u044c\u043d\u0430\u044f \u043b\u0435\u0442\u0430\u043b\u044c\u043d\u043e\u0441\u0442\u044c.", "example_title": "\u041a\u043e\u0440\u043e\u043d\u0430\u0432\u0438\u0440\u0443\u0441"}, {"text": "\u041d\u0430\u0447\u0430\u043b\u044c\u043d\u0438\u043a\u0430 \u0448\u0442\u0430\u0431\u0430 \u043e\u0431\u043e\u0440\u043e\u043d\u044b \u0412\u0435\u043b\u0438\u043a\u043e\u0431\u0440\u0438\u0442\u0430\u043d\u0438\u0438 \u0430\u0434\u043c\u0438\u0440\u0430\u043b\u0430 \u0422\u043e\u043d\u0438 \u0420\u0430\u0434\u0430\u043a\u0438\u043d\u0430 \u0437\u0430\u0441\u0442\u0430\u0432\u0438\u043b\u0438 \u0438\u043c\u0438\u0442\u0438\u0440\u043e\u0432\u0430\u0442\u044c \u0430\u043a\u0442\u0438\u0432\u043d\u043e\u0441\u0442\u044c \u0432\u043e \u0432\u0440\u0435\u043c\u044f \u0432\u0438\u0437\u0438\u0442\u0430 \u0432 \u0430\u043d\u0433\u0430\u0440 \u0441 \u0442\u044f\u0436\u0435\u043b\u044b\u043c \u0432\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435\u043c, \u0441\u043e\u043e\u0431\u0449\u0438\u043b\u0430 \u0431\u0440\u0438\u0442\u0430\u043d\u0441\u043a\u0430\u044f \u043f\u0440\u0435\u0441\u0441\u0430. 
\u0412 \u043f\u0440\u0438\u043a\u0430\u0437\u0435 \u0433\u043e\u0432\u043e\u0440\u0438\u043b\u043e\u0441\u044c, \u0447\u0442\u043e \u0432\u043e\u0435\u043d\u043d\u043e\u0441\u043b\u0443\u0436\u0430\u0449\u0438\u043c \u0431\u044b\u043b\u043e \u0432\u0435\u043b\u0435\u043d\u043e \u043f\u043e\u0434\u0431\u0435\u0433\u0430\u0442\u044c \u043a \u0430\u0432\u0442\u043e\u043c\u043e\u0431\u0438\u043b\u044f\u043c, \u043e\u0442\u043a\u0440\u044b\u0432\u0430\u0442\u044c \u0432\u0441\u0435 \u043b\u044e\u043a\u0438, \u0437\u0430\u0442\u0432\u043e\u0440\u044b, \u043b\u0438\u0441\u0442\u0430\u0442\u044c \u0440\u0443\u043a\u043e\u0432\u043e\u0434\u0441\u0442\u0432\u043e \u043f\u043e \u044d\u043a\u0441\u043f\u043b\u0443\u0430\u0442\u0430\u0446\u0438\u0438 \u0438 \u043e\u0441\u043c\u0430\u0442\u0440\u0438\u0432\u0430\u0442\u044c\u0441\u044f \u043c\u0430\u0448\u0438\u043d\u044b, \u0431\u0443\u0434\u0442\u043e \u043f\u0440\u043e\u0432\u043e\u0434\u0438\u0442\u0441\u044f \u0444\u0443\u043d\u043a\u0446\u0438\u043e\u043d\u0430\u043b\u044c\u043d\u044b\u0439 \u0442\u0435\u0441\u0442 \u0434\u043b\u044f \u043e\u0431\u0435\u0441\u043f\u0435\u0447\u0435\u043d\u0438\u044f \u043f\u0440\u0430\u0432\u0438\u043b\u044c\u043d\u043e\u0439 \u0440\u0430\u0431\u043e\u0442\u044b \u043e\u0431\u043e\u0440\u0443\u0434\u043e\u0432\u0430\u043d\u0438\u044f.", "example_title": "\u0411\u0440\u0438\u0442\u0430\u043d\u0438\u044f"}, {"text": "\u0414\u043b\u044f \u0432\u043e\u0441\u043f\u0440\u043e\u0438\u0437\u0432\u0435\u0434\u0435\u043d\u0438\u044f \u043c\u0443\u0437\u044b\u043a\u0438 \u0434\u043e\u0441\u0442\u0430\u0442\u043e\u0447\u043d\u043e \u043d\u0430\u0436\u0438\u043c\u0430\u0442\u044c \u043d\u0430 \u043a\u043d\u043e\u043f\u043a\u0438 \u043a\u043b\u0430\u0432\u0438\u0430\u0442\u0443\u0440\u044b. \u041a\u0430\u0436\u0434\u043e\u0439 \u043a\u043b\u0430\u0432\u0438\u0448\u0435 \u0441\u043e\u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0443\u0435\u0442 \u043e\u043f\u0440\u0435\u0434\u0435\u043b\u0435\u043d\u043d\u044b\u0439 \u0441\u0435\u043c\u043f\u043b \u2014 \u0435\u0441\u0442\u044c \u043c\u0430\u0440\u0430\u043a\u0430\u0441\u044b \u0438 \u0444\u0443\u0442\u0443\u0440\u0438\u0441\u0442\u0438\u0447\u043d\u044b\u0435 \u0437\u0432\u0443\u043a\u0438, \u043d\u0430\u043f\u043e\u043c\u0438\u043d\u0430\u044e\u0449\u0438\u0435 \u0432\u044b\u0441\u0442\u0440\u0435\u043b\u044b \u0431\u043b\u0430\u0441\u0442\u0435\u0440\u043e\u0432. \u0418\u0437 \u0432\u0441\u0435\u0433\u043e \u043c\u043d\u043e\u0433\u043e\u043e\u0431\u0440\u0430\u0437\u0438\u044f \u043c\u043e\u0436\u043d\u043e \u0444\u043e\u0440\u043c\u0438\u0440\u043e\u0432\u0430\u0442\u044c \u0441\u043e\u0431\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0435 \u043f\u0430\u0442\u0442\u0435\u0440\u043d\u044b \u0438 \u043d\u0430\u0431\u043b\u044e\u0434\u0430\u0442\u044c \u0437\u0430 \u0432\u0438\u0437\u0443\u0430\u043b\u0438\u0437\u0430\u0446\u0438\u0435\u0439 \u0441 \u0430\u043d\u0438\u043c\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u044b\u043c\u0438 \u0433\u0435\u043e\u043c\u0435\u0442\u0440\u0438\u0447\u0435\u0441\u043a\u0438\u043c\u0438 \u0444\u0438\u0433\u0443\u0440\u0430\u043c\u0438. 
\u0427\u0442\u043e \u0438\u043d\u0442\u0435\u0440\u0435\u0441\u043d\u043e, \u043d\u0430\u0436\u0430\u0442\u0438\u0435\u043c \u043a\u043b\u0430\u0432\u0438\u0448\u0438 \u043f\u0440\u043e\u0431\u0435\u043b \u043c\u043e\u0436\u043d\u043e \u043f\u043e\u043b\u043d\u043e\u0441\u0442\u044c\u044e \u043f\u0435\u0440\u0435\u043c\u0435\u043d\u0438\u0442\u044c \u043e\u0444\u043e\u0440\u043c\u043b\u0435\u043d\u0438\u0435, \u0446\u0432\u0435\u0442\u0430 \u043d\u0430 \u044d\u043a\u0440\u0430\u043d\u0435 \u0438 \u0437\u0432\u0443\u0447\u0430\u043d\u0438\u0435 \u0441\u0435\u043c\u043f\u043b\u043e\u0432.", "example_title": "\u0422\u0435\u0445\u043d\u043e\u043b\u043e\u0433\u0438\u0438"}]} | 0x7o/keyt5-base | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #t5 #text2text-generation #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| ## keyT5. Base (small) version
![0x7o - text2keywords](URL "Go to GitHub repo")
![stars - text2keywords](URL
![forks - text2keywords](URL
Supported languages: ru
Github - text2keywords
Pretraining Large version
|
Pretraining Base version
# Usage
Example usage (the code returns a list of keywords; duplicates are possible):
![Try Model Training In Colab!](URL
# Training
Go to the training notebook and learn more about it:
![Try Model Training In Colab!](URL
| [
"## keyT5. Base (small) version\n![0x7o - text2keywords](URL \"Go to GitHub repo\")\n![stars - text2keywords](URL\n![forks - text2keywords](URL\n\nSupported languages: ru\n\nGithub - text2keywords\n\n\nPretraining Large version\n|\nPretraining Base version",
"# Usage\nExample usage (the code returns a list with keywords. duplicates are possible):\n\n![Try Model Training In Colab!](URL",
"# Training\nGo to the training notebook and learn more about it:\n\n![Try Model Training In Colab!](URL"
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## keyT5. Base (small) version\n![0x7o - text2keywords](URL \"Go to GitHub repo\")\n![stars - text2keywords](URL\n![forks - text2keywords](URL\n\nSupported languages: ru\n\nGithub - text2keywords\n\n\nPretraining Large version\n|\nPretraining Base version",
"# Usage\nExample usage (the code returns a list with keywords. duplicates are possible):\n\n![Try Model Training In Colab!](URL",
"# Training\nGo to the training notebook and learn more about it:\n\n![Try Model Training In Colab!](URL"
] |
text2text-generation | transformers | ## keyT5. Large version
[![0x7o - text2keywords](https://img.shields.io/static/v1?label=0x7o&message=text2keywords&color=blue&logo=github)](https://github.com/0x7o/text2keywords "Go to GitHub repo")
[![stars - text2keywords](https://img.shields.io/github/stars/0x7o/text2keywords?style=social)](https://github.com/0x7o/text2keywords)
[![forks - text2keywords](https://img.shields.io/github/forks/0x7o/text2keywords?style=social)](https://github.com/0x7o/text2keywords)
Supported languages: ru
Github - [text2keywords](https://github.com/0x7o/text2keywords)
[Pretraining Large version](https://huggingface.co/0x7194633/keyt5-large)
|
[Pretraining Base version](https://huggingface.co/0x7194633/keyt5-base)
# Usage
Example usage (the code returns a list of keywords; duplicates are possible):
[![Try Model Training In Colab!](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/0x7o/text2keywords/blob/main/example/keyT5_use.ipynb)
```
pip install transformers sentencepiece
```
```python
from itertools import groupby
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
model_name = "0x7194633/keyt5-large" # or 0x7194633/keyt5-base
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def generate(text, **kwargs):
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(**inputs, num_beams=5, **kwargs)
s = tokenizer.decode(hypotheses[0], skip_special_tokens=True)
    # Normalise separators, lowercase, and split into keywords (drop the empty tail after the final ';').
    s = s.replace('; ', ';').replace(' ;', ';').lower().split(';')[:-1]
    # Collapse consecutive duplicates only, so non-adjacent duplicates may remain.
    s = [el for el, _ in groupby(s)]
return s
article = """Reuters сообщил об отмене 3,6 тыс. авиарейсов из-за «омикрона» и погоды
Наибольшее число отмен авиарейсов 2 января пришлось на американские авиакомпании
SkyWest и Southwest, у каждой — более 400 отмененных рейсов. При этом среди
отмененных 2 января авиарейсов — более 2,1 тыс. рейсов в США. Также свыше 6400
рейсов были задержаны."""
print(generate(article, top_p=1.0, max_length=64))
# ['авиаперевозки', 'отмена авиарейсов', 'отмена рейсов', 'отмена авиарейсов', 'отмена рейсов', 'отмена авиарейсов']
```
# Training
Go to the training notebook and learn more about it:
[![Try Model Training In Colab!](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/0x7o/text2keywords/blob/main/example/keyT5_train.ipynb)
| {"language": ["ru"], "license": "mit", "inference": {"parameters": {"top_p": 1.0}}, "widget": [{"text": "\u0412 \u0420\u043e\u0441\u0441\u0438\u0438 \u043c\u043e\u0436\u0435\u0442 \u043f\u043e\u044f\u0432\u0438\u0442\u044c\u0441\u044f \u043d\u043e\u0432\u044b\u0439 \u0448\u0442\u0430\u043c\u043c \u043a\u043e\u0440\u043e\u043d\u0430\u0432\u0438\u0440\u0443\u0441\u0430 \u00ab\u043e\u043c\u0438\u043a\u0440\u043e\u043d\u00bb, \u0447\u0442\u043e \u043c\u043e\u0436\u0435\u0442 \u043f\u0440\u0438\u0432\u0435\u0441\u0442\u0438 \u043a \u043f\u043e\u0434\u044a\u0435\u043c\u0443 \u0437\u0430\u0431\u043e\u043b\u0435\u0432\u0430\u0435\u043c\u043e\u0441\u0442\u0438 \u0432 \u044f\u043d\u0432\u0430\u0440\u0435, \u0437\u0430\u044f\u0432\u0438\u043b \u0434\u043e\u0446\u0435\u043d\u0442 \u043a\u0430\u0444\u0435\u0434\u0440\u044b \u0438\u043d\u0444\u0435\u043a\u0446\u0438\u043e\u043d\u043d\u044b\u0445 \u0431\u043e\u043b\u0435\u0437\u043d\u0435\u0439 \u0420\u0423\u0414\u041d \u0421\u0435\u0440\u0433\u0435\u0439 \u0412\u043e\u0437\u043d\u0435\u0441\u0435\u043d\u0441\u043a\u0438\u0439. \u041e\u043d \u043e\u0442\u043c\u0435\u0442\u0438\u043b, \u0447\u0442\u043e \u0432\u0430\u0440\u0438\u0430\u043d\u0442 \u00ab\u0434\u0435\u043b\u044c\u0442\u0430\u00bb \u0432\u044b\u0437\u044b\u0432\u0430\u043b \u0431\u043e\u043b\u044c\u0448\u0435 \u043b\u0435\u0442\u0430\u043b\u044c\u043d\u044b\u0445 \u0441\u043b\u0443\u0447\u0430\u0435\u0432, \u0447\u0435\u043c \u043e\u043c\u0438\u043a\u0440\u043e\u043d, \u0438\u043c\u0435\u043d\u043d\u043e \u043d\u0430 \u0444\u043e\u043d\u0435 \u00ab\u0434\u0435\u043b\u044c\u0442\u044b\u00bb \u0431\u044b\u043b\u0430 \u043c\u0430\u043a\u0441\u0438\u043c\u0430\u043b\u044c\u043d\u0430\u044f \u043b\u0435\u0442\u0430\u043b\u044c\u043d\u043e\u0441\u0442\u044c.", "example_title": "\u041a\u043e\u0440\u043e\u043d\u0430\u0432\u0438\u0440\u0443\u0441"}, {"text": "\u041d\u0430\u0447\u0430\u043b\u044c\u043d\u0438\u043a\u0430 \u0448\u0442\u0430\u0431\u0430 \u043e\u0431\u043e\u0440\u043e\u043d\u044b \u0412\u0435\u043b\u0438\u043a\u043e\u0431\u0440\u0438\u0442\u0430\u043d\u0438\u0438 \u0430\u0434\u043c\u0438\u0440\u0430\u043b\u0430 \u0422\u043e\u043d\u0438 \u0420\u0430\u0434\u0430\u043a\u0438\u043d\u0430 \u0437\u0430\u0441\u0442\u0430\u0432\u0438\u043b\u0438 \u0438\u043c\u0438\u0442\u0438\u0440\u043e\u0432\u0430\u0442\u044c \u0430\u043a\u0442\u0438\u0432\u043d\u043e\u0441\u0442\u044c \u0432\u043e \u0432\u0440\u0435\u043c\u044f \u0432\u0438\u0437\u0438\u0442\u0430 \u0432 \u0430\u043d\u0433\u0430\u0440 \u0441 \u0442\u044f\u0436\u0435\u043b\u044b\u043c \u0432\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435\u043c, \u0441\u043e\u043e\u0431\u0449\u0438\u043b\u0430 \u0431\u0440\u0438\u0442\u0430\u043d\u0441\u043a\u0430\u044f \u043f\u0440\u0435\u0441\u0441\u0430. 
\u0412 \u043f\u0440\u0438\u043a\u0430\u0437\u0435 \u0433\u043e\u0432\u043e\u0440\u0438\u043b\u043e\u0441\u044c, \u0447\u0442\u043e \u0432\u043e\u0435\u043d\u043d\u043e\u0441\u043b\u0443\u0436\u0430\u0449\u0438\u043c \u0431\u044b\u043b\u043e \u0432\u0435\u043b\u0435\u043d\u043e \u043f\u043e\u0434\u0431\u0435\u0433\u0430\u0442\u044c \u043a \u0430\u0432\u0442\u043e\u043c\u043e\u0431\u0438\u043b\u044f\u043c, \u043e\u0442\u043a\u0440\u044b\u0432\u0430\u0442\u044c \u0432\u0441\u0435 \u043b\u044e\u043a\u0438, \u0437\u0430\u0442\u0432\u043e\u0440\u044b, \u043b\u0438\u0441\u0442\u0430\u0442\u044c \u0440\u0443\u043a\u043e\u0432\u043e\u0434\u0441\u0442\u0432\u043e \u043f\u043e \u044d\u043a\u0441\u043f\u043b\u0443\u0430\u0442\u0430\u0446\u0438\u0438 \u0438 \u043e\u0441\u043c\u0430\u0442\u0440\u0438\u0432\u0430\u0442\u044c\u0441\u044f \u043c\u0430\u0448\u0438\u043d\u044b, \u0431\u0443\u0434\u0442\u043e \u043f\u0440\u043e\u0432\u043e\u0434\u0438\u0442\u0441\u044f \u0444\u0443\u043d\u043a\u0446\u0438\u043e\u043d\u0430\u043b\u044c\u043d\u044b\u0439 \u0442\u0435\u0441\u0442 \u0434\u043b\u044f \u043e\u0431\u0435\u0441\u043f\u0435\u0447\u0435\u043d\u0438\u044f \u043f\u0440\u0430\u0432\u0438\u043b\u044c\u043d\u043e\u0439 \u0440\u0430\u0431\u043e\u0442\u044b \u043e\u0431\u043e\u0440\u0443\u0434\u043e\u0432\u0430\u043d\u0438\u044f.", "example_title": "\u0411\u0440\u0438\u0442\u0430\u043d\u0438\u044f"}, {"text": "\u0414\u043b\u044f \u0432\u043e\u0441\u043f\u0440\u043e\u0438\u0437\u0432\u0435\u0434\u0435\u043d\u0438\u044f \u043c\u0443\u0437\u044b\u043a\u0438 \u0434\u043e\u0441\u0442\u0430\u0442\u043e\u0447\u043d\u043e \u043d\u0430\u0436\u0438\u043c\u0430\u0442\u044c \u043d\u0430 \u043a\u043d\u043e\u043f\u043a\u0438 \u043a\u043b\u0430\u0432\u0438\u0430\u0442\u0443\u0440\u044b. \u041a\u0430\u0436\u0434\u043e\u0439 \u043a\u043b\u0430\u0432\u0438\u0448\u0435 \u0441\u043e\u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0443\u0435\u0442 \u043e\u043f\u0440\u0435\u0434\u0435\u043b\u0435\u043d\u043d\u044b\u0439 \u0441\u0435\u043c\u043f\u043b \u2014 \u0435\u0441\u0442\u044c \u043c\u0430\u0440\u0430\u043a\u0430\u0441\u044b \u0438 \u0444\u0443\u0442\u0443\u0440\u0438\u0441\u0442\u0438\u0447\u043d\u044b\u0435 \u0437\u0432\u0443\u043a\u0438, \u043d\u0430\u043f\u043e\u043c\u0438\u043d\u0430\u044e\u0449\u0438\u0435 \u0432\u044b\u0441\u0442\u0440\u0435\u043b\u044b \u0431\u043b\u0430\u0441\u0442\u0435\u0440\u043e\u0432. \u0418\u0437 \u0432\u0441\u0435\u0433\u043e \u043c\u043d\u043e\u0433\u043e\u043e\u0431\u0440\u0430\u0437\u0438\u044f \u043c\u043e\u0436\u043d\u043e \u0444\u043e\u0440\u043c\u0438\u0440\u043e\u0432\u0430\u0442\u044c \u0441\u043e\u0431\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u0435 \u043f\u0430\u0442\u0442\u0435\u0440\u043d\u044b \u0438 \u043d\u0430\u0431\u043b\u044e\u0434\u0430\u0442\u044c \u0437\u0430 \u0432\u0438\u0437\u0443\u0430\u043b\u0438\u0437\u0430\u0446\u0438\u0435\u0439 \u0441 \u0430\u043d\u0438\u043c\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u044b\u043c\u0438 \u0433\u0435\u043e\u043c\u0435\u0442\u0440\u0438\u0447\u0435\u0441\u043a\u0438\u043c\u0438 \u0444\u0438\u0433\u0443\u0440\u0430\u043c\u0438. 
\u0427\u0442\u043e \u0438\u043d\u0442\u0435\u0440\u0435\u0441\u043d\u043e, \u043d\u0430\u0436\u0430\u0442\u0438\u0435\u043c \u043a\u043b\u0430\u0432\u0438\u0448\u0438 \u043f\u0440\u043e\u0431\u0435\u043b \u043c\u043e\u0436\u043d\u043e \u043f\u043e\u043b\u043d\u043e\u0441\u0442\u044c\u044e \u043f\u0435\u0440\u0435\u043c\u0435\u043d\u0438\u0442\u044c \u043e\u0444\u043e\u0440\u043c\u043b\u0435\u043d\u0438\u0435, \u0446\u0432\u0435\u0442\u0430 \u043d\u0430 \u044d\u043a\u0440\u0430\u043d\u0435 \u0438 \u0437\u0432\u0443\u0447\u0430\u043d\u0438\u0435 \u0441\u0435\u043c\u043f\u043b\u043e\u0432.", "example_title": "\u0422\u0435\u0445\u043d\u043e\u043b\u043e\u0433\u0438\u0438"}]} | 0x7o/keyt5-large | null | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #safetensors #t5 #text2text-generation #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| ## keyT5. Large version
![0x7o - text2keywords](URL "Go to GitHub repo")
![stars - text2keywords](URL
![forks - text2keywords](URL
Supported languages: ru
Github - text2keywords
Pretraining Large version
|
Pretraining Base version
# Usage
Example usage (the code returns a list of keywords; duplicates are possible):
![Try Model Training In Colab!](URL
# Training
Go to the training notebook and learn more about it:
![Try Model Training In Colab!](URL
| [
"## keyT5. Large version\n![0x7o - text2keywords](URL \"Go to GitHub repo\")\n![stars - text2keywords](URL\n![forks - text2keywords](URL\n\nSupported languages: ru\n\nGithub - text2keywords\n\n\nPretraining Large version\n|\nPretraining Base version",
"# Usage\nExample usage (the code returns a list with keywords. duplicates are possible):\n\n![Try Model Training In Colab!](URL",
"# Training\nGo to the training notebook and learn more about it:\n\n![Try Model Training In Colab!](URL"
] | [
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## keyT5. Large version\n![0x7o - text2keywords](URL \"Go to GitHub repo\")\n![stars - text2keywords](URL\n![forks - text2keywords](URL\n\nSupported languages: ru\n\nGithub - text2keywords\n\n\nPretraining Large version\n|\nPretraining Base version",
"# Usage\nExample usage (the code returns a list with keywords. duplicates are possible):\n\n![Try Model Training In Colab!](URL",
"# Training\nGo to the training notebook and learn more about it:\n\n![Try Model Training In Colab!](URL"
] |
text-generation | transformers |
# Rick n Morty DialoGPT Model | {"tags": ["conversational"]} | 0xDEADBEA7/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Rick n Morty DialoGPT Model | [
"# Rick n Morty DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Rick n Morty DialoGPT Model"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8628
- Matthews Correlation: 0.5331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5253 | 1.0 | 535 | 0.5214 | 0.3943 |
| 0.3459 | 2.0 | 1070 | 0.5551 | 0.4693 |
| 0.2326 | 3.0 | 1605 | 0.6371 | 0.5059 |
| 0.1718 | 4.0 | 2140 | 0.7851 | 0.5111 |
| 0.1262 | 5.0 | 2675 | 0.8628 | 0.5331 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model_index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.5331291095663535}}]}]} | 123abhiALFLKFO/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8628
* Matthews Correlation: 0.5331
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.9.1
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# Jake Peralta DialoGPT Model | {"tags": ["conversational"]} | 1Basco/DialoGPT-small-jake | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Jake Peralta DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6259
- Wer: 0.3544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6744 | 0.5 | 500 | 2.9473 | 1.0 |
| 1.4535 | 1.01 | 1000 | 0.7774 | 0.6254 |
| 0.7376 | 1.51 | 1500 | 0.6923 | 0.5712 |
| 0.5848 | 2.01 | 2000 | 0.5445 | 0.5023 |
| 0.4492 | 2.51 | 2500 | 0.5148 | 0.4958 |
| 0.4006 | 3.02 | 3000 | 0.5283 | 0.4781 |
| 0.3319 | 3.52 | 3500 | 0.5196 | 0.4628 |
| 0.3424 | 4.02 | 4000 | 0.5285 | 0.4551 |
| 0.2772 | 4.52 | 4500 | 0.5060 | 0.4532 |
| 0.2724 | 5.03 | 5000 | 0.5216 | 0.4422 |
| 0.2375 | 5.53 | 5500 | 0.5376 | 0.4443 |
| 0.2279 | 6.03 | 6000 | 0.6051 | 0.4308 |
| 0.2091 | 6.53 | 6500 | 0.5084 | 0.4423 |
| 0.2029 | 7.04 | 7000 | 0.5083 | 0.4242 |
| 0.1784 | 7.54 | 7500 | 0.6123 | 0.4297 |
| 0.1774 | 8.04 | 8000 | 0.5749 | 0.4339 |
| 0.1542 | 8.54 | 8500 | 0.5110 | 0.4033 |
| 0.1638 | 9.05 | 9000 | 0.6324 | 0.4318 |
| 0.1493 | 9.55 | 9500 | 0.6100 | 0.4152 |
| 0.1591 | 10.05 | 10000 | 0.5508 | 0.4022 |
| 0.1304 | 10.55 | 10500 | 0.5090 | 0.4054 |
| 0.1234 | 11.06 | 11000 | 0.6282 | 0.4093 |
| 0.1218 | 11.56 | 11500 | 0.5817 | 0.3941 |
| 0.121 | 12.06 | 12000 | 0.5741 | 0.3999 |
| 0.1073 | 12.56 | 12500 | 0.5818 | 0.4149 |
| 0.104 | 13.07 | 13000 | 0.6492 | 0.3953 |
| 0.0934 | 13.57 | 13500 | 0.5393 | 0.4083 |
| 0.0961 | 14.07 | 14000 | 0.5510 | 0.3919 |
| 0.0965 | 14.57 | 14500 | 0.5896 | 0.3992 |
| 0.0921 | 15.08 | 15000 | 0.5554 | 0.3947 |
| 0.0751 | 15.58 | 15500 | 0.6312 | 0.3934 |
| 0.0805 | 16.08 | 16000 | 0.6732 | 0.3948 |
| 0.0742 | 16.58 | 16500 | 0.5990 | 0.3884 |
| 0.0708 | 17.09 | 17000 | 0.6186 | 0.3869 |
| 0.0679 | 17.59 | 17500 | 0.5837 | 0.3848 |
| 0.072 | 18.09 | 18000 | 0.5831 | 0.3775 |
| 0.0597 | 18.59 | 18500 | 0.6562 | 0.3843 |
| 0.0612 | 19.1 | 19000 | 0.6298 | 0.3756 |
| 0.0514 | 19.6 | 19500 | 0.6746 | 0.3720 |
| 0.061 | 20.1 | 20000 | 0.6236 | 0.3788 |
| 0.054 | 20.6 | 20500 | 0.6012 | 0.3718 |
| 0.0521 | 21.11 | 21000 | 0.6053 | 0.3778 |
| 0.0494 | 21.61 | 21500 | 0.6154 | 0.3772 |
| 0.0468 | 22.11 | 22000 | 0.6052 | 0.3747 |
| 0.0413 | 22.61 | 22500 | 0.5877 | 0.3716 |
| 0.0424 | 23.12 | 23000 | 0.5786 | 0.3658 |
| 0.0403 | 23.62 | 23500 | 0.5828 | 0.3658 |
| 0.0391 | 24.12 | 24000 | 0.5913 | 0.3685 |
| 0.0312 | 24.62 | 24500 | 0.5850 | 0.3625 |
| 0.0316 | 25.13 | 25000 | 0.6029 | 0.3611 |
| 0.0282 | 25.63 | 25500 | 0.6312 | 0.3624 |
| 0.0328 | 26.13 | 26000 | 0.6312 | 0.3621 |
| 0.0258 | 26.63 | 26500 | 0.5891 | 0.3581 |
| 0.0256 | 27.14 | 27000 | 0.6259 | 0.3546 |
| 0.0255 | 27.64 | 27500 | 0.6315 | 0.3587 |
| 0.0249 | 28.14 | 28000 | 0.6547 | 0.3579 |
| 0.025 | 28.64 | 28500 | 0.6237 | 0.3565 |
| 0.0228 | 29.15 | 29000 | 0.6187 | 0.3559 |
| 0.0209 | 29.65 | 29500 | 0.6259 | 0.3544 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
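A minimal transcription sketch, assuming the checkpoint (repo id `202015004/wav2vec2-base-timit-demo-colab` from this card's metadata) ships a processor and expects 16 kHz mono audio like its `facebook/wav2vec2-base` parent; the audio path is a placeholder and `librosa` is used only for loading and resampling.
```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "202015004/wav2vec2-base-timit-demo-colab"  # repo id from this card's metadata
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder file; resample to the 16 kHz expected by wav2vec2-base.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```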
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]} | 202015004/wav2vec2-base-timit-demo-colab | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-base-timit-demo-colab
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6259
* Wer: 0.3544
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu102
* Datasets 1.18.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# Deadpool DialoGPT Model | {"tags": ["conversational"]} | 2early4coffee/DialoGPT-medium-deadpool | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Deadpool DialoGPT Model | [
"# Deadpool DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Deadpool DialoGPT Model"
] |
text-generation | transformers |
# Deadpool DialoGPT Model | {"tags": ["conversational"]} | 2early4coffee/DialoGPT-small-deadpool | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Deadpool DialoGPT Model | [
"# Deadpool DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Deadpool DialoGPT Model"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7816
- Matthews Correlation: 0.5156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5291 | 1.0 | 535 | 0.5027 | 0.4092 |
| 0.3492 | 2.0 | 1070 | 0.5136 | 0.4939 |
| 0.2416 | 3.0 | 1605 | 0.6390 | 0.5056 |
| 0.1794 | 4.0 | 2140 | 0.7816 | 0.5156 |
| 0.1302 | 5.0 | 2675 | 0.8836 | 0.5156 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
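A minimal inference sketch, assuming the checkpoint is published under `2umm3r/distilbert-base-uncased-finetuned-cola` (the repo id in this card's metadata); the example sentence is illustrative, and which output index means "acceptable" depends on the checkpoint's label mapping, which is not documented here.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "2umm3r/distilbert-base-uncased-finetuned-cola"  # repo id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("They drank the pub dry.", return_tensors="pt")  # illustrative input
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Two-class acceptability probabilities; see the checkpoint config for the label order.
print(probs)
```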
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5155709926752544, "name": "Matthews Correlation"}]}]}]} | 2umm3r/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7816
* Matthews Correlation: 0.5156
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
feature-extraction | transformers | This is a fine-tuned GPT-2 text-generation model trained on a Hunter x Hunter TV anime series dataset.\
You can find the dataset used here: https://www.kaggle.com/bkoozy/hunter-x-hunter-subtitles
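A minimal generation sketch, assuming the checkpoint loads as a standard GPT-2 language model with its LM head (the fine-tuning notebook linked below is the authoritative reference); the prompt and sampling settings are illustrative.
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_id = "3koozy/gpt2-HxH"  # this repo's id
tokenizer = GPT2Tokenizer.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)

prompt = "Gon and Killua"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```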
You can find a Colab notebook for fine-tuning the GPT-2 model here: https://github.com/3koozy/fine-tune-gpt2-HxH/ | {} | 3koozy/gpt2-HxH | null | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us
| This is a fine-tuned GPT-2 text-generation model trained on a Hunter x Hunter TV anime series dataset.\
You can find the dataset used here: URL
You can find a Colab notebook for fine-tuning the GPT-2 model here: URL | [] | [
"TAGS\n#transformers #pytorch #gpt2 #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n"
] |
token-classification | transformers |
## Model description
This model is a fine-tuned version of macbert for spell checking in medical application scenarios. We fine-tuned the Chinese base version of macbert on a 300M dataset of 60K+ authorized medical articles. We randomly corrupted 30% of the sentences in these articles by adding noise in the form of visually or phonologically similar characters. As a result, the fine-tuned model achieves 96% accuracy on our test dataset.
## Intended uses & limitations
You can use this model directly with a pipeline for token classification:
```python
>>> from transformers import (AutoModelForTokenClassification, AutoTokenizer)
>>> from transformers import pipeline
>>> hub_model_id = "9pinus/macbert-base-chinese-medical-collation"
>>> model = AutoModelForTokenClassification.from_pretrained(hub_model_id)
>>> tokenizer = AutoTokenizer.from_pretrained(hub_model_id)
>>> classifier = pipeline('ner', model=model, tokenizer=tokenizer)
>>> result = classifier("如果病情较重,可适当口服甲肖唑片、环酯红霉素片等药物进行抗感染镇痛。")
>>> for item in result:
>>> if item['entity'] == 1:
>>> print(item)
{'entity': 1, 'score': 0.58127016, 'index': 14, 'word': '肖', 'start': 13, 'end': 14}
```
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"language": "zh", "license": "apache-2.0", "tags": ["Token Classification"], "metrics": ["precision", "recall", "f1", "accuracy"]} | 9pinus/macbert-base-chinese-medical-collation | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"Token Classification",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh"
] | TAGS
#transformers #pytorch #bert #token-classification #Token Classification #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
## Model description
This model is a fine-tuned version of macbert for spell checking in medical application scenarios. We fine-tuned the Chinese base version of macbert on a 300M dataset of 60K+ authorized medical articles. We randomly corrupted 30% of the sentences in these articles by adding noise in the form of visually or phonologically similar characters. As a result, the fine-tuned model achieves 96% accuracy on our test dataset.
## Intended uses & limitations
You can use this model directly with a pipeline for token classification:
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| [
"## Model description\r\n\r\nThis model is a fine-tuned version of macbert for the purpose of spell checking in medical application scenarios. We fine-tuned macbert Chinese base version on a 300M dataset including 60K+ authorized medical articles. We proposed to randomly confuse 30% sentences of these articles by adding noise with a either visually or phonologically resembled characters. Consequently, the fine-tuned model can achieve 96% accuracy on our test dataset.",
"## Intended uses & limitations\r\n\r\nYou can use this model directly with a pipeline for token classification:",
"### Framework versions\r\n\r\n- Transformers 4.15.0\r\n- Pytorch 1.10.1+cu113\r\n- Datasets 1.17.0\r\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #bert #token-classification #Token Classification #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## Model description\r\n\r\nThis model is a fine-tuned version of macbert for the purpose of spell checking in medical application scenarios. We fine-tuned macbert Chinese base version on a 300M dataset including 60K+ authorized medical articles. We proposed to randomly confuse 30% sentences of these articles by adding noise with a either visually or phonologically resembled characters. Consequently, the fine-tuned model can achieve 96% accuracy on our test dataset.",
"## Intended uses & limitations\r\n\r\nYou can use this model directly with a pipeline for token classification:",
"### Framework versions\r\n\r\n- Transformers 4.15.0\r\n- Pytorch 1.10.1+cu113\r\n- Datasets 1.17.0\r\n- Tokenizers 0.10.3"
] |
token-classification | transformers |
## Model description
This model is a fine-tuned version of bert-base-chinese for medicine name recognition. We fine-tuned bert-base-chinese on a 500M dataset of 100K+ authorized medical articles in which we labeled all the medicine names. The model achieves 92% accuracy on our test dataset.
## Intended use
```python
>>> from transformers import (AutoModelForTokenClassification, AutoTokenizer)
>>> from transformers import pipeline
>>> hub_model_id = "9pinus/macbert-base-chinese-medicine-recognition"
>>> model = AutoModelForTokenClassification.from_pretrained(hub_model_id)
>>> tokenizer = AutoTokenizer.from_pretrained(hub_model_id)
>>> classifier = pipeline('ner', model=model, tokenizer=tokenizer)
>>> result = classifier("如果病情较重,可适当口服甲硝唑片、环酯红霉素片、吲哚美辛片等药物进行抗感染镇痛。")
>>> for item in result:
>>> if item['entity'] == 1 or item['entity'] == 2:
>>> print(item)
{'entity': 1, 'score': 0.99999595, 'index': 13, 'word': '甲', 'start': 12, 'end': 13}
{'entity': 2, 'score': 0.9999957, 'index': 14, 'word': '硝', 'start': 13, 'end': 14}
{'entity': 2, 'score': 0.99999166, 'index': 15, 'word': '唑', 'start': 14, 'end': 15}
{'entity': 2, 'score': 0.99898833, 'index': 16, 'word': '片', 'start': 15, 'end': 16}
{'entity': 1, 'score': 0.9999864, 'index': 18, 'word': '环', 'start': 17, 'end': 18}
{'entity': 2, 'score': 0.99999404, 'index': 19, 'word': '酯', 'start': 18, 'end': 19}
{'entity': 2, 'score': 0.99999475, 'index': 20, 'word': '红', 'start': 19, 'end': 20}
{'entity': 2, 'score': 0.9999964, 'index': 21, 'word': '霉', 'start': 20, 'end': 21}
{'entity': 2, 'score': 0.9999951, 'index': 22, 'word': '素', 'start': 21, 'end': 22}
{'entity': 2, 'score': 0.9990088, 'index': 23, 'word': '片', 'start': 22, 'end': 23}
{'entity': 1, 'score': 0.9999975, 'index': 25, 'word': '吲', 'start': 24, 'end': 25}
{'entity': 2, 'score': 0.9999957, 'index': 26, 'word': '哚', 'start': 25, 'end': 26}
{'entity': 2, 'score': 0.9999945, 'index': 27, 'word': '美', 'start': 26, 'end': 27}
{'entity': 2, 'score': 0.9999933, 'index': 28, 'word': '辛', 'start': 27, 'end': 28}
{'entity': 2, 'score': 0.99949837, 'index': 29, 'word': '片', 'start': 28, 'end': 29}
```
## Training and evaluation data
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"language": ["zh"], "license": "apache-2.0", "tags": ["Token Classification"]} | 9pinus/macbert-base-chinese-medicine-recognition | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"Token Classification",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh"
] | TAGS
#transformers #pytorch #bert #token-classification #Token Classification #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
## Model description
This model is a fine-tuned version of bert-base-chinese for medicine name recognition. We fine-tuned bert-base-chinese on a 500M dataset of 100K+ authorized medical articles in which we labeled all the medicine names. The model achieves 92% accuracy on our test dataset.
## Intended use
## Training and evaluation data
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| [
"## Model description\r\nThis model is a fine-tuned version of bert-base-chinese for the purpose of medicine name recognition. We fine-tuned bert-base-chinese on a 500M dataset including 100K+ authorized medical articles on which we labeled all the medicine names. The model achieves 92% accuracy on our test dataset.",
"## Intended use",
"## Training and evaluation data",
"### Framework versions\r\n\r\n- Transformers 4.15.0\r\n- Pytorch 1.10.1+cu113\r\n- Datasets 1.17.0\r\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #bert #token-classification #Token Classification #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## Model description\r\nThis model is a fine-tuned version of bert-base-chinese for the purpose of medicine name recognition. We fine-tuned bert-base-chinese on a 500M dataset including 100K+ authorized medical articles on which we labeled all the medicine names. The model achieves 92% accuracy on our test dataset.",
"## Intended use",
"## Training and evaluation data",
"### Framework versions\r\n\r\n- Transformers 4.15.0\r\n- Pytorch 1.10.1+cu113\r\n- Datasets 1.17.0\r\n- Tokenizers 0.10.3"
] |
text-classification | transformers |
A bert-base-cased model trained on the Quora Question Pairs (QQP) dataset. The task is to predict whether two given sentences (or questions) are `not_duplicate` (label 0) or `duplicate` (label 1). The model achieves 89% evaluation accuracy.
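A minimal inference sketch, assuming the checkpoint is published under `A-bhimany-u08/bert-base-cased-qqp` (the repo id in this card's metadata) and that the label mapping is the one described above (0 = not_duplicate, 1 = duplicate); the question pair is illustrative.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "A-bhimany-u08/bert-base-cased-qqp"  # repo id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative question pair, encoded as a sentence pair.
inputs = tokenizer(
    "How do I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()

print("duplicate" if pred == 1 else "not_duplicate")  # mapping per the description above
```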
| {"datasets": ["qqp"], "inference": false} | A-bhimany-u08/bert-base-cased-qqp | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"dataset:qqp",
"autotrain_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #dataset-qqp #autotrain_compatible #region-us
|
A bert-base-cased model trained on the Quora Question Pairs (QQP) dataset. The task is to predict whether two given sentences (or questions) are 'not_duplicate' (label 0) or 'duplicate' (label 1). The model achieves 89% evaluation accuracy.
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #dataset-qqp #autotrain_compatible #region-us \n"
] |
text-generation | transformers |
@Harry Potter DialoGPT model | {"tags": ["conversational"]} | ABBHISHEK/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
@Harry Potter DialoGPT model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
feature-extraction | transformers | Pre trained on clus_ chapter only. | {} | AG/pretraining | null | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #feature-extraction #endpoints_compatible #region-us
| Pre trained on clus_ chapter only. | [] | [
"TAGS\n#transformers #pytorch #roberta #feature-extraction #endpoints_compatible #region-us \n"
] |
sentence-similarity | sentence-transformers |
# PatentSBERTa
## PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT
### Aalborg University Business School, AI: Growth-Lab
https://arxiv.org/abs/2103.11933
https://github.com/AI-Growth-Lab/PatentSBERTa
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AI-Growth-Lab/PatentSBERTa')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AI-Growth-Lab/PatentSBERTa')
model = AutoModel.from_pretrained('AI-Growth-Lab/PatentSBERTa')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
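As a rough illustration of the patent-distance use case in the title, the embeddings from the snippet above can be turned into pairwise cosine distances. Treating cosine distance as the distance measure is an assumption made here for illustration, not a statement of the authors' exact method:
```python
import torch.nn.functional as F

# Normalize the CLS embeddings computed above, then compare them pairwise
emb = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_sim = emb @ emb.T        # pairwise cosine similarities
cosine_dist = 1 - cosine_sim    # smaller values = more similar texts
print(cosine_dist)
```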
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
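A hedged sketch of how a comparable fine-tuning run could be set up with the sentence-transformers `fit()` API, mirroring the parameters listed above; the training pairs and the starting checkpoint are placeholders, since neither is given in this card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder pairs -- the actual patent texts and similarity scores are not part of this card
train_examples = [
    InputExample(texts=["claim text A", "claim text B"], label=0.8),
    InputExample(texts=["claim text C", "claim text D"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

model = SentenceTransformer("AI-Growth-Lab/PatentSBERTa")  # stand-in for the base checkpoint
train_loss = losses.CosineSimilarityLoss(model)

# Mirrors the fit() parameters above: 1 epoch, 100 warmup steps, lr 2e-5, weight decay 0.01
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```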
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```LaTeX
@article{bekamiri2021patentsberta,
title={PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT},
author={Bekamiri, Hamid and Hain, Daniel S and Jurowetzki, Roman},
journal={arXiv preprint arXiv:2103.11933},
year={2021}
}
``` | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | AI-Growth-Lab/PatentSBERTa | null | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2103.11933",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.11933"
] | [] | TAGS
#sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #transformers #arxiv-2103.11933 #endpoints_compatible #has_space #region-us
|
# PatentSBERTa
## PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT
### Aalborg University Business School, AI: Growth-Lab
URL
URL
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 5 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# PatentSBERTa",
"## PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT",
"### Aalborg University Business School, AI: Growth-Lab \n\nURL\n\nURL\n\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 5 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #transformers #arxiv-2103.11933 #endpoints_compatible #has_space #region-us \n",
"# PatentSBERTa",
"## PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT",
"### Aalborg University Business School, AI: Growth-Lab \n\nURL\n\nURL\n\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 5 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
text2text-generation | transformers |
# Model Trained Using AutoNLP
- Problem type: Machine Translation
- Model ID: 474612462
- CO2 Emissions (in grams): 133.0219882109991
## Validation Metrics
- Loss: 1.336498737335205
- Rouge1: 52.5404
- Rouge2: 31.6639
- RougeL: 50.1696
- RougeLsum: 50.3398
- Gen Len: 39.046
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/EricPeter/autonlp-EN-LUG-474612462
``` | {"language": "unk", "tags": "autonlp", "datasets": ["Eric Peter/autonlp-data-EN-LUG"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 133.0219882109991} | AI-Lab-Makerere/en_lg | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autonlp",
"unk",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"unk"
] | TAGS
#transformers #pytorch #marian #text2text-generation #autonlp #unk #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Machine Translation
- Model ID: 474612462
- CO2 Emissions (in grams): 133.0219882109991
## Validation Metrics
- Loss: 1.336498737335205
- Rouge1: 52.5404
- Rouge2: 31.6639
- RougeL: 50.1696
- RougeLsum: 50.3398
- Gen Len: 39.046
## Usage
You can use cURL to access this model:
| [
"# Model Trained Using AutoNLP\n\n- Problem type: Machine Translation\n- Model ID: 474612462\n- CO2 Emissions (in grams): 133.0219882109991",
"## Validation Metrics\n\n- Loss: 1.336498737335205\n- Rouge1: 52.5404\n- Rouge2: 31.6639\n- RougeL: 50.1696\n- RougeLsum: 50.3398\n- Gen Len: 39.046",
"## Usage\n\nYou can use cURL to access this model:"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #autonlp #unk #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Machine Translation\n- Model ID: 474612462\n- CO2 Emissions (in grams): 133.0219882109991",
"## Validation Metrics\n\n- Loss: 1.336498737335205\n- Rouge1: 52.5404\n- Rouge2: 31.6639\n- RougeL: 50.1696\n- RougeLsum: 50.3398\n- Gen Len: 39.046",
"## Usage\n\nYou can use cURL to access this model:"
] |
text2text-generation | transformers |
# Model Trained Using AutoNLP
- Problem type: Machine Translation
- Model ID: 475112539
- CO2 Emissions (in grams): 126.34446293851818
## Validation Metrics
- Loss: 1.5376628637313843
- Rouge1: 62.4613
- Rouge2: 39.4759
- RougeL: 58.183
- RougeLsum: 58.226
- Gen Len: 26.5644
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/EricPeter/autonlp-MarianMT_lg_en-475112539
``` | {"language": "unk", "tags": "autonlp", "datasets": ["EricPeter/autonlp-data-MarianMT_lg_en"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 126.34446293851818} | AI-Lab-Makerere/lg_en | null | [
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"autonlp",
"unk",
"dataset:EricPeter/autonlp-data-MarianMT_lg_en",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"unk"
] | TAGS
#transformers #pytorch #safetensors #marian #text2text-generation #autonlp #unk #dataset-EricPeter/autonlp-data-MarianMT_lg_en #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Machine Translation
- Model ID: 475112539
- CO2 Emissions (in grams): 126.34446293851818
## Validation Metrics
- Loss: 1.5376628637313843
- Rouge1: 62.4613
- Rouge2: 39.4759
- RougeL: 58.183
- RougeLsum: 58.226
- Gen Len: 26.5644
## Usage
You can use cURL to access this model:
| [
"# Model Trained Using AutoNLP\n\n- Problem type: Machine Translation\n- Model ID: 475112539\n- CO2 Emissions (in grams): 126.34446293851818",
"## Validation Metrics\n\n- Loss: 1.5376628637313843\n- Rouge1: 62.4613\n- Rouge2: 39.4759\n- RougeL: 58.183\n- RougeLsum: 58.226\n- Gen Len: 26.5644",
"## Usage\n\nYou can use cURL to access this model:"
] | [
"TAGS\n#transformers #pytorch #safetensors #marian #text2text-generation #autonlp #unk #dataset-EricPeter/autonlp-data-MarianMT_lg_en #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Machine Translation\n- Model ID: 475112539\n- CO2 Emissions (in grams): 126.34446293851818",
"## Validation Metrics\n\n- Loss: 1.5376628637313843\n- Rouge1: 62.4613\n- Rouge2: 39.4759\n- RougeL: 58.183\n- RougeLsum: 58.226\n- Gen Len: 26.5644",
"## Usage\n\nYou can use cURL to access this model:"
] |
fill-mask | transformers |
# A Swedish Bert model
## Model description
This model follows the Bert Large model architecture as implemented in the [Megatron-LM framework](https://github.com/NVIDIA/Megatron-LM). It was trained with a batch size of 512 for 600k steps. The model contains the following parameters:
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 340M |
| \\(n_{layers}\\) | 24 |
| \\(n_{heads}\\) | 16 |
| \\(n_{ctx}\\) | 1024 |
| \\(n_{vocab}\\) | 30592 |
## Training data
The model is pretrained on a Swedish text corpus of around 85 GB from a variety of sources as shown below.
| Dataset | Genre | Size(GB)|
|----------------------|------|------|
| Anföranden | Politics |0.9|
|DCEP|Politics|0.6|
|DGT|Politics|0.7|
|Fass|Medical|0.6|
|Författningar|Legal|0.1|
|Web data|Misc|45.0|
|JRC|Legal|0.4|
|Litteraturbanken|Books|0.3|
|SCAR|Misc|28.0|
|SOU|Politics|5.3|
|Subtitles|Drama|1.3|
|Wikipedia|Facts|1.8|
## Intended uses & limitations
The raw model can be used for the usual tasks of masked language modeling or next sentence prediction. It is also often fine-tuned on a downstream task to improve its performance in a specific domain/task.
<br>
<br>
## How to use
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("AI-Nordics/bert-large-swedish-cased")
model = AutoModelForMaskedLM.from_pretrained("AI-Nordics/bert-large-swedish-cased")
```
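Continuing the snippet above, a minimal fill-mask sketch; the Swedish example sentence is made up for illustration:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask(f"Huvudstaden i Sverige är {tokenizer.mask_token}."):
    print(prediction["token_str"], round(prediction["score"], 3))
```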
| {"language": "sv"} | AI-Nordics/bert-large-swedish-cased | null | [
"transformers",
"pytorch",
"megatron-bert",
"fill-mask",
"sv",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sv"
] | TAGS
#transformers #pytorch #megatron-bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us
| A Swedish Bert model
====================
Model description
-----------------
This model follows the Bert Large model architecture as implemented in Megatron-LM framework. It was trained with a batch size of 512 in 600k steps. The model contains following parameters:
Training data
-------------
The model is pretrained on a Swedish text corpus of around 85 GB from a variety of sources as shown below.
Dataset: Anföranden, Genre: Politics, Size(GB): 0.9
Dataset: DCEP, Genre: Politics, Size(GB): 0.6
Dataset: DGT, Genre: Politics, Size(GB): 0.7
Dataset: Fass, Genre: Medical, Size(GB): 0.6
Dataset: Författningar, Genre: Legal, Size(GB): 0.1
Dataset: Web data, Genre: Misc, Size(GB): 45.0
Dataset: JRC, Genre: Legal, Size(GB): 0.4
Dataset: Litteraturbanken, Genre: Books, Size(GB): 0.3
Dataset: SCAR, Genre: Misc, Size(GB): 28.0
Dataset: SOU, Genre: Politics, Size(GB): 5.3
Dataset: Subtitles, Genre: Drama, Size(GB): 1.3
Dataset: Wikipedia, Genre: Facts, Size(GB): 1.8
Intended uses & limitations
---------------------------
The raw model can be used for the usual tasks of masked language modeling or next sentence prediction. It is also often fine-tuned on a downstream task to improve its performance in a specific domain/task.
How to use
----------
'''python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from\_pretrained("AI-Nordics/bert-large-swedish-cased")
model = AutoModelForMaskedLM.from\_pretrained("AI-Nordics/bert-large-swedish-cased")
| [] | [
"TAGS\n#transformers #pytorch #megatron-bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us \n"
] |
sentence-similarity | sentence-transformers |
# AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2')
model = AutoModel.from_pretrained('AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
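Since clustering is one of the tasks mentioned above, here is a minimal clustering sketch; it assumes scikit-learn is available, and the corpus is made up for illustration:
```python
from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer

corpus = [
    "A man is eating food.",
    "A man is eating a piece of bread.",
    "The girl is carrying a baby.",
    "A woman is playing violin.",
]
model = SentenceTransformer('AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2')
embeddings = model.encode(corpus)

# Group the sentence embeddings into two clusters
kmeans = KMeans(n_clusters=2, random_state=0).fit(embeddings)
for sentence, label in zip(corpus, kmeans.labels_):
    print(label, sentence)
```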
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1438 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 2,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 288,
"weight_decay": 0.05
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2 | null | [
"sentence-transformers",
"pytorch",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 1438 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1438 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1438 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1')
model = AutoModel.from_pretrained('AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
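As a small illustration of the paraphrase use case, a hedged sketch using the paraphrase-mining utility from sentence-transformers; the sentences are made up for illustration:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1')
sentences = [
    "The cat sits outside",
    "A cat is sitting outdoors",
    "He plays the guitar",
    "Someone is playing guitar",
]

# Returns [score, i, j] triples sorted by decreasing similarity
pairs = util.paraphrase_mining(model, sentences)
for score, i, j in pairs[:3]:
    print(f"{score:.3f}  {sentences[i]}  <->  {sentences[j]}")
```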
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1438 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 2,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 4e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 288,
"weight_decay": 0.1
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1 | null | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 1438 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1438 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1438 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity | sentence-transformers |
# AIDA-UPM/MSTSb_stsb-xlm-r-multilingual
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AIDA-UPM/MSTSb_stsb-xlm-r-multilingual')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AIDA-UPM/MSTSb_stsb-xlm-r-multilingual')
model = AutoModel.from_pretrained('AIDA-UPM/MSTSb_stsb-xlm-r-multilingual')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
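A minimal semantic-search sketch with the sentence-transformers utilities, covering one of the tasks mentioned above; the corpus and query are made up for illustration:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('AIDA-UPM/MSTSb_stsb-xlm-r-multilingual')
corpus = ["A man is eating food.", "A monkey is playing drums.", "Someone is cooking dinner."]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("What is being cooked?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))
```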
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1438 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 4e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | AIDA-UPM/MSTSb_stsb-xlm-r-multilingual | null | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 1438 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1438 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1438 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-classification | transformers |
# bertweet-base-multi-mami
This is a Bertweet model: It maps sentences & paragraphs to a 768 dimensional dense vector space and classifies them into 5 labels in a multi-label setup.
# Multilabels
label2id={
"misogynous": 0,
"shaming": 1,
"stereotype": 2,
"objectification": 3,
"violence": 4,
},
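A minimal scoring sketch, assuming the usual multi-label setup of one independent sigmoid per label (any threshold, e.g. 0.5, is left to the user); the example text is the widget example from this card's metadata, and the label order follows the mapping above:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AIDA-UPM/bertweet-base-multi-mami")
model = AutoModelForSequenceClassification.from_pretrained("AIDA-UPM/bertweet-base-multi-mami")

labels = ["misogynous", "shaming", "stereotype", "objectification", "violence"]
text = "Women wear yoga pants because men don't stare at their personality"

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits)[0]  # one independent probability per label
for label, prob in zip(labels, probs.tolist()):
    print(f"{label}: {prob:.2f}")
```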
| {"language": "en", "license": "apache-2.0", "tags": ["text-classification", "misogyny"], "pipeline_tag": "text-classification", "widget": [{"text": "Women wear yoga pants because men don't stare at their personality", "example_title": "Misogyny detection"}]} | AIDA-UPM/bertweet-base-multi-mami | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"misogyny",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #roberta #text-classification #misogyny #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bertweet-base-multi-mami
This is a Bertweet model: It maps sentences & paragraphs to a 768 dimensional dense vector space and classifies them into 5 multi labels.
# Multilabels
label2id={
"misogynous": 0,
"shaming": 1,
"stereotype": 2,
"objectification": 3,
"violence": 4,
},
| [
"# bertweet-base-multi-mami\nThis is a Bertweet model: It maps sentences & paragraphs to a 768 dimensional dense vector space and classifies them into 5 multi labels.",
"# Multilabels\n label2id={\n \"misogynous\": 0,\n \"shaming\": 1,\n \"stereotype\": 2,\n \"objectification\": 3,\n \"violence\": 4,\n },"
] | [
"TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #misogyny #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bertweet-base-multi-mami\nThis is a Bertweet model: It maps sentences & paragraphs to a 768 dimensional dense vector space and classifies them into 5 multi labels.",
"# Multilabels\n label2id={\n \"misogynous\": 0,\n \"shaming\": 1,\n \"stereotype\": 2,\n \"objectification\": 3,\n \"violence\": 4,\n },"
] |
sentence-similarity | transformers |
# mstsb-paraphrase-multilingual-mpnet-base-v2
This is a fine-tuned version of the `paraphrase-multilingual-mpnet-base-v2` [sentence-transformers](https://www.SBERT.net) model, trained on the [Semantic Textual Similarity Benchmark](http://ixa2.si.ehu.eus/stswiki/index.php/Main_Page) extended to 15 languages: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering, semantic search and measuring the similarity between two sentences.
<!--- Describe your model here -->
This model is a fine-tuned version of `paraphrase-multilingual-mpnet-base-v2` for semantic textual similarity with multilingual data. The dataset used for this fine-tuning is STSb extended to 15 languages with Google Translator. To maintain data quality, sentence pairs with a confidence value below 0.7 were dropped. The extended dataset is available at [GitHub](https://github.com/Huertas97/Multilingual-STSB). The languages included in the extended version are: ar, cs, de, en, es, fr, hi, it, ja, nl, pl, pt, ru, tr, zh-CN, zh-TW. The pooling operation used to condense the word embeddings into a sentence embedding is mean pooling (more info below).
<!-- ## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
# It support several languages
sentences = ["This is an example sentence", "Esta es otra frase de ejemplo", "最後の例文"]
# The pooling technique is automatically detected (mean pooling)
model = SentenceTransformer('mstsb-paraphrase-multilingual-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
``` -->
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# We should define the proper pooling function: Mean pooling
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["This is an example sentence", "Esta es otra frase de ejemplo", "最後の例文"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2')
model = AutoModel.from_pretrained('AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
Check the test results in the Semantic Textual Similarity Tasks. The 15 languages available at the [Multilingual STSB](https://github.com/Huertas97/Multilingual-STSB) have been combined into monolingual and cross-lingual tasks, giving a total of 31 tasks. Monolingual tasks have both sentences from the same language source (e.g., Ar-Ar, Es-Es), while cross-lingual tasks have two sentences, each in a different language being one of them English (e.g., en-ar, en-es).
Here we compare the average multilingual semantic textual similarity capabilities between the `paraphrase-multilingual-mpnet-base-v2` based model and the `mstsb-paraphrase-multilingual-mpnet-base-v2` fine-tuned model across the 31 tasks. It is worth noting that both models are multilingual, but the second model is adjusted with multilingual data for semantic similarity. The average of correlation coefficients is computed by transforming each correlation coefficient to a Fisher's z value, averaging them, and then back-transforming to a correlation coefficient (a short sketch of this averaging follows the table below).
| Model | Average Spearman Cosine Test |
|:---------------------------------------------:|:------------------------------:|
| mstsb-paraphrase-multilingual-mpnet-base-v2 | 0.835890 |
| paraphrase-multilingual-mpnet-base-v2 | 0.818896 |
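A minimal sketch of the Fisher-z averaging described above, using a few rounded per-task values from the tables below as stand-ins:
```python
import numpy as np

correlations = np.array([0.874, 0.828, 0.849])   # per-task Spearman correlations (stand-ins)
z_values = np.arctanh(correlations)              # Fisher z-transform
mean_correlation = np.tanh(z_values.mean())      # average in z-space, then back-transform
print(round(float(mean_correlation), 6))
```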
<br>
The following tables break down the performance of `mstsb-paraphrase-multilingual-mpnet-base-v2` according to the different tasks. For the sake of readability, tasks have been split into monolingual and cross-lingual tasks.
| Monolingual Task | Pearson Cosine test | Spearman Cosine test |
|:------------------:|:---------------------:|:-----------------------:|
| en;en | 0.868048310692506 | 0.8740170943535747 |
| ar;ar | 0.8267139454193487 | 0.8284459741532022 |
| cs;cs | 0.8466821720942157 | 0.8485417688803879 |
| de;de | 0.8517285961812183 | 0.8557680051557893 |
| es;es | 0.8519185309064691 | 0.8552243211580456 |
| fr;fr | 0.8430951067985064 | 0.8466614534379704 |
| hi;hi | 0.8178258630578092 | 0.8176462079184331 |
| it;it | 0.8475909574305637 | 0.8494216064459076 |
| ja;ja | 0.8435588859386477 | 0.8456031494178619 |
| nl;nl | 0.8486765104527032 | 0.8520856765262531 |
| pl;pl | 0.8407840177883407 | 0.8443070467300299 |
| pt;pt | 0.8534880178249296 | 0.8578544068829622 |
| ru;ru | 0.8390897585455678 | 0.8423041443534423 |
| tr;tr | 0.8382125451820572 | 0.8421587450058385 |
| zh-CN;zh-CN | 0.826233678946644 | 0.8248515460782744 |
| zh-TW;zh-TW | 0.8242683809675422 | 0.8235506799952028 |
<br>
| Cross-lingual Task | Pearson Cosine test | Spearman Cosine test |
|:--------------------:|:---------------------:|:-----------------------:|
| en;ar | 0.7990830340462535 | 0.7956792016468148 |
| en;cs | 0.8381274879061265 | 0.8388713450024455 |
| en;de | 0.8414439600928739 | 0.8441971698649943 |
| en;es | 0.8442337511356952 | 0.8445035292903559 |
| en;fr | 0.8378437644605063 | 0.8387903367907733 |
| en;hi | 0.7951955086055527 | 0.7905052217683244 |
| en;it | 0.8415686372978766 | 0.8419480899107785 |
| en;ja | 0.8094306665283388 | 0.8032512280936449 |
| en;nl | 0.8389526140129767 | 0.8409310421803277 |
| en;pl | 0.8261309163979578 | 0.825976253023656 |
| en;pt | 0.8475546209070765 | 0.8506606391790897 |
| en;ru | 0.8248514914263723 | 0.8224871183202255 |
| en;tr | 0.8191803661207868 | 0.8194200775744044 |
| en;zh-CN | 0.8147678083378249 | 0.8102089470690433 |
| en;zh-TW | 0.8107272160374955 | 0.8056129680510944 |
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 687 with parameters:
```
{'batch_size': 132, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 2,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 140,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"language": "multilingual", "tags": ["feature-extraction", "sentence-similarity", "transformers", "multilingual"], "pipeline_tag": "sentence-similarity"} | AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"multilingual",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #multilingual #endpoints_compatible #region-us
| mstsb-paraphrase-multilingual-mpnet-base-v2
===========================================
This is a fine-tuned version of 'paraphrase-multilingual-mpnet-base-v2' from sentence-transformers model with Semantic Textual Similarity Benchmark extended to 15 languages: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering, semantic search and measuring the similarity between two sentences.
This model is fine-tuned version of 'paraphrase-multilingual-mpnet-base-v2' for semantic textual similarity with multilingual data. The dataset used for this fine-tuning is STSb extended to 15 languages with Google Translator. For mantaining data quality the sentence pairs with a confidence value below 0.7 were dropped. The extended dataset is available at GitHub. The languages included in the extended version are: ar, cs, de, en, es, fr, hi, it, ja, nl, pl, pt, ru, tr, zh-CN, zh-TW. The pooling operation used to condense the word embeddings into a sentence embedding is mean pooling (more info below).
Usage (HuggingFace Transformers)
--------------------------------
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
Evaluation Results
------------------
Check the test results in the Semantic Textual Similarity Tasks. The 15 languages available at the Multilingual STSB have been combined into monolingual and cross-lingual tasks, giving a total of 31 tasks. Monolingual tasks have both sentences from the same language source (e.g., Ar-Ar, Es-Es), while cross-lingual tasks have two sentences, each in a different language being one of them English (e.g., en-ar, en-es).
Here we compare the average multilingual semantic textual similairty capabilities between the 'paraphrase-multilingual-mpnet-base-v2' based model and the 'mstsb-paraphrase-multilingual-mpnet-base-v2' fine-tuned model across the 31 tasks. It is worth noting that both models are multilingual, but the second model is adjusted with multilingual data for semantic similarity. The average of correlation coefficients is computed by transforming each correlation coefficient to a Fisher's z value, averaging them, and then back-transforming to a correlation coefficient.
The following tables breakdown the performance of 'mstsb-paraphrase-multilingual-mpnet-base-v2' according to the different tasks. For the sake of readability tasks have been splitted into monolingual and cross-lingual tasks.
Training
--------
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 687 with parameters:
Loss:
'sentence\_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
Full Model Architecture
-----------------------
Citing & Authors
----------------
| [] | [
"TAGS\n#transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #multilingual #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
This is a fine-tuned XLM-RoBERTa model for natural language inference. It has been trained with a massive amount of data following the ANLI training pipeline. We include data from:
- [mnli](https://cims.nyu.edu/~sbowman/multinli/) {train, dev and test}
- [snli](https://nlp.stanford.edu/projects/snli/) {train, dev and test}
- [xnli](https://github.com/facebookresearch/XNLI) {train, dev and test}
- [fever](https://fever.ai/resources.html) {train, dev and test}
- [anli](https://github.com/facebookresearch/anli) {train}
The model is validated on ANLI training sets, including R1, R2 and R3. The following results can be expected on the testing splits.
| Split | Accuracy
| - | - |
| R1 | 0.6610
| R2 | 0.4990
| R3 | 0.4425
# Multilabels
label2id={
"contradiction": 0,
"entailment": 1,
"neutral": 2,
},
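A minimal inference sketch, assuming the standard premise/hypothesis encoding for NLI; the example pair is taken from this card's widget text, and the label order follows the mapping above:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "AIDA-UPM/xlm-roberta-large-snli_mnli_xnli_fever_r1_r2_r3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

labels = ["contradiction", "entailment", "neutral"]
premise = "Las mascarillas causan hipoxia."
hypothesis = "Wearing masks is harmful to human health"

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

for label, prob in zip(labels, probs.tolist()):
    print(f"{label}: {prob:.3f}")
```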
| {"language": "en", "license": "apache-2.0", "tags": ["natural-language-inference", "misogyny"], "pipeline_tag": "text-classification", "widget": [{"text": "Las mascarillas causan hipoxia. Wearing masks is harmful to human health", "example_title": "Natural Language Inference"}]} | AIDA-UPM/xlm-roberta-large-snli_mnli_xnli_fever_r1_r2_r3 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"natural-language-inference",
"misogyny",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #xlm-roberta #text-classification #natural-language-inference #misogyny #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
 | This is a fine-tuned XLM-RoBERTa model for natural language inference. It has been trained with a massive amount of data following the ANLI training pipeline. We include data from:
* mnli {train, dev and test}
* snli {train, dev and test}
* xnli {train, dev and test}
* fever {train, dev and test}
* anli {train}
The model is validated on ANLI training sets, including R1, R2 and R3. The following results can be expected on the testing splits.
Multilabels
===========
```
label2id={
"contradiction": 0,
"entailment": 1,
"neutral": 2,
},
```
| [] | [
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #natural-language-inference #misogyny #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# tests | {"tags": ["conversational"]} | AIDynamics/DialoGPT-medium-MentorDealerGuy | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# tests | [
"# tests"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# tests"
] |
text-generation | transformers |
# Uses DialoGPT | {"tags": ["conversational"]} | AJ/DialoGPT-small-ricksanchez | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Uses DialoGPT | [
"# Uses DialoGPT"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Uses DialoGPT"
] |
text-generation | transformers |
# its rick from rick and morty | {"tags": ["conversational", "humor"]} | AJ/rick-discord-bot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"humor",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #humor #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# its rick from rick and morty | [
"# its rick from rick and morty"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #humor #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# its rick from rick and morty"
] |
text-generation | null | # uses dialogpt | {"tags": ["conversational", "funny"]} | AJ/rick-sanchez-bot | null | [
"conversational",
"funny",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#conversational #funny #region-us
| # uses dialogpt | [
"# uses dialogpt"
] | [
"TAGS\n#conversational #funny #region-us \n",
"# uses dialogpt"
] |
text-generation | transformers |
# Harry Potter DialoGPT model | {"tags": ["conversational"]} | AJ-Dude/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT model | [
"# Harry Potter DialoGPT model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT model"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | AK270802/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
automatic-speech-recognition | transformers |
# wav2vec2-base-timit-epochs10
This model is a fine-tuned version of [AKulk/wav2vec2-base-timit-epochs5](https://huggingface.co/AKulk/wav2vec2-base-timit-epochs5) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
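
As an illustration only, the hyperparameters listed above map onto a `TrainingArguments` configuration roughly like the sketch below. The total train batch size of 80 is the per-device batch size of 16 multiplied by 5 gradient-accumulation steps, and the Adam betas/epsilon shown above are the library defaults. The dataset wiring is a placeholder, since the card does not name its training data.

```python
# Hedged sketch: a TrainingArguments configuration mirroring the hyperparameters
# listed above. The training data is a placeholder; the card does not specify it.
from transformers import Trainer, TrainingArguments, Wav2Vec2ForCTC, Wav2Vec2Processor

base_checkpoint = "AKulk/wav2vec2-base-timit-epochs5"  # base model named in the card
processor = Wav2Vec2Processor.from_pretrained(base_checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(base_checkpoint)

training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-epochs10",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=5,  # 16 * 5 = effective train batch size of 80
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=5,
    fp16=True,                      # "Native AMP" mixed precision; requires a CUDA device
)

train_dataset = None  # placeholder: the card does not name its training data
eval_dataset = None   # placeholder

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=processor.feature_extractor,
)
# trainer.train()  # would continue fine-tuning for 5 more epochs
```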
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-epochs10", "results": []}]} | AKulk/wav2vec2-base-timit-epochs10 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-base-timit-epochs10
This model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs5 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
| [
"# wav2vec2-base-timit-epochs10\n\nThis model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs5 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 5\n- total_train_batch_size: 80\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-base-timit-epochs10\n\nThis model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs5 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 5\n- total_train_batch_size: 80\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
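Since the card above documents no inference example, here is a minimal, hedged sketch of transcribing audio with the fine-tuned checkpoint through the `automatic-speech-recognition` pipeline; the audio path is a placeholder, not a file referenced by the card.

```python
# Hedged sketch: minimal transcription with the ASR pipeline.
# "sample.wav" is a placeholder path, not a file from the card.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="AKulk/wav2vec2-base-timit-epochs10")
print(asr("sample.wav")["text"])  # wav2vec2 models expect 16 kHz mono audio
```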