| Column | Type |
|---|---|
| sha | null |
| last_modified | null |
| library_name | stringclasses (154 values) |
| text | stringlengths (1–900k) |
| metadata | stringlengths (2–348k) |
| pipeline_tag | stringclasses (45 values) |
| id | stringlengths (5–122) |
| tags | sequencelengths (1–1.84k) |
| created_at | stringlengths (25) |
| arxiv | sequencelengths (0–201) |
| languages | sequencelengths (0–1.83k) |
| tags_str | stringlengths (17–9.34k) |
| text_str | stringlengths (0–389k) |
| text_lists | sequencelengths (0–722) |
| processed_texts | sequencelengths (1–723) |
| tokens_length | sequencelengths (1–723) |
| input_texts | sequencelengths (1–61) |
| embeddings | sequencelengths (768) |

sha: null | last_modified: null | library_name: transformers | text:
# ALBERT Base v1
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). Like all ALBERT models, this model is uncased: it makes no distinction
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is unusual in that it shares its layer weights across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers.
This is the first version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
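
For reference, these hyperparameters map onto `AlbertConfig` fields as in the sketch below (an illustration; fields not listed keep their library defaults, and the released checkpoint's `config.json` is the authoritative source):

```python
from transformers import AlbertConfig, AlbertModel

# Sketch of a configuration matching the numbers above; the vocabulary size
# comes from the SentencePiece vocabulary described later in this card.
config = AlbertConfig(
    vocab_size=30000,
    embedding_size=128,      # embedding dimension
    hidden_size=768,         # hidden dimension
    num_hidden_layers=12,    # repeating layers (weights are shared across them)
    num_attention_heads=12,
)
model = AlbertModel(config)  # randomly initialized, for inspection only
print(sum(p.numel() for p in model.parameters()))  # roughly the 11M quoted above
```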
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
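
A sketch of what fine-tuning could start from for sequence classification (the label count and example text below are placeholders; the classification head is randomly initialized until trained, e.g. with the Trainer API):

```python
from transformers import AlbertTokenizer, AlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
# Adds a randomly initialized classification head on top of the pretrained encoder.
model = AlbertForSequenceClassification.from_pretrained('albert-base-v1', num_labels=2)

inputs = tokenizer("Replace me by any text you'd like.", return_tensors='pt')
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])
```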
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-base-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"▁modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"▁modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"▁model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"▁runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"▁lingerie"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
model = AlbertModel.from_pretrained("albert-base-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
model = TFAlbertModel.from_pretrained("albert-base-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-base-v1')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"▁chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"▁janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"▁shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"▁blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"▁lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"▁receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"▁janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"▁paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"▁chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"▁waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
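
Encoding a sentence pair with the tokenizer reproduces this layout; a small sketch (the two sentences are placeholders):

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.decode(encoded["input_ids"]))
# prints something like: [CLS] sentence a [SEP] sentence b [SEP]
```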
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
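
In the `transformers` library, this 80/10/10 scheme is what `DataCollatorForLanguageModeling` applies when preparing MLM batches; a minimal sketch (the example sentence is a placeholder):

```python
from transformers import AlbertTokenizer, DataCollatorForLanguageModeling

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,  # 15% of tokens are selected for prediction
)

batch = collator([tokenizer("The quick brown fox jumps over the lazy dog.")])
# Selected positions are replaced by [MASK] 80% of the time, by a random token
# 10% of the time, and left unchanged 10% of the time; labels are -100 everywhere
# except at the selected positions.
print(batch["input_ids"])
print(batch["labels"])
```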
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| Model          | Average | SQuAD1.1  | SQuAD2.0  | MNLI | SST-2 | RACE |
|----------------|---------|-----------|-----------|------|-------|------|
| V2             |         |           |           |      |       |      |
| ALBERT-base    | 82.3    | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9  | 66.8 |
| ALBERT-large   | 85.7    | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9  | 75.2 |
| ALBERT-xlarge  | 87.9    | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4  | 80.7 |
| ALBERT-xxlarge | 90.9    | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8  | 86.8 |
| V1             |         |           |           |      |       |      |
| ALBERT-base    | 80.1    | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3  | 64.0 |
| ALBERT-large   | 82.4    | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7  | 68.5 |
| ALBERT-xlarge  | 85.5    | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4  | 74.8 |
| ALBERT-xxlarge | 91.0    | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9  | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=albert-base-v1">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| metadata: {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]} | pipeline_tag: fill-mask | id: albert/albert-base-v1 | tags: [
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | created_at: 2022-03-02T23:29:04+00:00 | arxiv: [
"1909.11942"
] | languages: [
"en"
] | (tags_str: duplicate of the tags above) | (text_str, text_lists, processed_texts: plain-text duplicates of the model card above) | tokens_length: [78, 49, 102, 42, 135, 30] | (input_texts: duplicate passage of the model card above) | (embeddings: 768-dimensional float vector omitted) |
sha: null | last_modified: null | library_name: transformers | text:
# ALBERT Base v2
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). Like all ALBERT models, this model is uncased: it makes no distinction
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is unusual in that it shares its layer weights across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers.
This is the second version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
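
These values can be cross-checked against the published configuration; a small sketch using `AlbertConfig` field names:

```python
from transformers import AlbertConfig

config = AlbertConfig.from_pretrained('albert-base-v2')
print(config.num_hidden_layers)    # 12 repeating layers
print(config.embedding_size)       # 128 embedding dimension
print(config.hidden_size)          # 768 hidden dimension
print(config.num_attention_heads)  # 12 attention heads
```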
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
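
A sketch of what fine-tuning could start from for question answering (the question and context below are placeholders; the span-prediction head is randomly initialized until fine-tuned, e.g. on SQuAD):

```python
import torch
from transformers import AlbertTokenizer, AlbertForQuestionAnswering

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForQuestionAnswering.from_pretrained('albert-base-v2')

question = "Who released ALBERT?"
context = "ALBERT was released by Google Research."
inputs = tokenizer(question, context, return_tensors='pt')
outputs = model(**inputs)

# Start/end logits over the input tokens; only meaningful after fine-tuning.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits)
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```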
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-base-v2')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"▁modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"▁modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"▁model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"▁runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"▁lingerie"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertModel.from_pretrained("albert-base-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = TFAlbertModel.from_pretrained("albert-base-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-base-v2')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"▁chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"▁janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"▁shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"▁blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"▁lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"▁receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"▁janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"▁paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"▁chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"▁waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
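
A simplified sketch of this masking scheme in code (it ignores special tokens and padding, which a real implementation would exclude from masking; `mask_tokens` is a hypothetical helper, not a library function):

```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Apply the 80/10/10 masking described above to a batch of token ids."""
    labels = input_ids.clone()
    # Select 15% of positions to predict; all other positions are ignored in the loss.
    selected = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~selected] = -100

    # 80% of the selected positions become [MASK].
    masked = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & selected
    input_ids[masked] = mask_token_id

    # Half of the remaining 20% (i.e. 10% overall) become a random token.
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & selected & ~masked
    input_ids[randomized] = torch.randint(vocab_size, labels.shape)[randomized]

    # The final 10% are left unchanged.
    return input_ids, labels

# Hypothetical usage:
# ids = tokenizer("Replace me by any text you'd like.", return_tensors='pt')["input_ids"]
# masked_ids, labels = mask_tokens(ids.clone(), tokenizer.mask_token_id, tokenizer.vocab_size)
```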
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| Model          | Average | SQuAD1.1  | SQuAD2.0  | MNLI | SST-2 | RACE |
|----------------|---------|-----------|-----------|------|-------|------|
| V2             |         |           |           |      |       |      |
| ALBERT-base    | 82.3    | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9  | 66.8 |
| ALBERT-large   | 85.7    | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9  | 75.2 |
| ALBERT-xlarge  | 87.9    | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4  | 80.7 |
| ALBERT-xxlarge | 90.9    | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8  | 86.8 |
| V1             |         |           |           |      |       |      |
| ALBERT-base    | 80.1    | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3  | 64.0 |
| ALBERT-large   | 82.4    | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7  | 68.5 |
| ALBERT-xlarge  | 85.5    | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4  | 74.8 |
| ALBERT-xxlarge | 91.0    | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9  | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | metadata: {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | pipeline_tag: fill-mask | id: albert/albert-base-v2 | tags: [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | created_at: 2022-03-02T23:29:04+00:00 | arxiv: [
"1909.11942"
] | languages: [
"en"
] | (tags_str: duplicate of the tags above) | (text_str, text_lists, processed_texts: plain-text duplicates of the model card above) | tokens_length: [80, 49, 102, 42, 135, 11] | (input_texts: duplicate passage of the model card above) | (embeddings: 768-dimensional float vector, truncated in the source)
0.1560172438621521,
0.04540332779288292,
0.10344091802835464,
-0.08102048933506012,
-0.018930377438664436,
0.0004074648895766586,
-0.008611445315182209,
-0.04436634108424187,
0.051728006452322006,
0.013559992425143719,
-0.11630281805992126,
0.05803702399134636,
-0.04299497604370117,
0.009453453123569489,
0.13715922832489014,
0.03246365487575531,
-0.09605429321527481,
0.010334000922739506,
0.005361305549740791,
0.060976579785346985,
0.08870366215705872,
-0.08151968568563461,
0.006335836835205555,
0.05258937552571297,
-0.013483463786542416,
0.0181618370115757,
-0.1023845449090004,
0.05960524454712868,
0.047908008098602295,
-0.049688734114170074,
-0.04193876311182976,
-0.06749950349330902,
0.0059499591588974,
0.11993761360645294,
0.02190317027270794,
0.027498194947838783,
-0.025374136865139008,
-0.05540440231561661,
-0.11612538993358612,
0.1649337261915207,
-0.08579718321561813,
-0.2384132593870163,
-0.18396544456481934,
-0.008320980705320835,
-0.00010432473936816677,
0.037413910031318665,
-0.003882280783727765,
-0.03767438605427742,
-0.07821223884820938,
-0.11609597504138947,
0.0440344512462616,
0.03376735374331474,
-0.032862450927495956,
-0.024695977568626404,
-0.01328793354332447,
0.016752369701862335,
-0.12551778554916382,
-0.017650263383984566,
0.001325317658483982,
-0.0706927478313446,
0.01925577037036419,
-0.024098984897136688,
0.07385404407978058,
0.14011868834495544,
-0.023018255829811096,
-0.022743593901395798,
-0.04562962427735329,
0.10618871450424194,
-0.08310041576623917,
0.10168968886137009,
0.0227205790579319,
-0.08781067281961441,
0.06577102839946747,
0.12783966958522797,
0.005972014274448156,
-0.06483586877584457,
0.04353654757142067,
0.05317901074886322,
-0.0404520183801651,
-0.2565879821777344,
-0.030400607734918594,
-0.06997120380401611,
-0.01994449831545353,
0.1306944638490677,
0.030900239944458008,
0.0408596508204937,
0.019293786957859993,
-0.11206378042697906,
0.02735678292810917,
0.0939096063375473,
0.08177809417247772,
-0.09806794673204422,
-0.0015534772537648678,
0.08418218046426773,
-0.03480110689997673,
-0.0048156785778701305,
0.07372821122407913,
-0.03629087656736374,
0.20833970606327057,
-0.054676201194524765,
0.19677790999412537,
0.1085624173283577,
0.0023889103904366493,
0.02890644036233425,
0.1800643503665924,
-0.037587400525808334,
0.03580458089709282,
-0.02988017536699772,
-0.07899163663387299,
-0.06360991299152374,
0.03698522597551346,
-0.009360737167298794,
0.04136291891336441,
-0.06359434127807617,
-0.06072206050157547,
0.012346032075583935,
0.29143086075782776,
0.06783121079206467,
-0.16162659227848053,
-0.08276039361953735,
0.032843638211488724,
-0.04078557342290878,
-0.08281256258487701,
-0.01591482199728489,
0.06437599658966064,
-0.1325078159570694,
0.0539938248693943,
-0.06243344768881798,
0.09595087170600891,
-0.09618287533521652,
-0.019390402361750603,
-0.08458630740642548,
0.07721336930990219,
-0.05434669181704521,
0.06610174477100372,
-0.23924855887889862,
0.19908243417739868,
0.015783516690135002,
0.11673237383365631,
-0.10311195999383926,
0.021052883937954903,
0.05668316036462784,
-0.013600663281977177,
0.1692766398191452,
-0.014369887299835682,
0.015857037156820297,
-0.09373794496059418,
-0.07494767010211945,
0.005463492125272751,
0.05471475049853325,
-0.04001633822917938,
0.09918954223394394,
0.00851359497755766,
-0.010422947816550732,
-0.010425890795886517,
0.009138698689639568,
-0.11942368000745773,
-0.14815358817577362,
0.028641242533922195,
-0.1021457388997078,
-0.000518577522598207,
-0.059522852301597595,
-0.08041670173406601,
0.031369488686323166,
0.18332722783088684,
-0.18697236478328705,
-0.0870252177119255,
-0.11612845212221146,
0.0009427975164726377,
0.10487669706344604,
-0.08778450638055801,
0.025558024644851685,
-0.00443367101252079,
0.20351137220859528,
-0.057490136474370956,
-0.05539412796497345,
0.03708653151988983,
-0.07980187982320786,
-0.14400246739387512,
-0.06275784224271774,
0.13241508603096008,
0.15077030658721924,
0.0854620561003685,
0.022358713671565056,
0.027905166149139404,
0.07465308159589767,
-0.08004941791296005,
-0.019278625026345253,
0.11839063465595245,
0.14687418937683105,
0.10563888400793076,
-0.11336728185415268,
-0.09799381345510483,
-0.11498218774795532,
0.021282818168401718,
0.06528329104185104,
0.21846377849578857,
-0.04900497570633888,
0.13035067915916443,
0.21079221367835999,
-0.1144995465874672,
-0.19988983869552612,
0.010686660185456276,
0.053986743092536926,
0.0882355198264122,
0.06724447011947632,
-0.20457033812999725,
0.02415788359940052,
0.0606384202837944,
-0.008578797802329063,
0.009378772228956223,
-0.14338649809360504,
-0.1303051859140396,
0.1360543668270111,
0.10389585793018341,
-0.06672126799821854,
-0.07526802271604538,
-0.021015914157032967,
-0.05885761231184006,
-0.11802764236927032,
0.1057247519493103,
-0.0070544518530368805,
0.09797045588493347,
0.01820535399019718,
-0.0646916851401329,
0.031833089888095856,
-0.06529070436954498,
0.11481382697820663,
0.008380039595067501,
0.06798720359802246,
-0.055189263075590134,
-0.062363266944885254,
0.08468800038099289,
-0.055735521018505096,
0.13747256994247437,
0.018590176478028297,
0.01810639537870884,
-0.05226053670048714,
-0.07032202929258347,
-0.06883320212364197,
0.012825551442801952,
-0.07361283898353577,
-0.06857822090387344,
-0.047807104885578156,
0.10706589370965958,
0.08307591825723648,
-0.010625003837049007,
-0.0009167797397822142,
-0.050161559134721756,
0.0632874146103859,
0.12703923881053925,
0.1494339108467102,
0.008325870148837566,
-0.05218067765235901,
0.007281046360731125,
-0.0016258173855021596,
0.05191807448863983,
-0.07684588432312012,
0.07461564242839813,
0.0794014111161232,
0.04838152602314949,
0.19315281510353088,
0.02325701154768467,
-0.142495259642601,
-0.017066359519958496,
0.02643107995390892,
-0.10995277762413025,
-0.16471132636070251,
0.028177985921502113,
-0.030238954350352287,
-0.1637333780527115,
-0.03671637549996376,
0.05035709962248802,
-0.04396576061844826,
-0.02552691660821438,
0.01383124478161335,
0.07409906387329102,
-0.010468129999935627,
0.18425802886486053,
0.042827121913433075,
0.0589858703315258,
-0.06885954737663269,
0.07728389650583267,
0.09694231301546097,
-0.07210976630449295,
0.030471637845039368,
0.04323302581906319,
-0.0687507838010788,
-0.004646081943064928,
0.003717638086527586,
0.08169835805892944,
0.1445920616388321,
-0.02818714641034603,
-0.04621712863445282,
-0.063943050801754,
0.03872688487172127,
0.04550402984023094,
0.009638858027756214,
0.06649274379014969,
-0.050124555826187134,
0.014926978386938572,
-0.1001020297408104,
0.07163011282682419,
0.09454111009836197,
0.03948003798723221,
0.032464995980262756,
0.15592126548290253,
0.026405051350593567,
0.014263301156461239,
-0.014661112800240517,
-0.04412928223609924,
-0.0744934231042862,
0.008652379736304283,
-0.0755857527256012,
0.04396164044737816,
-0.11816049367189407,
-0.04181394353508949,
-0.022633889690041542,
0.016136672347784042,
0.02114465832710266,
0.025749430060386658,
-0.03717684745788574,
-0.026322772726416588,
-0.04239288717508316,
0.03539055213332176,
-0.13745488226413727,
0.00009415570821147412,
0.07724011689424515,
-0.09445565193891525,
0.07346640527248383,
-0.022977584972977638,
-0.017255660146474838,
0.01690569333732128,
-0.14692005515098572,
-0.0024433699436485767,
-0.010271651670336723,
-0.00919137429445982,
0.013355801813304424,
-0.15198327600955963,
0.01097149308770895,
-0.03136955574154854,
-0.03188063204288483,
-0.023095805197954178,
0.05142655223608017,
-0.08776117116212845,
0.07206930965185165,
0.027395330369472504,
-0.021341875195503235,
-0.04676660895347595,
0.12662972509860992,
0.090763159096241,
-0.015335298143327236,
0.13453204929828644,
-0.035816751420497894,
0.056697215884923935,
-0.14741191267967224,
-0.009868375957012177,
-0.01536822970956564,
0.00032927413121797144,
0.06397227942943573,
-0.05214161425828934,
0.041056353598833084,
-0.011996676214039326,
0.10232064127922058,
0.022887295112013817,
-0.042668428272008896,
0.03601415455341339,
-0.05628829449415207,
-0.005597742740064859,
0.018106037750840187,
0.061333686113357544,
-0.04421855881810188,
-0.07506196200847626,
0.005433985963463783,
0.0043972814455628395,
-0.0053029488772153854,
0.11491648852825165,
0.24855327606201172,
0.12237918376922607,
0.07464651763439178,
-0.0007398634916171432,
0.006823705974966288,
-0.0313095860183239,
-0.11998309940099716,
-0.041562438011169434,
0.067433200776577,
0.028999576345086098,
0.005731928627938032,
0.08966781944036484,
0.13738232851028442,
-0.17443403601646423,
0.15219613909721375,
0.02585243247449398,
-0.09238885343074799,
-0.08271698653697968,
-0.22274832427501678,
-0.015188462100923061,
0.09854485839605331,
-0.029615070670843124,
-0.11608943343162537,
0.017063861712813377,
0.10244596004486084,
0.02982226572930813,
-0.017690978944301605,
0.1366339921951294,
-0.056998349726200104,
-0.07269489020109177,
0.08182086795568466,
0.03578753396868706,
-0.0053019775077700615,
-0.00435176445171237,
0.004765606950968504,
0.035677630454301834,
0.032474320381879807,
0.08922262489795685,
0.05608062446117401,
0.042345207184553146,
0.0033381935209035873,
-0.0070412675850093365,
-0.08297869563102722,
0.02924809418618679,
-0.030349144712090492,
0.07948771864175797,
0.2061949372291565,
0.04307668283581734,
-0.03591972216963768,
0.0013099947245791554,
0.11623276770114899,
-0.026352612301707268,
-0.06312166899442673,
-0.1397826224565506,
0.15697351098060608,
0.04320903494954109,
-0.0015693538589403033,
0.05613675341010094,
-0.1104857474565506,
0.0428302101790905,
0.1931891292333603,
0.16164307296276093,
0.020999040454626083,
0.016565801575779915,
-0.00290546752512455,
0.012415917590260506,
0.018711784854531288,
0.13206158578395844,
-0.00917553249746561,
0.2130434364080429,
-0.026819052174687386,
0.11553585529327393,
-0.020761124789714813,
-0.06504759937524796,
-0.025966892018914223,
0.07952176034450531,
0.008362319320440292,
-0.012257283553481102,
-0.06245846673846245,
0.05680794268846512,
-0.006090587470680475,
-0.32450902462005615,
0.013147820718586445,
-0.0469900481402874,
-0.12940512597560883,
-0.01612616889178753,
-0.06822452694177628,
0.04724787548184395,
0.0597088523209095,
0.04066575691103935,
0.014506194740533829,
0.1989477276802063,
0.03292596712708473,
-0.025944065302610397,
-0.07659246772527695,
0.07512518763542175,
-0.02567293494939804,
0.23261000216007233,
0.022044239565730095,
0.039931681007146835,
0.06825452297925949,
0.006806537043303251,
-0.08778049796819687,
0.02106558158993721,
0.002947683446109295,
0.0444868803024292,
0.03330504521727562,
0.1754404753446579,
-0.027475234121084213,
-0.06518494337797165,
0.013502541929483414,
-0.0766221284866333,
0.05746171250939369,
-0.11209157109260559,
-0.06499194353818893,
-0.09794683754444122,
0.10461920499801636,
-0.06547245383262634,
0.11880125105381012,
0.19397591054439545,
-0.006027694325894117,
0.014478112570941448,
-0.05847742408514023,
0.000721202464774251,
0.018852369859814644,
0.1283992975950241,
-0.015089712105691433,
-0.17722629010677338,
-0.004518432542681694,
-0.04975723475217819,
0.021315859630703926,
-0.28892797231674194,
-0.046813417226076126,
0.02410595491528511,
-0.0807751789689064,
-0.03406420722603798,
0.06370332837104797,
0.013084083795547485,
0.07087649405002594,
-0.03351610153913498,
-0.0419497974216938,
0.0009544159402139485,
0.10139472037553787,
-0.1352558583021164,
-0.028292352333664894
] |
null | null | transformers |
# ALBERT Large v1
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
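As a rough illustration, such a classifier could be trained on frozen ALBERT features along these lines (scikit-learn and the toy sentences here are assumptions made for the sketch, not part of the original recipe):

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AlbertTokenizer, AlbertModel

tokenizer = AlbertTokenizer.from_pretrained("albert-large-v1")
model = AlbertModel.from_pretrained("albert-large-v1")
model.eval()

# Toy labeled sentences, purely for illustration
sentences = ["I loved this film.", "This film was terrible."]
labels = [1, 0]

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, return_tensors="pt")
    # pooler_output is one fixed-size vector per sentence, derived from the [CLS] position
    features = model(**encoded).pooler_output

classifier = LogisticRegression(max_iter=1000).fit(features.numpy(), labels)
```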
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
This is the first version of the large model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 24 repeating layers
- 128 embedding dimension
- 1024 hidden dimension
- 16 attention heads
- 17M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence ordering prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
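A hedged sketch of that fine-tuning workflow is shown below; the dataset, hyperparameters and output directory are placeholders chosen for illustration rather than recommended settings:

```python
from datasets import load_dataset
from transformers import (AlbertForSequenceClassification, AlbertTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AlbertTokenizer.from_pretrained("albert-large-v1")
# SST-2 is only an example of a sentence-level classification dataset
dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

# The classification head on top of ALBERT is freshly initialized and learned during fine-tuning
model = AlbertForSequenceClassification.from_pretrained("albert-large-v1", num_labels=2)

args = TrainingArguments(output_dir="albert-large-v1-sst2",
                         per_device_train_batch_size=16,
                         num_train_epochs=3)
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"], eval_dataset=encoded["validation"])
trainer.train()
```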
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"â–modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"â–modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"â–model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"â–runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"â–lingerie"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1')
model = AlbertModel.from_pretrained("albert-large-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1')
model = TFAlbertModel.from_pretrained("albert-large-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v1')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"â–shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"â–blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"â–lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"â–receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"â–paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"â–waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
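Encoding a sentence pair with the tokenizer should reproduce this layout, which can be checked quickly (the decoded string is lowercased because the model is uncased):

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-large-v1")
encoded = tokenizer("Sentence A", "Sentence B")

# Roughly: "[CLS] sentence a [SEP] sentence b [SEP]"
print(tokenizer.decode(encoded["input_ids"]))
```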
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following (a short illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
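This 80%/10%/10% replacement scheme is also what the 🤗 Transformers masked-language-modeling data collator applies, so a rough sketch of how masked inputs can be produced looks like the following (this is illustrative, not the original ALBERT training code):

```python
from transformers import AlbertTokenizer, DataCollatorForLanguageModeling

tokenizer = AlbertTokenizer.from_pretrained("albert-large-v1")

# mlm_probability=0.15 selects 15% of the tokens; of those, 80% become [MASK],
# 10% become a random token and 10% are left unchanged (handled by the collator)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

batch = collator([tokenizer("The quick brown fox jumps over the lazy dog.")])
print(batch["input_ids"])  # some positions replaced by the [MASK] token id
print(batch["labels"])     # original ids at masked positions, -100 elsewhere
```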
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 | | | | | | |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 | | | | | | |
|ALBERT-base |80.1 |89.3/82.3 |80.0/77.1 |81.6 |90.3 |64.0 |
|ALBERT-large |82.4 |90.6/83.9 |82.3/79.4 |83.5 |91.7 |68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 |86.1/83.1 |86.4 |92.4 |74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 |90.2/87.4 |90.8 |96.9 |86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | fill-mask | albert/albert-large-v1 | [
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"1909.11942"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ALBERT Large v1
===============
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version of the large model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
* 24 repeating layers
* 128 embedding dimension
* 1024 hidden dimension
* 16 attention heads
* 17M parameters
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
Evaluation results
------------------
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
### BibTeX entry and citation info
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] | [
70,
49,
102,
42,
135,
11
] | [
"passage: TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:### BibTeX entry and citation info"
] | [
-0.05667823925614357,
0.08347521722316742,
-0.003313080407679081,
0.07908394187688828,
0.05151635780930519,
0.008104290813207626,
0.08640990406274796,
0.05556471273303032,
-0.08198466897010803,
0.06249178946018219,
0.04316376522183418,
0.04599187523126602,
0.11425124853849411,
0.14828482270240784,
0.05441417172551155,
-0.31177178025245667,
0.04608045145869255,
-0.020020676776766777,
0.03857671841979027,
0.10612472891807556,
0.10209345072507858,
-0.10734426230192184,
0.021067384630441666,
0.013857576064765453,
-0.048613615334033966,
-0.028752971440553665,
0.005298438016325235,
-0.04515842720866203,
0.07352491468191147,
0.04612286388874054,
0.0922040119767189,
0.03058130480349064,
0.053169362246990204,
-0.14963218569755554,
0.023935038596391678,
0.07624046504497528,
0.013490135781466961,
0.08375383168458939,
0.07991432398557663,
0.03118901699781418,
0.10077029466629028,
-0.07753490656614304,
0.06471040099859238,
0.050635404884815216,
-0.10193444788455963,
-0.0981023982167244,
-0.06296634674072266,
0.0944828987121582,
0.09361127018928528,
0.05350752919912338,
-0.03670292720198631,
0.09505821019411087,
-0.06486940383911133,
0.08291499316692352,
0.21466390788555145,
-0.2051638513803482,
-0.013802370056509972,
-0.01321291271597147,
-0.03628116101026535,
-0.0007366263889707625,
-0.10266046226024628,
-0.059840600937604904,
0.025059090927243233,
0.0355818085372448,
0.10717219114303589,
-0.004180503077805042,
0.029566893354058266,
-0.08144653588533401,
-0.1283537745475769,
-0.09349779784679413,
0.05898304656147957,
-0.0003399956913199276,
-0.11279486864805222,
-0.15138360857963562,
-0.02986915037035942,
-0.06170433387160301,
-0.008535300381481647,
0.007245121989399195,
0.018330972641706467,
-0.013427041471004486,
0.04931262880563736,
-0.02224128320813179,
-0.093492291867733,
-0.04333905130624771,
-0.043764155358076096,
0.13635140657424927,
0.04767867177724838,
-0.007216016296297312,
-0.045385513454675674,
0.12406949698925018,
-0.000851496821269393,
-0.13038374483585358,
-0.0327020138502121,
-0.07112676650285721,
-0.11813805997371674,
-0.02963237464427948,
-0.006708831992000341,
-0.10501854866743088,
-0.06797342002391815,
0.12423653155565262,
-0.07798149436712265,
0.05437630042433739,
-0.0784304291009903,
0.022871922701597214,
0.05841879919171333,
0.08743103593587875,
-0.14053331315517426,
0.02959899604320526,
0.015001756139099598,
0.01136468630284071,
0.04161394387483597,
-0.018219731748104095,
-0.00242281099781394,
-0.004762989468872547,
0.0568198598921299,
0.05835847929120064,
-0.001974194310605526,
0.08009760826826096,
-0.06254192441701889,
-0.07601025700569153,
0.10993676632642746,
-0.14706876873970032,
-0.03892352432012558,
-0.008053434081375599,
-0.046007756143808365,
-0.02635137178003788,
0.040004801005125046,
-0.057096224278211594,
-0.12311587482690811,
0.13907891511917114,
-0.08447765558958054,
-0.04885484278202057,
-0.07160034775733948,
-0.16134117543697357,
-0.0016862069023773074,
-0.04117263853549957,
-0.06486032158136368,
-0.04122001305222511,
-0.08696572482585907,
-0.016246184706687927,
0.043748077005147934,
-0.011056154035031796,
-0.01930731162428856,
-0.009410565719008446,
-0.04979243129491806,
-0.013574386946856976,
-0.005766225978732109,
0.09534800052642822,
-0.01692173257470131,
0.05832089111208916,
-0.06143723800778389,
0.07835053652524948,
0.08465936034917831,
0.031857237219810486,
-0.09501247107982635,
0.026698008179664612,
-0.2501128017902374,
0.08174491673707962,
-0.034621451050043106,
-0.03844170644879341,
-0.06883572041988373,
-0.09281810373067856,
-0.07061607390642166,
0.015432560816407204,
0.0387507863342762,
0.15244798362255096,
-0.2080317884683609,
-0.04226703569293022,
0.2904368042945862,
-0.1350552886724472,
0.01451521459966898,
0.13072185218334198,
-0.0713314339518547,
0.01751595363020897,
0.08790460228919983,
0.09417057037353516,
-0.014229902997612953,
-0.09511514008045197,
-0.028310652822256088,
-0.04999973624944687,
-0.028375893831253052,
0.1582123041152954,
0.04853496327996254,
-0.043219465762376785,
-0.06141131743788719,
0.008504019118845463,
-0.07097221165895462,
-0.024667538702487946,
-0.019781677052378654,
-0.0260261632502079,
0.038894202560186386,
-0.009833280928432941,
0.07844606786966324,
0.02107003703713417,
-0.048781998455524445,
-0.038394223898649216,
-0.12271296977996826,
-0.0346834734082222,
0.06951428949832916,
-0.08361560851335526,
0.029555145651102066,
-0.07604137063026428,
-0.022762438282370567,
-0.02005051262676716,
-0.006830669939517975,
-0.2090151607990265,
0.01866772770881653,
0.06276828795671463,
-0.09720693528652191,
0.07355233281850815,
0.02601015567779541,
0.03857722505927086,
0.06961709260940552,
-0.05167261138558388,
0.003973667975515127,
-0.013989304192364216,
-0.0239170603454113,
-0.0650244802236557,
-0.1560194045305252,
-0.05928272753953934,
-0.03453580662608147,
0.1000426635146141,
-0.07893800735473633,
0.0003954787098336965,
-0.009810146875679493,
0.055691394954919815,
0.0360041968524456,
-0.06607580929994583,
0.0716370940208435,
-0.003652713494375348,
-0.03977478668093681,
-0.05532069131731987,
0.0014715513680130243,
0.01717536151409149,
-0.009011531248688698,
0.045689478516578674,
-0.1977272629737854,
-0.13962651789188385,
0.07614663988351822,
0.037217702716588974,
-0.11761642247438431,
-0.019519798457622528,
-0.0673246681690216,
-0.011560874991118908,
-0.07344415783882141,
-0.050853319466114044,
0.20474660396575928,
0.04369620233774185,
0.10310535132884979,
-0.08568781614303589,
-0.022730233147740364,
0.01906990259885788,
0.015076739713549614,
-0.04695496708154678,
0.063705675303936,
0.011925446800887585,
-0.10095276683568954,
0.03809276595711708,
-0.058593690395355225,
0.03935368359088898,
0.1297035664319992,
0.03513204678893089,
-0.09990403801202774,
0.012732245028018951,
0.012076220475137234,
0.04599462449550629,
0.06994232535362244,
-0.10076364874839783,
0.0162173081189394,
0.06462064385414124,
0.002921542152762413,
0.015229906886816025,
-0.08446197956800461,
0.04047343507409096,
0.05036171153187752,
-0.041738905012607574,
-0.05374935641884804,
-0.058302946388721466,
0.0038186111487448215,
0.11634024232625961,
0.047963038086891174,
0.033262453973293304,
-0.019520509988069534,
-0.05178806930780411,
-0.11070885509252548,
0.18263134360313416,
-0.0822695791721344,
-0.26426514983177185,
-0.15253695845603943,
0.007433051709085703,
0.0012859946582466364,
0.02871333807706833,
0.00429975101724267,
-0.06206784024834633,
-0.0929885059595108,
-0.11483374983072281,
0.049619223922491074,
0.00912874098867178,
-0.017897751182317734,
-0.012415913864970207,
-0.03527582064270973,
0.019651276990771294,
-0.1340654045343399,
-0.014365095645189285,
-0.0037047553341835737,
-0.048376645892858505,
0.01831492781639099,
-0.022763196378946304,
0.08083142340183258,
0.1572645753622055,
-0.005248923320323229,
-0.006286324467509985,
-0.044124893844127655,
0.1688375324010849,
-0.07765830308198929,
0.0828627422451973,
0.061023078858852386,
-0.0812448188662529,
0.05813733860850334,
0.13448728621006012,
0.0022408589720726013,
-0.06498376280069351,
0.07227900624275208,
0.05173575133085251,
-0.049122538417577744,
-0.23325076699256897,
-0.042947884649038315,
-0.05951610580086708,
-0.008816288784146309,
0.13352344930171967,
0.03941218554973602,
0.02987593226134777,
0.02038085088133812,
-0.10872034728527069,
-0.010978702455759048,
0.05702363699674606,
0.0914691686630249,
-0.07334193587303162,
-0.003366179997101426,
0.07902194559574127,
-0.040312472730875015,
-0.03470559045672417,
0.0900016576051712,
-0.08550307154655457,
0.16805888712406158,
-0.07445131987333298,
0.20787298679351807,
0.10984235256910324,
0.037040457129478455,
0.027606183663010597,
0.16469165682792664,
-0.03948482125997543,
0.017898844555020332,
-0.03469155728816986,
-0.0857800841331482,
-0.062400780618190765,
0.018050827085971832,
0.0013524411479011178,
0.045462656766176224,
-0.08660019189119339,
-0.05919463559985161,
0.0024352793116122484,
0.2976292371749878,
0.06353507190942764,
-0.14988069236278534,
-0.0939420610666275,
0.011053246445953846,
-0.041541650891304016,
-0.07711537182331085,
-0.0004088141140528023,
0.09243154525756836,
-0.12606796622276306,
0.07271063327789307,
-0.04745033383369446,
0.08509478718042374,
-0.0954873189330101,
0.007785628084093332,
-0.06523606926202774,
0.0842304453253746,
-0.046361926943063736,
0.0830635279417038,
-0.253038227558136,
0.19317828118801117,
0.01563459075987339,
0.09170185029506683,
-0.1309133619070053,
-0.002452813321724534,
0.04427962377667427,
-0.03750985860824585,
0.18015088140964508,
-0.003877912648022175,
0.04300474748015404,
-0.10345496237277985,
-0.08364515751600266,
0.0020102127455174923,
0.06945031136274338,
-0.009492085315287113,
0.08629167079925537,
0.01328736450523138,
0.0005944445147179067,
0.001187210320495069,
0.0430290587246418,
-0.1263040453195572,
-0.13884414732456207,
0.05146166682243347,
-0.10618241876363754,
-0.04570511728525162,
-0.05237279087305069,
-0.1014813780784607,
0.027868907898664474,
0.1934639811515808,
-0.16538095474243164,
-0.0818253755569458,
-0.11812935024499893,
0.01248067058622837,
0.1019139364361763,
-0.09206543117761612,
0.008371394127607346,
-0.01678580604493618,
0.21589694917201996,
-0.05467608571052551,
-0.07363250851631165,
0.05694317817687988,
-0.059534478932619095,
-0.1377968192100525,
-0.03661947697401047,
0.11678425222635269,
0.14798744022846222,
0.09414704889059067,
0.004088176414370537,
0.00991487130522728,
0.07618521898984909,
-0.1012568473815918,
-0.02106337994337082,
0.09210420399904251,
0.1326119601726532,
0.06389341503381729,
-0.08340202271938324,
-0.04792361333966255,
-0.12415732443332672,
0.041273605078458786,
0.10557615756988525,
0.21995483338832855,
-0.06595073640346527,
0.1163625717163086,
0.20739160478115082,
-0.08256522566080093,
-0.22083495557308197,
0.01736624166369438,
0.06286340951919556,
0.07047859579324722,
0.0741053894162178,
-0.20067650079727173,
0.030995609238743782,
0.02701081708073616,
0.0032025829423218966,
-0.026369761675596237,
-0.19320009648799896,
-0.12796363234519958,
0.13820211589336395,
0.11001389473676682,
-0.05147525295615196,
-0.062129825353622437,
-0.011600933037698269,
-0.04999583214521408,
-0.07451195269823074,
0.11870250105857849,
0.005457337014377117,
0.1159818097949028,
0.020777344703674316,
-0.06447480618953705,
0.04536519572138786,
-0.08057725429534912,
0.09950577467679977,
0.01979009434580803,
0.08036086708307266,
-0.09021344780921936,
-0.058731526136398315,
0.050767119973897934,
-0.04760504886507988,
0.15523336827754974,
0.06263952702283859,
0.033392395824193954,
-0.03588981553912163,
-0.07827439904212952,
-0.08530477434396744,
0.0182789359241724,
-0.06373731046915054,
-0.08728771656751633,
-0.0578780435025692,
0.11682438850402832,
0.09273747354745865,
-0.016404714435338974,
-0.011271093972027302,
-0.050596628338098526,
0.049494609236717224,
0.13740551471710205,
0.17039746046066284,
0.012894685380160809,
-0.042476531118154526,
0.008607233874499798,
0.00912337563931942,
0.03622472658753395,
-0.06210195645689964,
0.04750311002135277,
0.09159965813159943,
0.04897494986653328,
0.15597683191299438,
0.02473508007824421,
-0.15715241432189941,
-0.037176936864852905,
0.022917727008461952,
-0.12391979992389679,
-0.1596481055021286,
0.012010429054498672,
-0.006027578841894865,
-0.16119419038295746,
-0.07623802870512009,
0.038525909185409546,
-0.04137267544865608,
-0.010594915598630905,
0.01635538972914219,
0.05054296553134918,
0.0000877302372828126,
0.17991863191127777,
0.058053694665431976,
0.05322244018316269,
-0.07936372607946396,
0.08017883449792862,
0.08898180723190308,
-0.07273633778095245,
0.04750584438443184,
0.04731735214591026,
-0.08907943218946457,
-0.0012046991614624858,
0.01849278435111046,
0.06896716356277466,
0.127924382686615,
-0.011055530048906803,
-0.04717289283871651,
-0.06441999226808548,
0.04929376021027565,
0.075383260846138,
0.018193135038018227,
0.06095937639474869,
-0.062098413705825806,
0.008932748809456825,
-0.09922350198030472,
0.0611039474606514,
0.061913542449474335,
0.0321260429918766,
0.021719258278608322,
0.19934354722499847,
0.03150826320052147,
0.052169397473335266,
-0.0240510031580925,
-0.06187664717435837,
-0.07155779004096985,
0.0007776763522997499,
-0.0611659437417984,
0.0328160859644413,
-0.11966895312070847,
-0.030834127217531204,
-0.03962133824825287,
0.014205262996256351,
0.011415091343224049,
0.01491532102227211,
-0.042359430342912674,
-0.023794785141944885,
-0.026975665241479874,
0.012238048948347569,
-0.11259134858846664,
0.00390667375177145,
0.060415346175432205,
-0.07673684507608414,
0.0876753032207489,
-0.030737632885575294,
-0.040002260357141495,
0.01808977499604225,
-0.15080921351909637,
-0.01838136464357376,
-0.011896475218236446,
0.00764623424038291,
0.008522829040884972,
-0.13765951991081238,
0.007381847128272057,
-0.03561735153198242,
-0.0452297106385231,
-0.01561965886503458,
0.0775267630815506,
-0.10113540291786194,
0.0658981129527092,
0.047999173402786255,
-0.03114907070994377,
-0.059003572911024094,
0.11916480213403702,
0.08193787187337875,
-0.033657923340797424,
0.13029588758945465,
-0.037023965269327164,
0.05084704980254173,
-0.14494699239730835,
-0.02446024678647518,
-0.011465989984571934,
0.003847826039418578,
0.05493234843015671,
-0.05752883851528168,
0.05423945561051369,
-0.006737331859767437,
0.10478915274143219,
0.007828611880540848,
-0.056549347937107086,
0.03820616751909256,
-0.031194277107715607,
0.01837231032550335,
-0.006177471950650215,
0.0734746977686882,
-0.05039985850453377,
-0.07159332185983658,
0.0367039293050766,
0.02604474499821663,
-0.006753252353519201,
0.13809646666049957,
0.26738592982292175,
0.1145094782114029,
0.07677400857210159,
-0.02144543081521988,
0.0020775371231138706,
-0.03380352631211281,
-0.08960619568824768,
-0.02220507338643074,
0.06603047996759415,
0.04673304408788681,
0.00007899408228695393,
0.08134745806455612,
0.11444655060768127,
-0.1449069231748581,
0.14770668745040894,
0.019405340775847435,
-0.09224553406238556,
-0.09642909467220306,
-0.2504918575286865,
-0.0008002162794582546,
0.08370407670736313,
-0.029821857810020447,
-0.12721925973892212,
0.013510218821465969,
0.11181192845106125,
0.047729164361953735,
-0.03452780097723007,
0.1611899733543396,
-0.08229434490203857,
-0.0861179381608963,
0.0835375040769577,
0.017852840945124626,
-0.002544101094827056,
-0.007696071173995733,
0.008471843786537647,
0.047405682504177094,
0.039373598992824554,
0.0877314880490303,
0.0447838231921196,
0.04830709844827652,
-0.0016436115838587284,
-0.013657051138579845,
-0.08863828331232071,
0.029517358168959618,
-0.02684410847723484,
0.10059095174074173,
0.24101650714874268,
0.04012444242835045,
-0.036335866898298264,
-0.00960989948362112,
0.12304265052080154,
-0.021471437066793442,
-0.0690193623304367,
-0.12830086052417755,
0.16738809645175934,
0.04167437553405762,
-0.005485293921083212,
0.0556318573653698,
-0.10764802992343903,
0.032646432518959045,
0.23761866986751556,
0.14182783663272858,
0.0011957709211856127,
0.012096733786165714,
0.008423762395977974,
0.014524565078318119,
0.027076750993728638,
0.1612141728401184,
-0.005447881296277046,
0.2128559798002243,
-0.03626790642738342,
0.1060524731874466,
-0.037359680980443954,
-0.04406345635652542,
-0.035172171890735626,
0.09453263878822327,
0.028232000768184662,
-0.009198799729347229,
-0.07897105067968369,
0.06607218086719513,
-0.021750882267951965,
-0.260026752948761,
-0.028478579595685005,
-0.056979112327098846,
-0.12965057790279388,
-0.011414061300456524,
-0.07500343769788742,
0.0667838305234909,
0.07483363151550293,
0.02779369242489338,
0.03505672886967659,
0.16206762194633484,
0.043138593435287476,
-0.03953241556882858,
-0.0860275849699974,
0.08366993814706802,
-0.03308018669486046,
0.20990166068077087,
0.023141775280237198,
0.03969908878207207,
0.09320905804634094,
0.0155209144577384,
-0.09686201065778732,
0.033003516495227814,
0.004970548674464226,
0.03956272453069687,
0.02509421668946743,
0.18521223962306976,
-0.02411201037466526,
-0.05288241431117058,
0.01025550626218319,
-0.1039607897400856,
0.057859405875205994,
-0.10493405908346176,
-0.03247814252972603,
-0.1193951740860939,
0.11030786484479904,
-0.0604960173368454,
0.1304416060447693,
0.19824650883674622,
-0.006808329839259386,
-0.005078800022602081,
-0.08136770129203796,
-0.020754951983690262,
-0.009727342054247856,
0.10144088417291641,
-0.022731496021151543,
-0.18574245274066925,
0.006502075120806694,
-0.05712493881583214,
0.025840619578957558,
-0.2963947355747223,
-0.042784433811903,
0.03296564891934395,
-0.08048302680253983,
-0.021068980917334557,
0.06506511569023132,
-0.03673930838704109,
0.0753861516714096,
-0.03602520003914833,
0.0423402301967144,
-0.004416536074131727,
0.10059640556573868,
-0.14944693446159363,
-0.04231806471943855
] |
null | null | transformers |
# ALBERT Large v2
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
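A quick, informal way to see the effect of this weight sharing is to count the parameters of the loaded model; the total stays close to the 17M quoted below even though the network is 24 layers deep (the exact number printed may differ slightly):

```python
from transformers import AlbertModel

model = AlbertModel.from_pretrained("albert-large-v2")
# All 24 transformer layers reuse the same weights, so the count stays close to 17M
print(sum(p.numel() for p in model.parameters()))
```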
This is the second version of the large model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 24 repeating layers
- 128 embedding dimension
- 1024 hidden dimension
- 16 attention heads
- 17M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence ordering prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v2')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"â–modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"â–modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"â–model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"â–runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"â–lingerie"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v2')
model = AlbertModel.from_pretrained("albert-large-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v2')
model = TFAlbertModel.from_pretrained("albert-large-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v2')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"â–shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"â–blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"â–lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"â–receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"â–paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"â–waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
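As a minimal sketch (the two example sentences below are placeholders, not from the training data), this is how the tokenizer produces that layout for a sentence pair:
```python
from transformers import AlbertTokenizer

# Tokenizer matching the pretraining setup: lowercasing SentencePiece with a 30,000-piece vocabulary.
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v2')

# Encoding a pair of sentences produces the [CLS] A [SEP] B [SEP] layout shown above.
encoded = tokenizer("Sentence A goes here.", "Sentence B goes here.")
print(tokenizer.decode(encoded["input_ids"]))
# -> "[CLS] sentence a goes here.[SEP] sentence b goes here.[SEP]" (approximately)
```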
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
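As an illustrative sketch (not the original pretraining code), the same 80%/10%/10% replacement policy is what the `DataCollatorForLanguageModeling` helper in Transformers applies when building MLM batches:
```python
from transformers import AlbertTokenizerFast, DataCollatorForLanguageModeling

tokenizer = AlbertTokenizerFast.from_pretrained('albert-large-v2')

# The collator selects 15% of tokens; of those, 80% become [MASK], 10% become a
# random token and 10% are left unchanged, matching the rule described above.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

batch = collator([tokenizer("Replace me by any text you'd like.")])
print(batch["input_ids"])  # some positions replaced by the [MASK] id
print(batch["labels"])     # original ids at masked positions, -100 everywhere else
```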
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
|                | Average  | SQuAD1.1  | SQuAD2.0  | MNLI     | SST-2    | RACE     |
|----------------|----------|-----------|-----------|----------|----------|----------|
| V2             |          |           |           |          |          |          |
| ALBERT-base    | 82.3     | 90.2/83.2 | 82.1/79.3 | 84.6     | 92.9     | 66.8     |
| ALBERT-large   | 85.7     | 91.8/85.2 | 84.9/81.8 | 86.5     | 94.9     | 75.2     |
| ALBERT-xlarge  | 87.9     | 92.9/86.4 | 87.9/84.1 | 87.9     | 95.4     | 80.7     |
| ALBERT-xxlarge | 90.9     | 94.6/89.1 | 89.8/86.9 | 90.6     | 96.8     | 86.8     |
| V1             |          |           |           |          |          |          |
| ALBERT-base    | 80.1     | 89.3/82.3 | 80.0/77.1 | 81.6     | 90.3     | 64.0     |
| ALBERT-large   | 82.4     | 90.6/83.9 | 82.3/79.4 | 83.5     | 91.7     | 68.5     |
| ALBERT-xlarge  | 85.5     | 92.5/86.1 | 86.1/83.1 | 86.4     | 92.4     | 74.8     |
| ALBERT-xxlarge | 91.0     | 94.8/89.3 | 90.2/87.4 | 90.8     | 96.9     | 86.5     |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | fill-mask | albert/albert-large-v2 | [
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"1909.11942"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ALBERT Large v2
===============
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the second version of the large model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
* 24 repeating layers
* 128 embedding dimension
* 1024 hidden dimension
* 16 attention heads
* 17M parameters
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
Evaluation results
------------------
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
### BibTeX entry and citation info
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] | [
75,
49,
102,
42,
135,
11
] | [
"passage: TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:### BibTeX entry and citation info"
] | [
-0.04551113396883011,
0.0802619680762291,
-0.004052036441862583,
0.07860441505908966,
0.04896535351872444,
0.009174984879791737,
0.08155245333909988,
0.04193923622369766,
-0.08074910938739777,
0.07420488446950912,
0.03317805007100105,
0.04168064892292023,
0.1137911006808281,
0.12865380942821503,
0.03562881052494049,
-0.3194960355758667,
0.04676777496933937,
-0.015500158071517944,
0.035448186099529266,
0.09928426146507263,
0.10145502537488937,
-0.10330747812986374,
0.013665885664522648,
-0.0071895238943398,
-0.03393346071243286,
-0.034216634929180145,
-0.0018901361618191004,
-0.042468078434467316,
0.07356763631105423,
0.05284520238637924,
0.1046295166015625,
0.0404527373611927,
0.05444014444947243,
-0.17392070591449738,
0.02301005832850933,
0.08772853016853333,
0.006956661585718393,
0.08186767250299454,
0.09368777275085449,
0.02413523942232132,
0.11847671866416931,
-0.0612131766974926,
0.06962253898382187,
0.04527520388364792,
-0.11396395415067673,
-0.09088940918445587,
-0.05937955155968666,
0.10509873926639557,
0.11327197402715683,
0.026418432593345642,
-0.0334581583738327,
0.06045398488640785,
-0.05818641185760498,
0.09398715943098068,
0.22296026349067688,
-0.19845816493034363,
-0.01989910751581192,
-0.012354457750916481,
-0.0070524439215660095,
0.016607169061899185,
-0.09925274550914764,
-0.05664094164967537,
0.023971661925315857,
0.030421694740653038,
0.13375677168369293,
-0.012973122298717499,
-0.003083419054746628,
-0.07122260332107544,
-0.1373298168182373,
-0.08675924688577652,
0.0611911304295063,
-0.0010342838941141963,
-0.1053277924656868,
-0.1360902041196823,
-0.03571191430091858,
-0.0576082207262516,
-0.01811579056084156,
0.007686743978410959,
0.021222004666924477,
-0.01571779139339924,
0.0731607973575592,
-0.030072787776589394,
-0.09501220285892487,
-0.054114360362291336,
-0.029698846861720085,
0.09900537878274918,
0.04432804882526398,
-0.004976477473974228,
-0.03827338665723801,
0.12907736003398895,
0.000012067925126757473,
-0.1198144406080246,
-0.03641133010387421,
-0.05129370838403702,
-0.12604333460330963,
-0.04294734448194504,
0.006550996098667383,
-0.08121354132890701,
-0.05359722301363945,
0.12144969403743744,
-0.07646219432353973,
0.052818041294813156,
-0.11000118404626846,
0.03786785528063774,
0.03273627907037735,
0.08413033932447433,
-0.09982180595397949,
0.023096779361367226,
0.012542684562504292,
0.04627145454287529,
0.030143965035676956,
0.00010260176350129768,
0.007899032905697823,
0.015061267651617527,
0.05133241042494774,
0.07356788218021393,
-0.006212256383150816,
0.07572850584983826,
-0.0733005627989769,
-0.04566649720072746,
0.117124542593956,
-0.1493610441684723,
-0.026935739442706108,
0.009401454590260983,
-0.04137833043932915,
-0.05018153786659241,
0.05190020799636841,
-0.05363049730658531,
-0.11976885795593262,
0.152625173330307,
-0.08122817426919937,
-0.03451291099190712,
-0.07300899922847748,
-0.1447543054819107,
-0.0026474616024643183,
-0.025739552453160286,
-0.07170360535383224,
-0.03028891794383526,
-0.11033158749341965,
-0.029098354279994965,
0.047077327966690063,
-0.019686510786414146,
-0.02027660608291626,
-0.03105141408741474,
-0.03543657064437866,
-0.026764025911688805,
-0.004180578049272299,
0.1106206402182579,
-0.024547459557652473,
0.06970713287591934,
-0.06840337067842484,
0.08289238810539246,
0.09605103731155396,
0.03087173029780388,
-0.09323615580797195,
0.03921728953719139,
-0.2114611268043518,
0.08727454394102097,
-0.049125902354717255,
-0.042977042496204376,
-0.08089441806077957,
-0.07186415046453476,
-0.06054092198610306,
0.03029811754822731,
0.020987944677472115,
0.16369278728961945,
-0.2119584083557129,
-0.041542358696460724,
0.32052451372146606,
-0.1367534101009369,
-0.002294054953381419,
0.12919074296951294,
-0.066414475440979,
0.03440329432487488,
0.0808856263756752,
0.08894262462854385,
-0.04917604476213455,
-0.09136492013931274,
-0.0030137980356812477,
-0.05897599831223488,
-0.0008087589521892369,
0.17744973301887512,
0.03839888423681259,
-0.052005935460329056,
-0.04630080237984657,
0.004275207407772541,
-0.05512862652540207,
-0.05079255625605583,
-0.018915673717856407,
-0.02526763267815113,
0.048335254192352295,
-0.017202435061335564,
0.03870829567313194,
0.013349601067602634,
-0.05048910155892372,
-0.03435882553458214,
-0.11198391765356064,
-0.02944340743124485,
0.06985864788293839,
-0.07916663587093353,
0.03236980736255646,
-0.051062460988759995,
-0.02647782489657402,
0.0013752580853179097,
0.0032333179842680693,
-0.19330114126205444,
0.0023915974888950586,
0.0667717233300209,
-0.06412503123283386,
0.07577772438526154,
0.018598997965455055,
0.026461223140358925,
0.08619476109743118,
-0.0623263344168663,
-0.012272304855287075,
0.008379009552299976,
-0.015970533713698387,
-0.07066856324672699,
-0.1653992235660553,
-0.057337600737810135,
-0.036461371928453445,
0.09765934944152832,
-0.1362580806016922,
0.010994399897754192,
0.007720418740063906,
0.06752710044384003,
0.05236116051673889,
-0.06864910572767258,
0.06248823180794716,
0.009777098894119263,
-0.04145250469446182,
-0.056500568985939026,
0.008342307060956955,
0.013000303879380226,
0.000765531265642494,
0.06627954542636871,
-0.20021773874759674,
-0.1504518985748291,
0.06118915230035782,
0.05632588639855385,
-0.13750065863132477,
-0.052416473627090454,
-0.06842749565839767,
-0.023510001599788666,
-0.08883948624134064,
-0.05367489159107208,
0.15213648974895477,
0.036988403648138046,
0.11421379446983337,
-0.08264148235321045,
-0.0222010500729084,
0.005905165337026119,
-0.00013674053479917347,
-0.04593511298298836,
0.0817798599600792,
0.030059203505516052,
-0.10633993148803711,
0.05813867971301079,
-0.07175382971763611,
0.012249952182173729,
0.12792889773845673,
0.01868418976664543,
-0.10436969250440598,
0.021510083228349686,
0.036857228726148605,
0.06437184661626816,
0.08481411635875702,
-0.0866444781422615,
0.009299608878791332,
0.05689283832907677,
-0.010078764520585537,
0.012822967022657394,
-0.1007397472858429,
0.05021527409553528,
0.04364950582385063,
-0.04251604527235031,
-0.02986682392656803,
-0.05711539834737778,
-0.004364591091871262,
0.1370524764060974,
0.020508522167801857,
0.010627740062773228,
-0.023448161780834198,
-0.052213042974472046,
-0.12249186635017395,
0.18323172628879547,
-0.07688234746456146,
-0.22708256542682648,
-0.1624678671360016,
0.00595437828451395,
0.018974656239151955,
0.03463640436530113,
0.01500330027192831,
-0.04002918303012848,
-0.0799270048737526,
-0.14147889614105225,
0.029127033427357674,
0.02808815985918045,
-0.020980792120099068,
-0.03103758953511715,
-0.01857895217835903,
0.014451546594500542,
-0.11679108440876007,
-0.01751423440873623,
-0.0029288313817232847,
-0.06392254680395126,
0.026546722277998924,
-0.03831832483410835,
0.08612576872110367,
0.1615145206451416,
0.0030666945967823267,
-0.0058347308076918125,
-0.056852810084819794,
0.1485251635313034,
-0.07510945200920105,
0.10289401561021805,
0.02833210676908493,
-0.07297860831022263,
0.06745003908872604,
0.14825105667114258,
0.008620106615126133,
-0.06054901331663132,
0.06958574801683426,
0.05279053747653961,
-0.04934817552566528,
-0.24355751276016235,
-0.04037445783615112,
-0.060672078281641006,
0.006786515936255455,
0.13164295256137848,
0.04232345521450043,
0.022176086902618408,
0.015615268610417843,
-0.11686506867408752,
-0.005577221978455782,
0.08297377824783325,
0.0904945507645607,
-0.09143506735563278,
-0.0037065630313009024,
0.08000586926937103,
-0.03611430898308754,
-0.023492878302931786,
0.0827535092830658,
-0.07355844229459763,
0.18493932485580444,
-0.078372061252594,
0.19803214073181152,
0.1025734692811966,
0.013377929106354713,
0.022710440680384636,
0.1603737771511078,
-0.04621545225381851,
0.029926994815468788,
-0.03562483191490173,
-0.0894179493188858,
-0.046838585287332535,
0.044352609664201736,
-0.0169331356883049,
0.0328717976808548,
-0.07372020184993744,
-0.04962136968970299,
0.014357089065015316,
0.3048410713672638,
0.06515476852655411,
-0.16994163393974304,
-0.08097605407238007,
0.021282728761434555,
-0.043450355529785156,
-0.07235035300254822,
0.0015843840083107352,
0.08782266080379486,
-0.12353434413671494,
0.0641273707151413,
-0.04253777489066124,
0.08867298811674118,
-0.09390021115541458,
-0.007145026233047247,
-0.06577666103839874,
0.06981758028268814,
-0.0638180673122406,
0.07844943553209305,
-0.28124916553497314,
0.20240053534507751,
0.016252167522907257,
0.0931069403886795,
-0.11013350635766983,
0.004730249755084515,
0.03365541622042656,
-0.016005834564566612,
0.18269823491573334,
-0.014175819233059883,
0.009036421775817871,
-0.11136315762996674,
-0.07097841054201126,
0.0013405801728367805,
0.06395642459392548,
-0.034074172377586365,
0.09197209030389786,
0.017331670969724655,
-0.008519270457327366,
-0.008067474700510502,
0.015916163101792336,
-0.10407818853855133,
-0.14817914366722107,
0.03272484242916107,
-0.08574634045362473,
-0.022277770563960075,
-0.05994517728686333,
-0.0889473482966423,
0.050602301955223083,
0.17419710755348206,
-0.18633034825325012,
-0.08467065542936325,
-0.10808050632476807,
0.0011428407160565257,
0.09167513996362686,
-0.08860062807798386,
0.00850608479231596,
-0.019898271188139915,
0.1927952617406845,
-0.05684422329068184,
-0.059605639427900314,
0.053057555109262466,
-0.08382179588079453,
-0.15265759825706482,
-0.06650450080633163,
0.1388421207666397,
0.1436096876859665,
0.09300093352794647,
0.0009567360393702984,
0.02544274553656578,
0.08190728724002838,
-0.08493033051490784,
-0.02715878374874592,
0.08520854264497757,
0.14782747626304626,
0.07276459783315659,
-0.12102331966161728,
-0.08128603547811508,
-0.12396391481161118,
0.0011492915218695998,
0.07215133309364319,
0.22852690517902374,
-0.0567997470498085,
0.11775059998035431,
0.20179185271263123,
-0.10196883231401443,
-0.20333892107009888,
-0.002496772911399603,
0.06797656416893005,
0.05589804798364639,
0.06793927401304245,
-0.1947418600320816,
0.021474214270710945,
0.04896228760480881,
-0.006246006581932306,
0.001730871619656682,
-0.1846364289522171,
-0.13385507464408875,
0.1354587823152542,
0.1011636033654213,
-0.052046798169612885,
-0.09381841868162155,
-0.02409476973116398,
-0.03622635081410408,
-0.06715133786201477,
0.12795832753181458,
0.010278494097292423,
0.09993913769721985,
0.025183096528053284,
-0.07423185557126999,
0.03980276733636856,
-0.07442827522754669,
0.10264360904693604,
0.024624386802315712,
0.06385105103254318,
-0.075246661901474,
-0.05981644615530968,
0.023790322244167328,
-0.05108841881155968,
0.1316027194261551,
0.03301509469747543,
0.032546915113925934,
-0.03758623078465462,
-0.06733056157827377,
-0.08242039382457733,
0.02365288697183132,
-0.07107418030500412,
-0.0781676173210144,
-0.050578679889440536,
0.11003245413303375,
0.0919700637459755,
-0.01257054228335619,
-0.010734300129115582,
-0.06212538480758667,
0.07343713939189911,
0.1750413477420807,
0.1353147327899933,
0.0037028761580586433,
-0.0554620735347271,
0.004320521838963032,
0.002219650661572814,
0.04902629554271698,
-0.05281633511185646,
0.048523858189582825,
0.09053952991962433,
0.04956485331058502,
0.18464642763137817,
0.023346450179815292,
-0.14983369410037994,
-0.03249488025903702,
0.02302735112607479,
-0.1360074132680893,
-0.15855303406715393,
0.015355031006038189,
0.006269182078540325,
-0.14924386143684387,
-0.05710592493414879,
0.035547249019145966,
-0.04566509649157524,
-0.01046594325453043,
0.019313227385282516,
0.07044415175914764,
0.007301634177565575,
0.1914786398410797,
0.04179583117365837,
0.06743974983692169,
-0.06301353871822357,
0.06693664193153381,
0.10514621436595917,
-0.0881747305393219,
0.03057245723903179,
0.07537911832332611,
-0.08424431830644608,
-0.008858607150614262,
0.014752889052033424,
0.046715207397937775,
0.12174578756093979,
-0.02732527069747448,
-0.0533108226954937,
-0.054432887583971024,
0.05014793947339058,
0.09338617324829102,
0.018099572509527206,
0.07213922590017319,
-0.04217717424035072,
0.014004318974912167,
-0.08820011466741562,
0.0720832347869873,
0.07386557012796402,
0.04818039759993553,
0.037619609385728836,
0.17464371025562286,
0.03043859452009201,
0.030228421092033386,
-0.0207800455391407,
-0.049832019954919815,
-0.08802708983421326,
0.010456868447363377,
-0.0534634031355381,
0.05118215084075928,
-0.13045485317707062,
-0.034635450690984726,
-0.025420278310775757,
0.01506165973842144,
0.027372775599360466,
0.019263070076704025,
-0.03671112284064293,
-0.011346404440701008,
-0.0413554385304451,
0.043389443308115005,
-0.13172979652881622,
-0.00032270903466269374,
0.06430304795503616,
-0.08744421601295471,
0.0832020565867424,
-0.038658685982227325,
-0.03783983364701271,
0.008641819469630718,
-0.13428178429603577,
0.007338365539908409,
-0.006319608073681593,
0.0005034542409703135,
0.009500081650912762,
-0.12048576772212982,
0.007337742485105991,
-0.040052056312561035,
-0.03318877890706062,
-0.01879994384944439,
0.07153020054101944,
-0.1025463417172432,
0.07143515348434448,
0.037066083401441574,
-0.04036115109920502,
-0.054185960441827774,
0.11926255375146866,
0.06190148741006851,
-0.02851765789091587,
0.13255515694618225,
-0.05637795850634575,
0.05297723412513733,
-0.14606192708015442,
-0.016445554792881012,
-0.002349301241338253,
-0.00126402429305017,
0.04931701719760895,
-0.03873996064066887,
0.05206925421953201,
-0.01727050356566906,
0.0916282907128334,
0.014266873709857464,
-0.04801461845636368,
0.04142756015062332,
-0.051602449268102646,
0.039884548634290695,
0.005196151323616505,
0.05647991597652435,
-0.04489238187670708,
-0.0753636583685875,
0.0053891828283667564,
0.005118631292134523,
-0.0014845625264570117,
0.12870267033576965,
0.252448707818985,
0.10178964585065842,
0.05325307324528694,
-0.018001435324549675,
0.0008368203998543322,
-0.045870065689086914,
-0.08713530004024506,
-0.03476057946681976,
0.06981977075338364,
0.04823338985443115,
0.00007358034781645983,
0.12155977636575699,
0.13518661260604858,
-0.16384628415107727,
0.13542227447032928,
0.018142355605959892,
-0.093017578125,
-0.0936674028635025,
-0.2502152919769287,
-0.00814267247915268,
0.08741414546966553,
-0.030260132625699043,
-0.12051030248403549,
0.018744871020317078,
0.11829569935798645,
0.039956409484148026,
-0.01608285866677761,
0.14628323912620544,
-0.03726310655474663,
-0.07647430151700974,
0.07531041651964188,
0.028622157871723175,
-0.0004760088922921568,
-0.012956739403307438,
-0.007037997245788574,
0.0358579084277153,
0.024428214877843857,
0.071158267557621,
0.055658258497714996,
0.04809136688709259,
-0.002635675249621272,
-0.008838345296680927,
-0.07253988832235336,
0.03505013510584831,
-0.02873380109667778,
0.0816233828663826,
0.21838073432445526,
0.05007997527718544,
-0.036477230489254,
0.00830045435577631,
0.13513843715190887,
-0.027692805975675583,
-0.06272348016500473,
-0.13325534760951996,
0.20981940627098083,
0.032328277826309204,
0.0006819753325544298,
0.05353682488203049,
-0.11086081713438034,
0.020852912217378616,
0.22756563127040863,
0.1410655677318573,
0.005787260364741087,
0.012806277722120285,
-0.0010355091653764248,
0.018360458314418793,
0.030239000916481018,
0.13797245919704437,
0.002223436953499913,
0.22499892115592957,
-0.04170384630560875,
0.0972900241613388,
-0.028289444744586945,
-0.04071468859910965,
-0.04731491953134537,
0.09834825992584229,
0.026345165446400642,
-0.005267051514238119,
-0.07278679311275482,
0.05974956601858139,
-0.03308650851249695,
-0.27807295322418213,
-0.029511580243706703,
-0.05212041735649109,
-0.13074246048927307,
-0.0164866354316473,
-0.062326669692993164,
0.049331020563840866,
0.06430887430906296,
0.02164088748395443,
0.0173417367041111,
0.15032081305980682,
0.03167315199971199,
-0.033419664949178696,
-0.08976985514163971,
0.06881919503211975,
-0.034935735166072845,
0.22980856895446777,
0.02005615644156933,
0.0502108559012413,
0.0780274048447609,
0.009408356621861458,
-0.08604691177606583,
0.043955374509096146,
0.02000560238957405,
0.03242367506027222,
0.03145894780755043,
0.17191502451896667,
-0.03260815516114235,
-0.033708956092596054,
0.018627744168043137,
-0.10144589096307755,
0.06115202233195305,
-0.09310440719127655,
-0.05347374826669693,
-0.11286240816116333,
0.0944034680724144,
-0.059378523379564285,
0.11654721945524216,
0.1900123953819275,
-0.0081173125654459,
0.0028547272086143494,
-0.06998814642429352,
-0.01693693734705448,
0.006468089297413826,
0.11123216897249222,
-0.0060503073036670685,
-0.18469098210334778,
0.007994433864951134,
-0.07098773866891861,
0.025127286091446877,
-0.2984900176525116,
-0.03878678381443024,
0.02528376691043377,
-0.06789012998342514,
-0.036447178572416306,
0.07038436830043793,
-0.010536232963204384,
0.06133021414279938,
-0.0446830615401268,
-0.003058419795706868,
0.007745937444269657,
0.10830104351043701,
-0.14102803170681,
-0.03994832560420036
] |
null | null | transformers |
# ALBERT XLarge v1
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version of the xlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 24 repeating layers
- 128 embedding dimension
- 2048 hidden dimension
- 16 attention heads
- 58M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"â–modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"â–modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"â–model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"â–runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"â–lingerie"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
model = AlbertModel.from_pretrained("albert-xlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
model = TFAlbertModel.from_pretrained("albert-xlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v1')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"â–shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"â–blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"â–lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"â–receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"â–paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"â–waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
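As a minimal sketch (the sentences below are placeholders), encoding a sentence pair with the tokenizer yields this layout, with `token_type_ids` distinguishing the two segments:
```python
from transformers import AlbertTokenizer

# Same lowercasing SentencePiece vocabulary (30,000 pieces) shared by the ALBERT checkpoints.
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')

encoded = tokenizer("Sentence A goes here.", "Sentence B goes here.")
print(tokenizer.decode(encoded["input_ids"]))
# -> "[CLS] sentence a goes here.[SEP] sentence b goes here.[SEP]" (approximately)
print(encoded["token_type_ids"])  # 0 over [CLS] + sentence A + first [SEP], 1 over sentence B + last [SEP]
```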
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
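For illustration only, the replacement rule above can be written out directly; this is a simplified re-implementation of the policy, not the original pretraining code:
```python
import random
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')

def mask_tokens(input_ids, mlm_probability=0.15):
    """Simplified illustration of the masking rule; special tokens are never masked."""
    special_ids = set(tokenizer.all_special_ids)
    labels = [-100] * len(input_ids)          # -100 = position not predicted
    corrupted = list(input_ids)
    for i, token_id in enumerate(input_ids):
        if token_id in special_ids or random.random() >= mlm_probability:
            continue
        labels[i] = token_id                  # the model must recover the original token here
        roll = random.random()
        if roll < 0.8:                        # 80% of selected tokens -> [MASK]
            corrupted[i] = tokenizer.mask_token_id
        elif roll < 0.9:                      # 10% -> a random vocabulary token
            corrupted[i] = random.randrange(tokenizer.vocab_size)
        # remaining 10%: the token is left as is
    return corrupted, labels

input_ids = tokenizer("Replace me by any text you'd like.")["input_ids"]
print(mask_tokens(input_ids))
```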
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
|                | Average  | SQuAD1.1  | SQuAD2.0  | MNLI     | SST-2    | RACE     |
|----------------|----------|-----------|-----------|----------|----------|----------|
| V2             |          |           |           |          |          |          |
| ALBERT-base    | 82.3     | 90.2/83.2 | 82.1/79.3 | 84.6     | 92.9     | 66.8     |
| ALBERT-large   | 85.7     | 91.8/85.2 | 84.9/81.8 | 86.5     | 94.9     | 75.2     |
| ALBERT-xlarge  | 87.9     | 92.9/86.4 | 87.9/84.1 | 87.9     | 95.4     | 80.7     |
| ALBERT-xxlarge | 90.9     | 94.6/89.1 | 89.8/86.9 | 90.6     | 96.8     | 86.8     |
| V1             |          |           |           |          |          |          |
| ALBERT-base    | 80.1     | 89.3/82.3 | 80.0/77.1 | 81.6     | 90.3     | 64.0     |
| ALBERT-large   | 82.4     | 90.6/83.9 | 82.3/79.4 | 83.5     | 91.7     | 68.5     |
| ALBERT-xlarge  | 85.5     | 92.5/86.1 | 86.1/83.1 | 86.4     | 92.4     | 74.8     |
| ALBERT-xxlarge | 91.0     | 94.8/89.3 | 90.2/87.4 | 90.8     | 96.9     | 86.5     |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | fill-mask | albert/albert-xlarge-v1 | [
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"1909.11942"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ALBERT XLarge v1
================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version of the xlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
* 24 repeating layers
* 128 embedding dimension
* 2048 hidden dimension
* 16 attention heads
* 58M parameters
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
Evaluation results
------------------
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
### BibTeX entry and citation info
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:",
"### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:",
"### BibTeX entry and citation info"
] | [
75,
49,
102,
42,
135,
11
] | [
"passage: TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:### Training\n\n\nThe ALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:### BibTeX entry and citation info"
] | [
-0.04551113396883011,
0.0802619680762291,
-0.004052036441862583,
0.07860441505908966,
0.04896535351872444,
0.009174984879791737,
0.08155245333909988,
0.04193923622369766,
-0.08074910938739777,
0.07420488446950912,
0.03317805007100105,
0.04168064892292023,
0.1137911006808281,
0.12865380942821503,
0.03562881052494049,
-0.3194960355758667,
0.04676777496933937,
-0.015500158071517944,
0.035448186099529266,
0.09928426146507263,
0.10145502537488937,
-0.10330747812986374,
0.013665885664522648,
-0.0071895238943398,
-0.03393346071243286,
-0.034216634929180145,
-0.0018901361618191004,
-0.042468078434467316,
0.07356763631105423,
0.05284520238637924,
0.1046295166015625,
0.0404527373611927,
0.05444014444947243,
-0.17392070591449738,
0.02301005832850933,
0.08772853016853333,
0.006956661585718393,
0.08186767250299454,
0.09368777275085449,
0.02413523942232132,
0.11847671866416931,
-0.0612131766974926,
0.06962253898382187,
0.04527520388364792,
-0.11396395415067673,
-0.09088940918445587,
-0.05937955155968666,
0.10509873926639557,
0.11327197402715683,
0.026418432593345642,
-0.0334581583738327,
0.06045398488640785,
-0.05818641185760498,
0.09398715943098068,
0.22296026349067688,
-0.19845816493034363,
-0.01989910751581192,
-0.012354457750916481,
-0.0070524439215660095,
0.016607169061899185,
-0.09925274550914764,
-0.05664094164967537,
0.023971661925315857,
0.030421694740653038,
0.13375677168369293,
-0.012973122298717499,
-0.003083419054746628,
-0.07122260332107544,
-0.1373298168182373,
-0.08675924688577652,
0.0611911304295063,
-0.0010342838941141963,
-0.1053277924656868,
-0.1360902041196823,
-0.03571191430091858,
-0.0576082207262516,
-0.01811579056084156,
0.007686743978410959,
0.021222004666924477,
-0.01571779139339924,
0.0731607973575592,
-0.030072787776589394,
-0.09501220285892487,
-0.054114360362291336,
-0.029698846861720085,
0.09900537878274918,
0.04432804882526398,
-0.004976477473974228,
-0.03827338665723801,
0.12907736003398895,
0.000012067925126757473,
-0.1198144406080246,
-0.03641133010387421,
-0.05129370838403702,
-0.12604333460330963,
-0.04294734448194504,
0.006550996098667383,
-0.08121354132890701,
-0.05359722301363945,
0.12144969403743744,
-0.07646219432353973,
0.052818041294813156,
-0.11000118404626846,
0.03786785528063774,
0.03273627907037735,
0.08413033932447433,
-0.09982180595397949,
0.023096779361367226,
0.012542684562504292,
0.04627145454287529,
0.030143965035676956,
0.00010260176350129768,
0.007899032905697823,
0.015061267651617527,
0.05133241042494774,
0.07356788218021393,
-0.006212256383150816,
0.07572850584983826,
-0.0733005627989769,
-0.04566649720072746,
0.117124542593956,
-0.1493610441684723,
-0.026935739442706108,
0.009401454590260983,
-0.04137833043932915,
-0.05018153786659241,
0.05190020799636841,
-0.05363049730658531,
-0.11976885795593262,
0.152625173330307,
-0.08122817426919937,
-0.03451291099190712,
-0.07300899922847748,
-0.1447543054819107,
-0.0026474616024643183,
-0.025739552453160286,
-0.07170360535383224,
-0.03028891794383526,
-0.11033158749341965,
-0.029098354279994965,
0.047077327966690063,
-0.019686510786414146,
-0.02027660608291626,
-0.03105141408741474,
-0.03543657064437866,
-0.026764025911688805,
-0.004180578049272299,
0.1106206402182579,
-0.024547459557652473,
0.06970713287591934,
-0.06840337067842484,
0.08289238810539246,
0.09605103731155396,
0.03087173029780388,
-0.09323615580797195,
0.03921728953719139,
-0.2114611268043518,
0.08727454394102097,
-0.049125902354717255,
-0.042977042496204376,
-0.08089441806077957,
-0.07186415046453476,
-0.06054092198610306,
0.03029811754822731,
0.020987944677472115,
0.16369278728961945,
-0.2119584083557129,
-0.041542358696460724,
0.32052451372146606,
-0.1367534101009369,
-0.002294054953381419,
0.12919074296951294,
-0.066414475440979,
0.03440329432487488,
0.0808856263756752,
0.08894262462854385,
-0.04917604476213455,
-0.09136492013931274,
-0.0030137980356812477,
-0.05897599831223488,
-0.0008087589521892369,
0.17744973301887512,
0.03839888423681259,
-0.052005935460329056,
-0.04630080237984657,
0.004275207407772541,
-0.05512862652540207,
-0.05079255625605583,
-0.018915673717856407,
-0.02526763267815113,
0.048335254192352295,
-0.017202435061335564,
0.03870829567313194,
0.013349601067602634,
-0.05048910155892372,
-0.03435882553458214,
-0.11198391765356064,
-0.02944340743124485,
0.06985864788293839,
-0.07916663587093353,
0.03236980736255646,
-0.051062460988759995,
-0.02647782489657402,
0.0013752580853179097,
0.0032333179842680693,
-0.19330114126205444,
0.0023915974888950586,
0.0667717233300209,
-0.06412503123283386,
0.07577772438526154,
0.018598997965455055,
0.026461223140358925,
0.08619476109743118,
-0.0623263344168663,
-0.012272304855287075,
0.008379009552299976,
-0.015970533713698387,
-0.07066856324672699,
-0.1653992235660553,
-0.057337600737810135,
-0.036461371928453445,
0.09765934944152832,
-0.1362580806016922,
0.010994399897754192,
0.007720418740063906,
0.06752710044384003,
0.05236116051673889,
-0.06864910572767258,
0.06248823180794716,
0.009777098894119263,
-0.04145250469446182,
-0.056500568985939026,
0.008342307060956955,
0.013000303879380226,
0.000765531265642494,
0.06627954542636871,
-0.20021773874759674,
-0.1504518985748291,
0.06118915230035782,
0.05632588639855385,
-0.13750065863132477,
-0.052416473627090454,
-0.06842749565839767,
-0.023510001599788666,
-0.08883948624134064,
-0.05367489159107208,
0.15213648974895477,
0.036988403648138046,
0.11421379446983337,
-0.08264148235321045,
-0.0222010500729084,
0.005905165337026119,
-0.00013674053479917347,
-0.04593511298298836,
0.0817798599600792,
0.030059203505516052,
-0.10633993148803711,
0.05813867971301079,
-0.07175382971763611,
0.012249952182173729,
0.12792889773845673,
0.01868418976664543,
-0.10436969250440598,
0.021510083228349686,
0.036857228726148605,
0.06437184661626816,
0.08481411635875702,
-0.0866444781422615,
0.009299608878791332,
0.05689283832907677,
-0.010078764520585537,
0.012822967022657394,
-0.1007397472858429,
0.05021527409553528,
0.04364950582385063,
-0.04251604527235031,
-0.02986682392656803,
-0.05711539834737778,
-0.004364591091871262,
0.1370524764060974,
0.020508522167801857,
(...TRUNCATED: remainder of this row's 768-dimensional embeddings vector omitted for readability)
] |
null | null | transformers | "\n# ALBERT XLarge v2\n\nPretrained model on English language using a masked language modeling (MLM)(...TRUNCATED) | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | fill-mask | albert/albert-xlarge-v2 | ["transformers","pytorch","tf","albert","fill-mask","en","dataset:bookcorpus","dataset:wikipedia","a(...TRUNCATED) | 2022-03-02T23:29:04+00:00 | ["1909.11942"] | ["en"] | "TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arx(...TRUNCATED) | "ALBERT XLarge v2\n================\n\n\nPretrained model on English language using a masked languag(...TRUNCATED) | ["### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\(...TRUNCATED) | ["TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #ar(...TRUNCATED) | [70, 49, 102, 42, 135, 11] | ["passage: TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wiki(...TRUNCATED) | [-0.05667823925614357,0.08347521722316742,-0.003313080407679081,0.07908394187688828,0.05151635780930(...TRUNCATED)
null | null | transformers | "\n# ALBERT XXLarge v1\n\nPretrained model on English language using a masked language modeling (MLM(...TRUNCATED) | {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]} | fill-mask | albert/albert-xxlarge-v1 | ["transformers","pytorch","tf","albert","fill-mask","en","dataset:bookcorpus","dataset:wikipedia","a(...TRUNCATED) | 2022-03-02T23:29:04+00:00 | ["1909.11942"] | ["en"] | "TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arx(...TRUNCATED) | "ALBERT XXLarge v1\n=================\n\n\nPretrained model on English language using a masked langu(...TRUNCATED) | ["### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\(...TRUNCATED) | ["TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #ar(...TRUNCATED) | [70, 49, 102, 42, 135, 11] | ["passage: TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wiki(...TRUNCATED) | [-0.05667823925614357,0.08347521722316742,-0.003313080407679081,0.07908394187688828,0.05151635780930(...TRUNCATED)
null | null | transformers | "\n# ALBERT XXLarge v2\n\nPretrained model on English language using a masked language modeling (MLM(...TRUNCATED) | "{\"language\": \"en\", \"license\": \"apache-2.0\", \"tags\": [\"exbert\"], \"datasets\": [\"bookco(...TRUNCATED) | fill-mask | albert/albert-xxlarge-v2 | ["transformers","pytorch","tf","rust","safetensors","albert","fill-mask","exbert","en","dataset:book(...TRUNCATED) | 2022-03-02T23:29:04+00:00 | ["1909.11942"] | ["en"] | "TAGS\n#transformers #pytorch #tf #rust #safetensors #albert #fill-mask #exbert #en #dataset-bookcor(...TRUNCATED) | "ALBERT XXLarge v2\n=================\n\n\nPretrained model on English language using a masked langu(...TRUNCATED) | ["### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\(...TRUNCATED) | ["TAGS\n#transformers #pytorch #tf #rust #safetensors #albert #fill-mask #exbert #en #dataset-bookco(...TRUNCATED) | [84, 49, 102, 42, 135, 30] | ["passage: TAGS\n#transformers #pytorch #tf #rust #safetensors #albert #fill-mask #exbert #en #datas(...TRUNCATED) | [-0.05788842961192131,0.1143597811460495,-0.0035046867560595274,0.08419948816299438,0.05134695395827(...TRUNCATED)
null | null | transformers | "\n# BERT base model (cased)\n\nPretrained model on English language using a masked language modelin(...TRUNCATED) | "{\"language\": \"en\", \"license\": \"apache-2.0\", \"tags\": [\"exbert\"], \"datasets\": [\"bookco(...TRUNCATED) | fill-mask | google-bert/bert-base-cased | ["transformers","pytorch","tf","jax","safetensors","bert","fill-mask","exbert","en","dataset:bookcor(...TRUNCATED) | 2022-03-02T23:29:04+00:00 | ["1810.04805"] | ["en"] | "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpus(...TRUNCATED) | "BERT base model (cased)\n=======================\n\n\nPretrained model on English language using a (...TRUNCATED) | ["### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\(...TRUNCATED) | ["TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpu(...TRUNCATED) | [85, 49, 101, 218, 163, 30] | ["passage: TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #exbert #en #dataset-(...TRUNCATED) | [-0.022694578394293785,0.08986902981996536,-0.006014915183186531,0.04828432947397232,-0.008334090001(...TRUNCATED)
null | null | transformers | "\n# Bert-base-chinese\n\n## Table of Contents\n- [Model Details](#model-details)\n- [Uses](#uses)\n(...TRUNCATED) | {"language": "zh"} | fill-mask | google-bert/bert-base-chinese | ["transformers","pytorch","tf","jax","safetensors","bert","fill-mask","zh","arxiv:1810.04805","autot(...TRUNCATED) | 2022-03-02T23:29:04+00:00 | ["1810.04805"] | ["zh"] | "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #zh #arxiv-1810.04805 #autotrai(...TRUNCATED) | "\n# Bert-base-chinese\n\n## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Bi(...TRUNCATED) | ["# Bert-base-chinese","## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Bias(...TRUNCATED) | ["TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #zh #arxiv-1810.04805 #autotra(...TRUNCATED) | [62, 7, 35, 3, 93, 10, 3, 15, 85, 2, 32, 4, 3, 3, 9] | ["passage: TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #zh #arxiv-1810.04805(...TRUNCATED) | [-0.036715470254421234,0.161798894405365,-0.0009507219074293971,0.015533071011304855,0.0713464841246(...TRUNCATED)