pipeline_tag
stringclasses 48
values | library_name
stringclasses 198
values | text
stringlengths 1
900k
| metadata
stringlengths 2
438k
| id
stringlengths 5
122
| last_modified
null | tags
sequencelengths 1
1.84k
| sha
null | created_at
stringlengths 25
25
| arxiv
sequencelengths 0
201
| languages
sequencelengths 0
1.83k
| tags_str
stringlengths 17
9.34k
| text_str
stringlengths 0
389k
| text_lists
sequencelengths 0
722
| processed_texts
sequencelengths 1
723
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation | transformers |
# Peppa Pig DialoGPT Model | {"tags": ["conversational"]} | Eagle3ye/DialoGPT-small-PeppaPig | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Peppa Pig DialoGPT Model | [
"# Peppa Pig DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Peppa Pig DialoGPT Model"
] |
text-classification | transformers | ## Bert-base-uncased for Android-Ios Question Classification
**Code**: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/EastHShin/Android-Ios-Classification-Workspace)
<br>
**Android-Ios-Classification DEMO**: [Ainize Endpoint](https://main-android-ios-classification-east-h-shin.endpoint.ainize.ai/)
<br>
**Demo web Code**: [Github](https://github.com/EastHShin/Android-Ios-Classification)
<br>
**Android-Ios-Classification API**: [Ainize API](https://ainize.ai/EastHShin/Android-Ios-Classification)
<br>
<br>
## Overview
**Language model**: bert-base-cased
<br>
**Language**: English
<br>
**Training data**: Question classification Android-Ios dataset from [Kaggle](https://www.kaggle.com/xhlulu/question-classification-android-or-ios)
## Usage
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_path = "EasthShin/Android_Ios_Classification"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer)
question = "I bought goodnote in Appstore"
result = dict()
result[0] = classifier(question)[0]
``` | {} | EasthShin/Android_Ios_Classification | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #has_space #region-us
| ## Bert-base-uncased for Android-Ios Question Classification
Code: See Ainize Workspace
<br>
Android-Ios-Classification DEMO: Ainize Endpoint
<br>
Demo web Code: Github
<br>
Android-Ios-Classification API: Ainize API
<br>
<br>
## Overview
Language model: bert-base-cased
<br>
Language: English
<br>
Training data: Question classification Android-Ios dataset from Kaggle
## Usage
| [
"## Bert-base-uncased for Android-Ios Question Classification\n\nCode: See Ainize Workspace\n<br>\nAndroid-Ios-Classification DEMO: Ainize Endpoint\n<br>\nDemo web Code: Github\n<br>\nAndroid-Ios-Classification API: Ainize API\n<br>\n<br>",
"## Overview\nLanguage model: bert-base-cased\n<br>\nLanguage: English\n<br>\nTraining data: Question classification Android-Ios dataset from Kaggle",
"## Usage"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## Bert-base-uncased for Android-Ios Question Classification\n\nCode: See Ainize Workspace\n<br>\nAndroid-Ios-Classification DEMO: Ainize Endpoint\n<br>\nDemo web Code: Github\n<br>\nAndroid-Ios-Classification API: Ainize API\n<br>\n<br>",
"## Overview\nLanguage model: bert-base-cased\n<br>\nLanguage: English\n<br>\nTraining data: Question classification Android-Ios dataset from Kaggle",
"## Usage"
] |
question-answering | transformers |
#### Klue-bert base for Common Sense QA
#### Klue-CommonSense-model DEMO: [Ainize DEMO](https://main-klue-common-sense-qa-east-h-shin.endpoint.ainize.ai/)
#### Klue-CommonSense-model API: [Ainize API](https://ainize.ai/EastHShin/Klue-CommonSense_QA?branch=main)
### Overview
**Language model**: klue/bert-base
<br>
**Language**: Korean
<br>
**Downstream-task**: Extractive QA
<br>
**Training data**: Common sense Data from [Mindslab](https://mindslab.ai:8080/kr/company)
<br>
**Eval data**: Common sense Data from [Mindslab](https://mindslab.ai:8080/kr/company)
<br>
**Code**: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/EastHShin/Klue-CommonSense-workspace)
<br>
### Usage
### In Transformers
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EasthShin/Klue-CommonSense-model")
model = AutoModelForQuestionAnswering.from_pretrained("EasthShin/Klue-CommonSense-model")
context = "your context"
question = "your question"
encodings = tokenizer(context, question, max_length=512, truncation=True,
padding="max_length", return_token_type_ids=False)
encodings = {key: torch.tensor([val]) for key, val in encodings.items()}
input_ids = encodings["input_ids"]
attention_mask = encodings["attention_mask"]
pred = model(input_ids, attention_mask=attention_mask)
start_logits, end_logits = pred.start_logits, pred.end_logits
token_start_index, token_end_index = start_logits.argmax(dim=-1), end_logits.argmax(dim=-1)
pred_ids = input_ids[0][token_start_index: token_end_index + 1]
prediction = tokenizer.decode(pred_ids)
``` | {} | EasthShin/Klue-CommonSense-model | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #question-answering #endpoints_compatible #region-us
|
#### Klue-bert base for Common Sense QA
#### Klue-CommonSense-model DEMO: Ainize DEMO
#### Klue-CommonSense-model API: Ainize API
### Overview
Language model: klue/bert-base
<br>
Language: Korean
<br>
Downstream-task: Extractive QA
<br>
Training data: Common sense Data from Mindslab
<br>
Eval data: Common sense Data from Mindslab
<br>
Code: See Ainize Workspace
<br>
### Usage
### In Transformers
| [
"#### Klue-bert base for Common Sense QA",
"#### Klue-CommonSense-model DEMO: Ainize DEMO",
"#### Klue-CommonSense-model API: Ainize API",
"### Overview\n\nLanguage model: klue/bert-base\n<br>\nLanguage: Korean\n<br>\nDownstream-task: Extractive QA\n<br>\nTraining data: Common sense Data from Mindslab\n<br>\nEval data: Common sense Data from Mindslab\n<br>\nCode: See Ainize Workspace\n<br>",
"### Usage",
"### In Transformers"
] | [
"TAGS\n#transformers #pytorch #bert #question-answering #endpoints_compatible #region-us \n",
"#### Klue-bert base for Common Sense QA",
"#### Klue-CommonSense-model DEMO: Ainize DEMO",
"#### Klue-CommonSense-model API: Ainize API",
"### Overview\n\nLanguage model: klue/bert-base\n<br>\nLanguage: Korean\n<br>\nDownstream-task: Extractive QA\n<br>\nTraining data: Common sense Data from Mindslab\n<br>\nEval data: Common sense Data from Mindslab\n<br>\nCode: See Ainize Workspace\n<br>",
"### Usage",
"### In Transformers"
] |
text-generation | transformers | ## Youth_Chatbot_KoGPT2-base
**Demo Web**: [Ainize Endpoint](https://main-youth-chatbot-ko-gpt2-base-east-h-shin.endpoint.ainize.ai/)
<br>
**Demo Web Code**: [Github](https://github.com/EastHShin/Youth_Chatbot_KoGPT2-base)
<br>
**Youth-Chatbot API**: [Ainize API](https://ainize.ai/EastHShin/Youth_Chatbot_KoGPT2-base_API?branch=main)
<br>
<br>
## Overview
**Language model**: KoGPT2
<br>
**Language**: Korean
<br>
**Training data**: [Aihub](https://aihub.or.kr/aidata/7978)
## Usage
```
import torch
from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel
U_TKN = '<usr>'
S_TKN = '<sys>'
MASK = '<unused0>'
SENT = '<unused1>'
tokenizer = PreTrainedTokenizerFast.from_pretrained("EasthShin/Youth_Chatbot_Kogpt2-base",
bos_token='</s>', eos_token='</s>', unk_token='<unk>',
pad_token='<pad>', mask_token=MASK)
model = GPT2LMHeadModel.from_pretrained('EasthShin/Youth_Chatbot_Kogpt2-base')
input_ids = tokenizer.encode(U_TKN + "your text" + SENT + S_TKN)
gen_ids = model.generate(torch.tensor([input_ids]),
max_length=128,
repetition_penalty= 2.0,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
bos_token_id=tokenizer.bos_token_id,
use_cache=True)
generated = tokenizer.decode(gen_ids[0, :].tolist())
print(generated)
``` | {} | EasthShin/Youth_Chatbot_Kogpt2-base | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| ## Youth_Chatbot_KoGPT2-base
Demo Web: Ainize Endpoint
<br>
Demo Web Code: Github
<br>
Youth-Chatbot API: Ainize API
<br>
<br>
## Overview
Language model: KoGPT2
<br>
Language: Korean
<br>
Training data: Aihub
## Usage
| [
"## Youth_Chatbot_KoGPT2-base\n\nDemo Web: Ainize Endpoint\n<br>\nDemo Web Code: Github\n<br>\nYouth-Chatbot API: Ainize API\n<br>\n<br>",
"## Overview\nLanguage model: KoGPT2\n<br>\nLanguage: Korean\n<br>\nTraining data: Aihub",
"## Usage"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Youth_Chatbot_KoGPT2-base\n\nDemo Web: Ainize Endpoint\n<br>\nDemo Web Code: Github\n<br>\nYouth-Chatbot API: Ainize API\n<br>\n<br>",
"## Overview\nLanguage model: KoGPT2\n<br>\nLanguage: Korean\n<br>\nTraining data: Aihub",
"## Usage"
] |
fill-mask | transformers | # Arabic_BERT_Model
# ArBERTMo
| {} | Ebtihal/ArBERTMo | null | [
"transformers",
"tf",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #tf #camembert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| #Arabic_BERT_Model
#ArBERTMo
| [] | [
"TAGS\n#transformers #tf #camembert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers |
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
`AraBertMo_base_V1` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 10010| 1 | 64 | 157 | 2m 2s | 9.0183 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Huggingface `transformers` library, then initializing it directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V1")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V1")
```
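Once loaded, the model can be queried for masked-token predictions; a minimal sketch (pipeline usage assumed rather than taken from the card, with a prompt from the widget metadata):

```python
from transformers import pipeline

# Hedged fill-mask sketch; reuses the tokenizer and model loaded above.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask("السلام عليكم ورحمة[MASK] وبركاتة"):
    print(prediction["token_str"], round(prediction["score"], 4))
```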
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
| {"language": "ar", "tags": "Fill-Mask", "datasets": "OSCAR", "widget": [{"text": " \u0627\u0644\u0633\u0644\u0627\u0645 \u0639\u0644\u064a\u0643\u0645 \u0648\u0631\u062d\u0645\u0629[MASK] \u0648\u0628\u0631\u0643\u0627\u062a\u0629"}, {"text": " \u0627\u0647\u0644\u0627 \u0648\u0633\u0647\u0644\u0627 \u0628\u0643\u0645 \u0641\u064a [MASK] \u0645\u0646 \u0633\u064a\u0631\u0628\u062d \u0627\u0644\u0645\u0644\u064a\u0648\u0646"}, {"text": " \u0645\u0631\u062d\u0628\u0627 \u0628\u0643 \u0639\u0632\u064a\u0632\u064a \u0627\u0644\u0632\u0627\u0626\u0631 [MASK] \u0645\u0648\u0642\u0639\u0646\u0627 "}]} | Ebtihal/AraBertMo_base_V1 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"Fill-Mask",
"ar",
"dataset:OSCAR",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #fill-mask #Fill-Mask #ar #dataset-OSCAR #autotrain_compatible #endpoints_compatible #region-us
| Arabic BERT Model
=================
AraBERTMo is an Arabic pre-trained language model based on Google's BERT architecture.
AraBERTMo\_base uses the same BERT-Base config.
AraBERTMo\_base now comes in 10 new variants.
All models are available on the 'HuggingFace' model page under the Ebtihal name.
Checkpoints are available in PyTorch formats.
Pretraining Corpus
------------------
'AraBertMo\_base\_V1' model was pre-trained on ~3 million words:
* OSCAR - Arabic version "unshuffled\_deduplicated\_ar".
Training results
----------------
this model achieves the following results:
Load Pretrained Model
---------------------
You can use this model by installing 'torch' or 'tensorflow' and Huggingface library 'transformers'. And you can use it directly by initializing it like this:
This model was built for master's degree research in an organization:
---------------------------------------------------------------------
* University of kufa.
* Faculty of Computer Science and Mathematics.
* Department of Computer Science
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #Fill-Mask #ar #dataset-OSCAR #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers |
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
`AraBertMo_base_V2` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 20020| 2 | 64 | 626 | 19m 2s | 8.437 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Huggingface `transformers` library, then initializing it directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V2")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V2")
```
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
| {"language": "ar", "tags": "Fill-Mask", "datasets": "OSCAR", "widget": [{"text": " \u0627\u0644\u0633\u0644\u0627\u0645 \u0639\u0644\u064a\u0643\u0645 \u0648\u0631\u062d\u0645\u0629[MASK] \u0648\u0628\u0631\u0643\u0627\u062a\u0629"}, {"text": " \u0627\u0647\u0644\u0627 \u0648\u0633\u0647\u0644\u0627 \u0628\u0643\u0645 \u0641\u064a [MASK] \u0645\u0646 \u0633\u064a\u0631\u0628\u062d \u0627\u0644\u0645\u0644\u064a\u0648\u0646"}, {"text": " \u0645\u0631\u062d\u0628\u0627 \u0628\u0643 \u0639\u0632\u064a\u0632\u064a \u0627\u0644\u0632\u0627\u0626\u0631 [MASK] \u0645\u0648\u0642\u0639\u0646\u0627 "}]} | Ebtihal/AraBertMo_base_V2 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"Fill-Mask",
"ar",
"dataset:OSCAR",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #fill-mask #Fill-Mask #ar #dataset-OSCAR #autotrain_compatible #endpoints_compatible #region-us
| Arabic BERT Model
=================
AraBERTMo is an Arabic pre-trained language model based on Google's BERT architecture.
AraBERTMo\_base uses the same BERT-Base config.
AraBERTMo\_base now comes in 10 new variants.
All models are available on the 'HuggingFace' model page under the Ebtihal name.
Checkpoints are available in PyTorch formats.
Pretraining Corpus
------------------
'AraBertMo\_base\_V2' model was pre-trained on ~3 million words:
* OSCAR - Arabic version "unshuffled\_deduplicated\_ar".
Training results
----------------
this model achieves the following results:
Load Pretrained Model
---------------------
You can use this model by installing 'torch' or 'tensorflow' and Huggingface library 'transformers'. And you can use it directly by initializing it like this:
This model was built for master's degree research in an organization:
---------------------------------------------------------------------
* University of kufa.
* Faculty of Computer Science and Mathematics.
* Department of Computer Science
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #Fill-Mask #ar #dataset-OSCAR #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers |
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
`AraBertMo_base_V3` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 30024| 3 | 64 | 1410 | 3h 10m 31s | 8.0201 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Huggingface `transformers` library, then initializing it directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V3")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V3")
```
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
| {"language": "ar", "tags": "Fill-Mask", "datasets": "OSCAR", "widget": [{"text": " \u0627\u0644\u0633\u0644\u0627\u0645 \u0639\u0644\u064a\u0643\u0645 \u0648\u0631\u062d\u0645\u0629[MASK] \u0648\u0628\u0631\u0643\u0627\u062a\u0629"}, {"text": " \u0627\u0647\u0644\u0627 \u0648\u0633\u0647\u0644\u0627 \u0628\u0643\u0645 \u0641\u064a [MASK] \u0645\u0646 \u0633\u064a\u0631\u0628\u062d \u0627\u0644\u0645\u0644\u064a\u0648\u0646"}, {"text": " \u0645\u0631\u062d\u0628\u0627 \u0628\u0643 \u0639\u0632\u064a\u0632\u064a \u0627\u0644\u0632\u0627\u0626\u0631 [MASK] \u0645\u0648\u0642\u0639\u0646\u0627 "}]} | Ebtihal/AraBertMo_base_V3 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"Fill-Mask",
"ar",
"dataset:OSCAR",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #fill-mask #Fill-Mask #ar #dataset-OSCAR #autotrain_compatible #endpoints_compatible #region-us
| Arabic BERT Model
=================
AraBERTMo is an Arabic pre-trained language model based on Google's BERT architecture.
AraBERTMo\_base uses the same BERT-Base config.
AraBERTMo\_base now comes in 10 new variants.
All models are available on the 'HuggingFace' model page under the Ebtihal name.
Checkpoints are available in PyTorch formats.
Pretraining Corpus
------------------
'AraBertMo\_base\_V3' model was pre-trained on ~3 million words:
* OSCAR - Arabic version "unshuffled\_deduplicated\_ar".
Training results
----------------
this model achieves the following results:
Load Pretrained Model
---------------------
You can use this model by installing 'torch' or 'tensorflow' and Huggingface library 'transformers'. And you can use it directly by initializing it like this:
This model was built for master's degree research in an organization:
---------------------------------------------------------------------
* University of kufa.
* Faculty of Computer Science and Mathematics.
* Department of Computer Science
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #Fill-Mask #ar #dataset-OSCAR #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers |
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
`AraBertMo_base_V4` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 40032| 4 | 64 | 2500 | 5h 10m 20s | 7.6544 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Huggingface `transformers` library, then initializing it directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V4")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V4")
```
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
| {"language": "ar", "tags": "Fill-Mask", "datasets": "OSCAR", "widget": [{"text": " \u0627\u0644\u0633\u0644\u0627\u0645 \u0639\u0644\u064a\u0643\u0645 \u0648\u0631\u062d\u0645\u0629[MASK] \u0648\u0628\u0631\u0643\u0627\u062a\u0629"}, {"text": " \u0627\u0647\u0644\u0627 \u0648\u0633\u0647\u0644\u0627 \u0628\u0643\u0645 \u0641\u064a [MASK] \u0645\u0646 \u0633\u064a\u0631\u0628\u062d \u0627\u0644\u0645\u0644\u064a\u0648\u0646"}, {"text": " \u0645\u0631\u062d\u0628\u0627 \u0628\u0643 \u0639\u0632\u064a\u0632\u064a \u0627\u0644\u0632\u0627\u0626\u0631 [MASK] \u0645\u0648\u0642\u0639\u0646\u0627 "}]} | Ebtihal/AraBertMo_base_V4 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"Fill-Mask",
"ar",
"dataset:OSCAR",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #fill-mask #Fill-Mask #ar #dataset-OSCAR #autotrain_compatible #endpoints_compatible #region-us
| Arabic BERT Model
=================
AraBERTMo is an Arabic pre-trained language model based on Google's BERT architecture.
AraBERTMo\_base uses the same BERT-Base config.
AraBERTMo\_base now comes in 10 new variants.
All models are available on the 'HuggingFace' model page under the Ebtihal name.
Checkpoints are available in PyTorch formats.
Pretraining Corpus
------------------
'AraBertMo\_base\_V4' model was pre-trained on ~3 million words:
* OSCAR - Arabic version "unshuffled\_deduplicated\_ar".
Training results
----------------
this model achieves the following results:
Load Pretrained Model
---------------------
You can use this model by installing 'torch' or 'tensorflow' and Huggingface library 'transformers'. And you can use it directly by initializing it like this:
This model was built for master's degree research in an organization:
---------------------------------------------------------------------
* University of kufa.
* Faculty of Computer Science and Mathematics.
* Department of Computer Science
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #Fill-Mask #ar #dataset-OSCAR #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers |
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
`AraBertMo_base_V5` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 50046| 5 | 64 | 3910 | 6h 49m 59s | 7.4599 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Huggingface `transformers` library, then initializing it directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V5")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V5")
```
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
| {"language": "ar", "tags": "Fill-Mask", "datasets": "OSCAR", "widget": [{"text": " \u0627\u0644\u0633\u0644\u0627\u0645 \u0639\u0644\u064a\u0643\u0645 \u0648\u0631\u062d\u0645\u0629[MASK] \u0648\u0628\u0631\u0643\u0627\u062a\u0629"}, {"text": " \u0627\u0647\u0644\u0627 \u0648\u0633\u0647\u0644\u0627 \u0628\u0643\u0645 \u0641\u064a [MASK] \u0645\u0646 \u0633\u064a\u0631\u0628\u062d \u0627\u0644\u0645\u0644\u064a\u0648\u0646"}, {"text": " \u0645\u0631\u062d\u0628\u0627 \u0628\u0643 \u0639\u0632\u064a\u0632\u064a \u0627\u0644\u0632\u0627\u0626\u0631 [MASK] \u0645\u0648\u0642\u0639\u0646\u0627 "}]} | Ebtihal/AraBertMo_base_V5 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"Fill-Mask",
"ar",
"dataset:OSCAR",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #fill-mask #Fill-Mask #ar #dataset-OSCAR #autotrain_compatible #endpoints_compatible #region-us
| Arabic BERT Model
=================
AraBERTMo is an Arabic pre-trained language model based on Google's BERT architecture.
AraBERTMo\_base uses the same BERT-Base config.
AraBERTMo\_base now comes in 10 new variants.
All models are available on the 'HuggingFace' model page under the Ebtihal name.
Checkpoints are available in PyTorch formats.
Pretraining Corpus
------------------
'AraBertMo\_base\_V5' model was pre-trained on ~3 million words:
* OSCAR - Arabic version "unshuffled\_deduplicated\_ar".
Training results
----------------
this model achieves the following results:
Load Pretrained Model
---------------------
You can use this model by installing 'torch' or 'tensorflow' and Huggingface library 'transformers'. And you can use it directly by initializing it like this:
This model was built for master's degree research in an organization:
---------------------------------------------------------------------
* University of kufa.
* Faculty of Computer Science and Mathematics.
* Department of Computer Science
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #Fill-Mask #ar #dataset-OSCAR #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | # Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
`AraBertMo_base_V6` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 50046| 6 | 64 | 4692 | 5h 41m 9s | 7.3099 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Huggingface `transformers` library, then initializing it directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V6")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V6")
```
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
| {"language": "ar", "tags": "Fill-Mask", "datasets": "OSCAR", "widget": [{"text": " \u0627\u0644\u0633\u0644\u0627\u0645 \u0639\u0644\u064a\u0643\u0645 \u0648\u0631\u062d\u0645\u0629[MASK] \u0648\u0628\u0631\u0643\u0627\u062a\u0629"}, {"text": " \u0627\u0647\u0644\u0627 \u0648\u0633\u0647\u0644\u0627 \u0628\u0643\u0645 \u0641\u064a [MASK] \u0645\u0646 \u0633\u064a\u0631\u0628\u062d \u0627\u0644\u0645\u0644\u064a\u0648\u0646"}, {"text": " \u0645\u0631\u062d\u0628\u0627 \u0628\u0643 \u0639\u0632\u064a\u0632\u064a \u0627\u0644\u0632\u0627\u0626\u0631 [MASK] \u0645\u0648\u0642\u0639\u0646\u0627 "}]} | Ebtihal/AraBertMo_base_V6 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"Fill-Mask",
"ar",
"dataset:OSCAR",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #fill-mask #Fill-Mask #ar #dataset-OSCAR #autotrain_compatible #endpoints_compatible #region-us
| Arabic BERT Model
=================
AraBERTMo is an Arabic pre-trained language model based on Google's BERT architecture.
AraBERTMo\_base uses the same BERT-Base config.
AraBERTMo\_base now comes in 10 new variants.
All models are available on the 'HuggingFace' model page under the Ebtihal name.
Checkpoints are available in PyTorch formats.
Pretraining Corpus
------------------
'AraBertMo\_base\_V6' model was pre-trained on ~3 million words:
* OSCAR - Arabic version "unshuffled\_deduplicated\_ar".
Training results
----------------
this model achieves the following results:
Load Pretrained Model
---------------------
You can use this model by installing 'torch' or 'tensorflow' and Huggingface library 'transformers'. And you can use it directly by initializing it like this:
This model was built for master's degree research in an organization:
---------------------------------------------------------------------
* University of kufa.
* Faculty of Computer Science and Mathematics.
* Department of Computer Science
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #Fill-Mask #ar #dataset-OSCAR #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | Arabic Model AraBertMo_base_V7
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
`AraBertMo_base_V7` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 50046| 7 | 64 | 5915 | 5h 23m 5s | 7.1381 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Huggingface `transformers` library, then initializing it directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V7")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V7")
```
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
| {} | Ebtihal/AraBertMo_base_V7 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| Arabic Model AraBertMo\_base\_V7
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
* text: " السلام عليكم ورحمة[MASK] وبركاتة"
* text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
* text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
Arabic BERT Model
=================
AraBERTMo is an Arabic pre-trained language model based on Google's BERT architecture.
AraBERTMo\_base uses the same BERT-Base config.
AraBERTMo\_base now comes in 10 new variants.
All models are available on the 'HuggingFace' model page under the Ebtihal name.
Checkpoints are available in PyTorch formats.
Pretraining Corpus
------------------
'AraBertMo\_base\_V7' model was pre-trained on ~3 million words:
* OSCAR - Arabic version "unshuffled\_deduplicated\_ar".
Training results
----------------
this model achieves the following results:
Load Pretrained Model
---------------------
You can use this model by installing 'torch' or 'tensorflow' and Huggingface library 'transformers'. And you can use it directly by initializing it like this:
This model was built for master's degree research in an organization:
---------------------------------------------------------------------
* University of kufa.
* Faculty of Computer Science and Mathematics.
* Department of Computer Science
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | Arabic Model AraBertMo_base_V8
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats.
## Pretraining Corpus
`AraBertMo_base_V8` model was pre-trained on ~3 million words: [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 40032| 8 | 64 | 5008 | 10h 5m 57s | 7.2164 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Huggingface `transformers` library, then initializing it directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V8")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V8")
```
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
| {} | Ebtihal/AraBertMo_base_V8 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| Arabic Model AraBertMo\_base\_V8
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
* text: " السلام عليكم ورحمة[MASK] وبركاتة"
* text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
* text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
Arabic BERT Model
=================
AraBERTMo is an Arabic pre-trained language model based on Google's BERT architecture. AraBERTMo\_base uses the same BERT-Base config. AraBERTMo\_base now comes in 10 new variants. All models are available on the 'HuggingFace' model page under the Ebtihal name. Checkpoints are available in PyTorch formats.
Pretraining Corpus
------------------
'AraBertMo\_base\_V8' model was pre-trained on ~3 million words: OSCAR - Arabic version "unshuffled\_deduplicated\_ar".
Training results
----------------
this model achieves the following results:
Load Pretrained Model
---------------------
You can use this model by installing 'torch' or 'tensorflow' and Huggingface library 'transformers'. And you can use it directly by initializing it like this:
This model was built for master's degree research in an organization:
---------------------------------------------------------------------
* University of kufa.
* Faculty of Computer Science and Mathematics.
* Department of Computer Science
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | Arabic Model AraBertMo_base_V9
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
`AraBertMo_base_V9` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 30024| 9 | 64 | 4230 | 7h 57m 42s | 7.3264 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Huggingface `transformers` library, then initializing it directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V9")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V9")
```
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
| {} | Ebtihal/AraBertMo_base_V9 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| Arabic Model AraBertMo\_base\_V9
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
* text: " السلام عليكم ورحمة[MASK] وبركاتة"
* text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
* text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
Arabic BERT Model
=================
AraBERTMo is an Arabic pre-trained language model based on Google's BERT architecture.
AraBERTMo\_base uses the same BERT-Base config.
AraBERTMo\_base now comes in 10 new variants.
All models are available on the 'HuggingFace' model page under the Ebtihal name.
Checkpoints are available in PyTorch formats.
Pretraining Corpus
------------------
'AraBertMo\_base\_V9' model was pre-trained on ~3 million words:
* OSCAR - Arabic version "unshuffled\_deduplicated\_ar".
Training results
----------------
this model achieves the following results:
Load Pretrained Model
---------------------
You can use this model by installing 'torch' or 'tensorflow' and Huggingface library 'transformers'. And you can use it directly by initializing it like this:
This model was built for master's degree research in an organization:
---------------------------------------------------------------------
* University of kufa.
* Faculty of Computer Science and Mathematics.
* Department of Computer Science
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2886
- Bleu: 28.1641
- Gen Len: 34.1071
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
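A hedged sketch of how these hyperparameters might be expressed with `transformers`' `Seq2SeqTrainingArguments` (the output path and any unlisted arguments are assumptions, not the authors' actual script):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: reconstructs the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-en-ro-finetuned-en-to-ro",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```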
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7436 | 1.0 | 38145 | 1.2886 | 28.1641 | 34.1071 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model_index": [{"name": "opus-mt-en-ro-finetuned-en-to-ro", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metric": {"name": "Bleu", "type": "bleu", "value": 28.1641}}]}]} | Edomonndo/opus-mt-en-ro-finetuned-en-to-ro | null | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-wmt16 #autotrain_compatible #endpoints_compatible #region-us
| opus-mt-en-ro-finetuned-en-to-ro
================================
This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ro on the wmt16 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2886
* Bleu: 28.1641
* Gen Len: 34.1071
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.9.1
* Pytorch 1.9.0+cu102
* Datasets 1.10.2
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-wmt16 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ja-en-finetuned-ja-to-en_test
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4737
- Bleu: 80.2723
- Gen Len: 16.5492
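A minimal hedged example of loading this checkpoint for Japanese-to-English translation (the repository id comes from this card; the sample sentence is only illustrative):

```python
from transformers import pipeline

# Hypothetical usage sketch; not part of the original card.
translator = pipeline("translation", model="Edomonndo/opus-mt-ja-en-finetuned-ja-to-en_test")
print(translator("これはテストです。")[0]["translation_text"])
```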
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.1237 | 1.0 | 247 | 0.6131 | 60.9383 | 16.4152 |
| 0.5395 | 2.0 | 494 | 0.5274 | 67.5705 | 16.2883 |
| 0.3584 | 3.0 | 741 | 0.5122 | 71.3098 | 16.3777 |
| 0.2563 | 4.0 | 988 | 0.4887 | 73.6639 | 16.401 |
| 0.138 | 5.0 | 1235 | 0.4796 | 76.7942 | 16.4873 |
| 0.0979 | 6.0 | 1482 | 0.4849 | 76.9404 | 16.6162 |
| 0.0792 | 7.0 | 1729 | 0.4806 | 78.9831 | 16.5442 |
| 0.0569 | 8.0 | 1976 | 0.4765 | 79.3461 | 16.4873 |
| 0.0299 | 9.0 | 2223 | 0.4751 | 79.7901 | 16.4863 |
| 0.0204 | 10.0 | 2470 | 0.4737 | 80.2723 | 16.5492 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "model_index": [{"name": "opus-mt-ja-en-finetuned-ja-to-en_test", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}, "metric": {"name": "Bleu", "type": "bleu", "value": 80.2723}}]}]} | Edomonndo/opus-mt-ja-en-finetuned-ja-to-en_test | null | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| opus-mt-ja-en-finetuned-ja-to-en\_test
======================================
This model is a fine-tuned version of Helsinki-NLP/opus-mt-ja-en on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4737
* Bleu: 80.2723
* Gen Len: 16.5492
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.9.1
* Pytorch 1.9.0+cu111
* Datasets 1.10.2
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu111\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu111\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ja-en-finetuned-ja-to-en_xml
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7520
- Bleu: 73.8646
- Gen Len: 27.0884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.0512 | 1.0 | 748 | 0.8333 | 59.8234 | 27.905 |
| 0.6076 | 2.0 | 1496 | 0.7817 | 62.5606 | 26.1834 |
| 0.4174 | 3.0 | 2244 | 0.7817 | 64.8346 | 28.2918 |
| 0.2971 | 4.0 | 2992 | 0.7653 | 67.6013 | 27.2222 |
| 0.2172 | 5.0 | 3740 | 0.7295 | 69.4017 | 27.0174 |
| 0.1447 | 6.0 | 4488 | 0.7522 | 68.8355 | 28.2865 |
| 0.0953 | 7.0 | 5236 | 0.7596 | 71.4743 | 27.1861 |
| 0.0577 | 8.0 | 5984 | 0.7469 | 72.0684 | 26.921 |
| 0.04 | 9.0 | 6732 | 0.7526 | 73.2821 | 27.1365 |
| 0.0213 | 10.0 | 7480 | 0.7520 | 73.8646 | 27.0884 |
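The Bleu and Gen Len columns above are typically produced by generating on the validation set and scoring with sacrebleu; a hedged sketch of that scoring step (metric choice and example strings are assumptions, not taken from the card):

```python
from datasets import load_metric

# Hypothetical scoring step; `predictions`/`references` stand in for decoded
# model outputs and reference translations.
bleu = load_metric("sacrebleu")
predictions = ["This is a small test ."]
references = [["This is a small test ."]]
print(round(bleu.compute(predictions=predictions, references=references)["score"], 4))
# Gen Len is typically the average token length of the generated outputs.
```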
### Framework versions
- Transformers 4.9.1
- Pytorch 1.10.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "model_index": [{"name": "opus-mt-ja-en-finetuned-ja-to-en_xml", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}, "metric": {"name": "Bleu", "type": "bleu", "value": 73.8646}}]}]} | Edomonndo/opus-mt-ja-en-finetuned-ja-to-en_xml | null | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| opus-mt-ja-en-finetuned-ja-to-en\_xml
=====================================
This model is a fine-tuned version of Helsinki-NLP/opus-mt-ja-en on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7520
* Bleu: 73.8646
* Gen Len: 27.0884
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.9.1
* Pytorch 1.10.0+cu111
* Datasets 1.10.2
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
# Wav2vec2 Large 100k Voxpopuli fine-tuned with Common Voice and TTS-Portuguese Corpus in Portuguese
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Portuguese using the Common Voice 7.0 and TTS-Portuguese Corpus.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-portuguese")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
```
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
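The evaluation snippet above relies on a few pieces the card does not define (the imports, the `wer` metric, `chars_to_ignore_regex`, and the `map_to_pred` function). A minimal sketch of those missing definitions is given below; the regex and the use of `Wav2Vec2Processor` are assumptions for illustration, not code taken from the original evaluation.

```python
# Hedged sketch of the helper pieces assumed by the snippet above.
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-portuguese"
processor = Wav2Vec2Processor.from_pretrained(model_id)   # assumes preprocessor files are shipped
model = Wav2Vec2ForCTC.from_pretrained(model_id)

wer = load_metric("wer")
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"]'             # punctuation stripped before scoring

def map_to_pred(batch):
    # Greedy CTC decoding; batch_size=1 in the call above, so "speech" holds one array.
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch
```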
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["Common Voice"], "metrics": ["wer"]} | Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-portuguese | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"arxiv:2204.00618",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2204.00618"
] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #arxiv-2204.00618 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2vec2 Large 100k Voxpopuli fine-tuned with Common Voice and TTS-Portuguese Corpus in Portuguese
Wav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using the Common Voice 7.0 and TTS-Portuguese Corpus.
# Use this model
# Results
For the results check the paper
# Example test with Common Voice Dataset
| [
"# Wav2vec2 Large 100k Voxpopuli fine-tuned with Common Voice and TTS-Portuguese Corpus in Portuguese \n\nWav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using the Common Voice 7.0 and TTS-Portuguese Corpus.",
"# Use this model",
"# Results\nFor the results check the paper",
"# Example test with Common Voice Dataset"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #arxiv-2204.00618 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2vec2 Large 100k Voxpopuli fine-tuned with Common Voice and TTS-Portuguese Corpus in Portuguese \n\nWav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using the Common Voice 7.0 and TTS-Portuguese Corpus.",
"# Use this model",
"# Results\nFor the results check the paper",
"# Example test with Common Voice Dataset"
] |
automatic-speech-recognition | transformers |
# Wav2vec2 Large 100k Voxpopuli fine-tuned with Common Voice and M-AILABS in Russian
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Russian using the Common Voice 7.0 and M-AILABS.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-russian")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-russian")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
```
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
| {"language": "ru", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "ru", "russian-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["Common Voice"], "metrics": ["wer"]} | Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-russian | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"ru",
"russian-speech-corpus",
"PyTorch",
"arxiv:2204.00618",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2204.00618"
] | [
"ru"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #ru #russian-speech-corpus #PyTorch #arxiv-2204.00618 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2vec2 Large 100k Voxpopuli fine-tuned with Common Voice and M-AILABS in Russian
Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0 and M-AILABS.
# Use this model
# Results
For the results check the paper
# Example test with Common Voice Dataset
| [
"# Wav2vec2 Large 100k Voxpopuli fine-tuned with Common Voice and M-AILABS in Russian \n\nWav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0 and M-AILABS.",
"# Use this model",
"# Results\nFor the results check the paper",
"# Example test with Common Voice Dataset"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #ru #russian-speech-corpus #PyTorch #arxiv-2204.00618 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2vec2 Large 100k Voxpopuli fine-tuned with Common Voice and M-AILABS in Russian \n\nWav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0 and M-AILABS.",
"# Use this model",
"# Results\nFor the results check the paper",
"# Example test with Common Voice Dataset"
] |
automatic-speech-recognition | transformers |
# Wav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using the Common Voice 7.0, TTS-Portuguese Corpus plus data augmentation
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Portuguese using the Common Voice 7.0, TTS-Portuguese plus a data augmentation method based on TTS and voice conversion.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-portuguese")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-7.0-2021-07-21")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
```
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "Portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["Common Voice"], "metrics": ["wer"]} | Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-portuguese | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"Portuguese-speech-corpus",
"PyTorch",
"arxiv:2204.00618",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2204.00618"
] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #Portuguese-speech-corpus #PyTorch #arxiv-2204.00618 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using the Common Voice 7.0, TTS-Portuguese Corpus plus data augmentation
Wav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using the Common Voice 7.0, TTS-Portuguese plus a data augmentation method based on TTS and voice conversion.
# Use this model
# Results
For the results check the paper
# Example test with Common Voice Dataset
| [
"# Wav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using the Common Voice 7.0, TTS-Portuguese Corpus plus data augmentation\n\nWav2vec2 Large 100k Voxpopuli Wav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using the Common Voice 7.0, TTS-Portuguese plus data augmentation method based on TTS and voice conversion.",
"# Use this model",
"# Results\nFor the results check the paper",
"# Example test with Common Voice Dataset"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #Portuguese-speech-corpus #PyTorch #arxiv-2204.00618 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using the Common Voice 7.0, TTS-Portuguese Corpus plus data augmentation\n\nWav2vec2 Large 100k Voxpopuli Wav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using the Common Voice 7.0, TTS-Portuguese plus data augmentation method based on TTS and voice conversion.",
"# Use this model",
"# Results\nFor the results check the paper",
"# Example test with Common Voice Dataset"
] |
automatic-speech-recognition | transformers |
# Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0, MAILABS plus data augmentation
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Russian using the Common Voice 7.0, M-AILABS plus a data augmentation method based on TTS and voice conversion.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-7.0-2021-07-21")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
```
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "Russian-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["Common Voice"], "metrics": ["wer"]} | Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"Russian-speech-corpus",
"PyTorch",
"arxiv:2204.00618",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2204.00618"
] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #Russian-speech-corpus #PyTorch #arxiv-2204.00618 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0, MAILABS plus data augmentation
Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0, M-AILABS plus a data augmentation method based on TTS and voice conversion.
# Use this model
# Results
For the results check the paper
# Example test with Common Voice Dataset
| [
"# Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0, MAILABS plus data augmentation\n\nWav2vec2 Large 100k Voxpopuli Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0, M-AILABS plus data augmentation method based on TTS and voice conversion.",
"# Use this model",
"# Results\nFor the results check the paper",
"# Example test with Common Voice Dataset"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #Russian-speech-corpus #PyTorch #arxiv-2204.00618 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0, MAILABS plus data augmentation\n\nWav2vec2 Large 100k Voxpopuli Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0, M-AILABS plus data augmentation method based on TTS and voice conversion.",
"# Use this model",
"# Results\nFor the results check the paper",
"# Example test with Common Voice Dataset"
] |
automatic-speech-recognition | transformers |
# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Portuguese
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Portuguese using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-7.0-2021-07-21")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
```
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["Common Voice"], "metrics": ["wer"]} | Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"arxiv:2204.00618",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2204.00618"
] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #arxiv-2204.00618 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Portuguese
Wav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion.
# Use this model
# Results
For the results check the paper
# Example test with Common Voice Dataset
| [
"# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Portuguese \n\nWav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion.",
"# Use this model",
"# Results\nFor the results check the paper",
"# Example test with Common Voice Dataset"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #arxiv-2204.00618 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Portuguese \n\nWav2vec2 Large 100k Voxpopuli fine-tuned in Portuguese using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion.",
"# Use this model",
"# Results\nFor the results check the paper",
"# Example test with Common Voice Dataset"
] |
automatic-speech-recognition | transformers |
# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Russian
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Russian using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-russian")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-russian")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-7.0-2021-07-21")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
```
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "Russian-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["Common Voice"], "metrics": ["wer"]} | Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-russian | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"Russian-speech-corpus",
"PyTorch",
"arxiv:2204.00618",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2204.00618"
] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #Russian-speech-corpus #PyTorch #arxiv-2204.00618 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Russian
Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion.
# Use this model
# Results
For the results check the paper
# Example test with Common Voice Dataset
| [
"# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Russian \n\nWav2vec2 Large 100k Voxpopuli fine-tuned in Russian using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion.",
"# Use this model",
"# Results\nFor the results check the paper",
"# Example test with Common Voice Dataset"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #Russian-speech-corpus #PyTorch #arxiv-2204.00618 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Russian \n\nWav2vec2 Large 100k Voxpopuli fine-tuned in Russian using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion.",
"# Use this model",
"# Results\nFor the results check the paper",
"# Example test with Common Voice Dataset"
] |
automatic-speech-recognition | transformers |
# Wav2vec 2.0 trained with CORAA Portuguese Dataset
This is a demonstration of a fine-tuned Wav2vec model for Portuguese using the [CORAA dataset](https://github.com/nilc-nlp/CORAA)
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
```
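For reference, a rough transcription sketch is shown below (not from the original authors); it assumes the checkpoint also ships a `Wav2Vec2Processor` configuration and that the input audio is resampled to 16 kHz.

```python
# Hedged single-file transcription sketch for this checkpoint.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Edresson/wav2vec2-large-xlsr-coraa-portuguese"
processor = Wav2Vec2Processor.from_pretrained(model_id)   # assumption: processor files are available
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("example.wav")               # any Portuguese speech clip
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze(0)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits            # (1, time, vocab)

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])                # greedy CTC decode
```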
# Results
For the results check the [CORAA article](https://arxiv.org/abs/2110.15731)
# Example test with Common Voice Dataset
```python
dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
```
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "portuguese-speech-corpus", "automatic-speech-recognition", "hf-asr-leaderboard", "speech", "PyTorch"], "datasets": ["CORAA"], "metrics": ["wer"], "model-index": [{"name": "Edresson Casanova XLSR Wav2Vec2 Large 53 Portuguese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "CORAA", "type": "CORAA", "args": "pt"}, "metrics": [{"type": "wer", "value": 25.26, "name": "Test CORAA WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "pt"}, "metrics": [{"type": "wer", "value": 20.08, "name": "Test WER on Common Voice 7"}]}]}]} | Edresson/wav2vec2-large-xlsr-coraa-portuguese | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"hf-asr-leaderboard",
"PyTorch",
"dataset:CORAA",
"arxiv:2110.15731",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2110.15731"
] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #hf-asr-leaderboard #PyTorch #dataset-CORAA #arxiv-2110.15731 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2vec 2.0 trained with CORAA Portuguese Dataset
This is a demonstration of a fine-tuned Wav2vec model for Portuguese using the CORAA dataset
# Use this model
# Results
For the results check the CORAA article
# Example test with Common Voice Dataset
| [
"# Wav2vec 2.0 trained with CORAA Portuguese Dataset\n\nThis a the demonstration of a fine-tuned Wav2vec model for Portuguese using the following CORAA dataset",
"# Use this model",
"# Results\nFor the results check the CORAA article",
"# Example test with Common Voice Dataset"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #hf-asr-leaderboard #PyTorch #dataset-CORAA #arxiv-2110.15731 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2vec 2.0 trained with CORAA Portuguese Dataset\n\nThis a the demonstration of a fine-tuned Wav2vec model for Portuguese using the following CORAA dataset",
"# Use this model",
"# Results\nFor the results check the CORAA article",
"# Example test with Common Voice Dataset"
] |
summarization | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PegasusXSUM_GNAD
This model is a fine-tuned version of [Einmalumdiewelt/PegasusXSUM_GNAD](https://huggingface.co/Einmalumdiewelt/PegasusXSUM_GNAD) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4386
- Rouge1: 26.7818
- Rouge2: 7.6864
- Rougel: 18.6264
- Rougelsum: 22.822
- Gen Len: 67.076
## Model description
More information needed
## Intended uses & limitations
More information needed
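
The card itself does not show inference code; a minimal, hedged sketch for German summarization with this checkpoint might look like the following (the generation settings are illustrative, not tuned values):

```python
# Hedged usage sketch; max/min lengths are illustrative assumptions.
from transformers import pipeline

summarizer = pipeline("summarization", model="Einmalumdiewelt/PegasusXSUM_GNAD")

article = "Hier steht ein längerer deutscher Nachrichtenartikel ..."
summary = summarizer(article, max_length=96, min_length=16, do_sample=False)
print(summary[0]["summary_text"])
```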
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| {"language": ["de"], "tags": ["generated_from_trainer", "summarization"], "metrics": ["rouge"], "model-index": [{"name": "PegasusXSUM_GNAD", "results": []}]} | Einmalumdiewelt/PegasusXSUM_GNAD | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"summarization",
"de",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"de"
] | TAGS
#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #summarization #de #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# PegasusXSUM_GNAD
This model is a fine-tuned version of Einmalumdiewelt/PegasusXSUM_GNAD on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4386
- Rouge1: 26.7818
- Rouge2: 7.6864
- Rougel: 18.6264
- Rougelsum: 22.822
- Gen Len: 67.076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| [
"# PegasusXSUM_GNAD\n\nThis model is a fine-tuned version of Einmalumdiewelt/PegasusXSUM_GNAD on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.4386\n- Rouge1: 26.7818\n- Rouge2: 7.6864\n- Rougel: 18.6264\n- Rougelsum: 22.822\n- Gen Len: 67.076",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.22.0.dev0\n- Pytorch 1.12.0+cu113\n- Datasets 2.4.0\n- Tokenizers 0.12.1"
] | [
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #summarization #de #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# PegasusXSUM_GNAD\n\nThis model is a fine-tuned version of Einmalumdiewelt/PegasusXSUM_GNAD on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.4386\n- Rouge1: 26.7818\n- Rouge2: 7.6864\n- Rougel: 18.6264\n- Rougelsum: 22.822\n- Gen Len: 67.076",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.22.0.dev0\n- Pytorch 1.12.0+cu113\n- Datasets 2.4.0\n- Tokenizers 0.12.1"
] |
summarization | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-Base_GNAD
This model is a fine-tuned version of [Einmalumdiewelt/T5-Base_GNAD](https://huggingface.co/Einmalumdiewelt/T5-Base_GNAD) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1025
- Rouge1: 27.5357
- Rouge2: 8.5623
- Rougel: 19.1508
- Rougelsum: 23.9029
- Gen Len: 52.7253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| {"language": ["de"], "tags": ["generated_from_trainer", "summarization"], "metrics": ["rouge"], "model-index": [{"name": "T5-Base_GNAD", "results": []}]} | Einmalumdiewelt/T5-Base_GNAD | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"de",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"de"
] | TAGS
#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #summarization #de #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# T5-Base_GNAD
This model is a fine-tuned version of Einmalumdiewelt/T5-Base_GNAD on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1025
- Rouge1: 27.5357
- Rouge2: 8.5623
- Rougel: 19.1508
- Rougelsum: 23.9029
- Gen Len: 52.7253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| [
"# T5-Base_GNAD\n\nThis model is a fine-tuned version of Einmalumdiewelt/T5-Base_GNAD on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.1025\n- Rouge1: 27.5357\n- Rouge2: 8.5623\n- Rougel: 19.1508\n- Rougelsum: 23.9029\n- Gen Len: 52.7253",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.22.0.dev0\n- Pytorch 1.12.0+cu113\n- Datasets 2.4.0\n- Tokenizers 0.12.1"
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #summarization #de #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# T5-Base_GNAD\n\nThis model is a fine-tuned version of Einmalumdiewelt/T5-Base_GNAD on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.1025\n- Rouge1: 27.5357\n- Rouge2: 8.5623\n- Rougel: 19.1508\n- Rougelsum: 23.9029\n- Gen Len: 52.7253",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.22.0.dev0\n- Pytorch 1.12.0+cu113\n- Datasets 2.4.0\n- Tokenizers 0.12.1"
] |
null | transformers |
# Enformer
Enformer model. It was introduced in the paper [Effective gene expression prediction from sequence by integrating long-range interactions.](https://www.nature.com/articles/s41592-021-01252-x) by Avsec et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/enformer).
This particular model was trained on sequences of 196,608 basepairs, target length 896, with shift augmentation but without reverse complement, on poisson loss objective. Final human pearson R of ~0.45.
This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the [enformer-pytorch repository](https://github.com/lucidrains/enformer-pytorch).
Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.
We refer to the [paper](https://www.nature.com/articles/s41592-021-01252-x) published in Nature for details.
### How to use
Refer to the README of [enformer-pytorch](https://github.com/lucidrains/enformer-pytorch) regarding usage.
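
As a rough orientation, a usage sketch in the style of the enformer-pytorch README is given below; the loading helper, input encoding and output head names are assumptions to be verified against that repository, since this card only links to it.

```python
# Hedged sketch; check names and shapes against the enformer-pytorch README.
import torch
from enformer_pytorch import Enformer

model = Enformer.from_pretrained("EleutherAI/enformer-191k")   # assumed hub-mixin loader

seq = torch.randint(0, 5, (1, 196_608))   # integer-encoded DNA (A, C, G, T, N -> 0..4)
with torch.no_grad():
    output = model(seq)

print(output["human"].shape)  # expected (1, 896, 5313) human tracks
print(output["mouse"].shape)  # expected (1, 896, 1643) mouse tracks
```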
### Citation info
```
Avsec, Ž., Agarwal, V., Visentin, D. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18, 1196–1203 (2021). https://doi.org/10.1038/s41592-021-01252-x
``` | {"license": "apache-2.0", "inference": false} | EleutherAI/enformer-191k | null | [
"transformers",
"pytorch",
"enformer",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #enformer #license-apache-2.0 #region-us
|
# Enformer
Enformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions. by Avsec et al. and first released in this repository.
This particular model was trained on sequences of 196,608 basepairs, target length 896, with shift augmentation but without reverse complement, on poisson loss objective. Final human pearson R of ~0.45.
This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the enformer-pytorch repository.
Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.
We refer to the paper published in Nature for details.
### How to use
Refer to the README of enformer-pytorch regarding usage.
info
| [
"# Enformer\n\nEnformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions. by Avsec et al. and first released in this repository. \n\nThis particular model was trained on sequences of 196,608 basepairs, target length 896, with shift augmentation but without reverse complement, on poisson loss objective. Final human pearson R of ~0.45.\n\nThis repo contains the weights of the PyTorch implementation by Phil Wang as seen in the enformer-pytorch repository.\n\nDisclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nEnformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.\n\nWe refer to the paper published in Nature for details.",
"### How to use\n\nRefer to the README of enformer-pytorch regarding usage.\n\n\ninfo"
] | [
"TAGS\n#transformers #pytorch #enformer #license-apache-2.0 #region-us \n",
"# Enformer\n\nEnformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions. by Avsec et al. and first released in this repository. \n\nThis particular model was trained on sequences of 196,608 basepairs, target length 896, with shift augmentation but without reverse complement, on poisson loss objective. Final human pearson R of ~0.45.\n\nThis repo contains the weights of the PyTorch implementation by Phil Wang as seen in the enformer-pytorch repository.\n\nDisclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nEnformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.\n\nWe refer to the paper published in Nature for details.",
"### How to use\n\nRefer to the README of enformer-pytorch regarding usage.\n\n\ninfo"
] |
null | transformers |
# Enformer
Enformer model. It was introduced in the paper [Effective gene expression prediction from sequence by integrating long-range interactions.](https://www.nature.com/articles/s41592-021-01252-x) by Avsec et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/enformer).
This particular model was trained on sequences of 196,608 basepairs, target length 896, with shift augmentation but without reverse complement, on poisson loss objective. Final human pearson R of ~0.49.
This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the [enformer-pytorch repository](https://github.com/lucidrains/enformer-pytorch).
Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.
We refer to the [paper](https://www.nature.com/articles/s41592-021-01252-x) published in Nature for details.
### How to use
Refer to the README of [enformer-pytorch](https://github.com/lucidrains/enformer-pytorch) regarding usage.
### Citation info
```
Avsec, Ž., Agarwal, V., Visentin, D. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18, 1196–1203 (2021). https://doi.org/10.1038/s41592-021-01252-x
``` | {"license": "apache-2.0", "inference": false} | EleutherAI/enformer-191k_corr_coef_obj | null | [
"transformers",
"pytorch",
"enformer",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #enformer #license-apache-2.0 #region-us
|
# Enformer
Enformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions. by Avsec et al. and first released in this repository.
This particular model was trained on sequences of 196,608 basepairs, target length 896, with shift augmentation but without reverse complement, on poisson loss objective. Final human pearson R of ~0.49.
This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the enformer-pytorch repository.
Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.
We refer to the paper published in Nature for details.
### How to use
Refer to the README of enformer-pytorch regarding usage.
info
| [
"# Enformer\n\nEnformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions. by Avsec et al. and first released in this repository. \n\nThis particular model was trained on sequences of 196,608 basepairs, target length 896, with shift augmentation but without reverse complement, on poisson loss objective. Final human pearson R of ~0.49.\n\nThis repo contains the weights of the PyTorch implementation by Phil Wang as seen in the enformer-pytorch repository.\n\nDisclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nEnformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.\n\nWe refer to the paper published in Nature for details.",
"### How to use\n\nRefer to the README of enformer-pytorch regarding usage.\n\n\ninfo"
] | [
"TAGS\n#transformers #pytorch #enformer #license-apache-2.0 #region-us \n",
"# Enformer\n\nEnformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions. by Avsec et al. and first released in this repository. \n\nThis particular model was trained on sequences of 196,608 basepairs, target length 896, with shift augmentation but without reverse complement, on poisson loss objective. Final human pearson R of ~0.49.\n\nThis repo contains the weights of the PyTorch implementation by Phil Wang as seen in the enformer-pytorch repository.\n\nDisclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nEnformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.\n\nWe refer to the paper published in Nature for details.",
"### How to use\n\nRefer to the README of enformer-pytorch regarding usage.\n\n\ninfo"
] |
null | transformers |
# Enformer
Enformer model. It was introduced in the paper [Effective gene expression prediction from sequence by integrating long-range interactions.](https://www.nature.com/articles/s41592-021-01252-x) by Avsec et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/enformer).
This particular model was trained on sequences of 131,072 basepairs, target length 896 on v3-64 TPUs for 3 days with sequence augmentations and pearson correlation objective.
This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the [enformer-pytorch repository](https://github.com/lucidrains/enformer-pytorch).
Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.
We refer to the [paper](https://www.nature.com/articles/s41592-021-01252-x) published in Nature for details.
### How to use
Refer to the README of [enformer-pytorch](https://github.com/lucidrains/enformer-pytorch) regarding usage.
### Citation info
```
Avsec, Ž., Agarwal, V., Visentin, D. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18, 1196–1203 (2021). https://doi.org/10.1038/s41592-021-01252-x
``` | {"license": "apache-2.0", "inference": false} | EleutherAI/enformer-corr_coef_obj | null | [
"transformers",
"pytorch",
"enformer",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #enformer #license-apache-2.0 #region-us
|
# Enformer
Enformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions. by Avsec et al. and first released in this repository.
This particular model was trained on sequences of 131,072 basepairs, target length 896 on v3-64 TPUs for 3 days with sequence augmentations and pearson correlation objective.
This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the enformer-pytorch repository.
Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.
We refer to the paper published in Nature for details.
### How to use
Refer to the README of enformer-pytorch regarding usage.
info
| [
"# Enformer\n\nEnformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions. by Avsec et al. and first released in this repository. \n\nThis particular model was trained on sequences of 131,072 basepairs, target length 896 on v3-64 TPUs for 3 days with sequence augmentations and pearson correlation objective.\n\nThis repo contains the weights of the PyTorch implementation by Phil Wang as seen in the enformer-pytorch repository.\n\nDisclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nEnformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.\n\nWe refer to the paper published in Nature for details.",
"### How to use\n\nRefer to the README of enformer-pytorch regarding usage.\n\n\ninfo"
] | [
"TAGS\n#transformers #pytorch #enformer #license-apache-2.0 #region-us \n",
"# Enformer\n\nEnformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions. by Avsec et al. and first released in this repository. \n\nThis particular model was trained on sequences of 131,072 basepairs, target length 896 on v3-64 TPUs for 3 days with sequence augmentations and pearson correlation objective.\n\nThis repo contains the weights of the PyTorch implementation by Phil Wang as seen in the enformer-pytorch repository.\n\nDisclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nEnformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.\n\nWe refer to the paper published in Nature for details.",
"### How to use\n\nRefer to the README of enformer-pytorch regarding usage.\n\n\ninfo"
] |
null | transformers |
# Enformer
Enformer model. It was introduced in the paper [Effective gene expression prediction from sequence by integrating long-range interactions.](https://www.nature.com/articles/s41592-021-01252-x) by Avsec et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/enformer).
This particular model was trained on sequences of 131,072 basepairs, target length 896 on v3-64 TPUs for 2 and a half days without augmentations and poisson loss.
This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the [enformer-pytorch repository](https://github.com/lucidrains/enformer-pytorch).
Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.
We refer to the [paper](https://www.nature.com/articles/s41592-021-01252-x) published in Nature for details.
### How to use
Refer to the README of [enformer-pytorch](https://github.com/lucidrains/enformer-pytorch) regarding usage.
### Citation info
```
Avsec, Ž., Agarwal, V., Visentin, D. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18, 1196–1203 (2021). https://doi.org/10.1038/s41592-021-01252-x
``` | {"license": "apache-2.0", "inference": false} | EleutherAI/enformer-preview | null | [
"transformers",
"pytorch",
"enformer",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #enformer #license-apache-2.0 #region-us
|
# Enformer
Enformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions. by Avsec et al. and first released in this repository.
This particular model was trained on sequences of 131,072 basepairs, target length 896 on v3-64 TPUs for 2 and a half days without augmentations and poisson loss.
This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the enformer-pytorch repository.
Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.
We refer to the paper published in Nature for details.
### How to use
Refer to the README of enformer-pytorch regarding usage.
info
| [
"# Enformer\n\nEnformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions. by Avsec et al. and first released in this repository. \n\nThis particular model was trained on sequences of 131,072 basepairs, target length 896 on v3-64 TPUs for 2 and a half days without augmentations and poisson loss. \n\nThis repo contains the weights of the PyTorch implementation by Phil Wang as seen in the enformer-pytorch repository.\n\nDisclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nEnformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.\n\nWe refer to the paper published in Nature for details.",
"### How to use\n\nRefer to the README of enformer-pytorch regarding usage.\n\n\ninfo"
] | [
"TAGS\n#transformers #pytorch #enformer #license-apache-2.0 #region-us \n",
"# Enformer\n\nEnformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions. by Avsec et al. and first released in this repository. \n\nThis particular model was trained on sequences of 131,072 basepairs, target length 896 on v3-64 TPUs for 2 and a half days without augmentations and poisson loss. \n\nThis repo contains the weights of the PyTorch implementation by Phil Wang as seen in the enformer-pytorch repository.\n\nDisclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nEnformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.\n\nWe refer to the paper published in Nature for details.",
"### How to use\n\nRefer to the README of enformer-pytorch regarding usage.\n\n\ninfo"
] |
text-generation | transformers |
# GPT-J 6B
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
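
As a quick sanity check (not part of the original card), the hyperparameters in the table are consistent with roughly 6 billion parameters once biases and layer norms are ignored:

```python
# Back-of-the-envelope parameter count; biases and layer norms are ignored,
# which accounts for the small gap to the reported 6,053,381,344.
d_model, n_layers, d_ff, n_vocab = 4096, 28, 16384, 50400

attn_per_layer = 4 * d_model * d_model        # q, k, v and output projections
mlp_per_layer  = 2 * d_model * d_ff           # up- and down-projection
embeddings     = 2 * n_vocab * d_model        # input embedding + LM head

total = n_layers * (attn_per_layer + mlp_per_layer) + embeddings
print(f"{total:,}")                           # 6,050,021,376
```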
## Intended Use and Limitations
GPT-J learns an inner representation of the English language that can be used to
extract features useful for downstream tasks. The model is best at what it was
pretrained for however, which is generating text from a prompt.
### Out-of-scope use
GPT-J-6B is **not** intended for deployment without fine-tuning, supervision,
and/or moderation. It is not in itself a product and cannot be used for
human-facing interactions. For example, the model may generate harmful or
offensive text. Please evaluate the risks associated with your particular use case.
GPT-J-6B was trained on an English-language only dataset, and is thus **not**
suitable for translation or generating text in other languages.
GPT-J-6B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means GPT-J-6B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
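
As a deliberately minimal illustration of the filtering advice above, the sketch below drops generations that contain blocklisted terms. The blocklist and helper function are hypothetical placeholders; real deployments typically combine trained safety classifiers with human review.

```python
# Hypothetical, deliberately crude post-generation filter; not a substitute
# for proper moderation (safety classifiers, human review).
BLOCKLIST = {"badword1", "badword2"}  # placeholder terms

def passes_crude_filter(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

generations = ["example model output ..."]
released = [g for g in generations if passes_crude_filter(g)]
```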
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
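
Once loaded, text can be sampled from a prompt with `generate`. The decoding settings below (temperature, length) are illustrative and should be tuned for your use case:

```python
prompt = "EleutherAI is a research collective that"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

gen_tokens = model.generate(
    input_ids,
    do_sample=True,   # sample rather than greedy decode
    temperature=0.9,  # illustrative setting
    max_length=100,
)
print(tokenizer.batch_decode(gen_tokens)[0])
```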
## Training data
GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Training procedure
This model was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
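
The training objective itself can be illustrated through the `transformers` API: passing the input tokens as `labels` makes a causal LM return the mean next-token cross-entropy (the labels are shifted internally). This is only a sketch of the objective, not the actual training setup, and running it with the full 6B checkpoint requires substantial memory.

```python
import torch

# Reusing the tokenizer and model loaded in the "How to use" section above.
enc = tokenizer("The Pile is a large curated text dataset.", return_tensors="pt")
with torch.no_grad():
    out = model(**enc, labels=enc["input_ids"])  # labels are shifted internally
print(out.loss)  # mean cross-entropy over next-token predictions
```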
## Evaluation results
<figure>
| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------|
| Random Chance | ✓ | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada‡ | ✗ | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | ✓ | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B‡ | ✓ | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B* | ✗ | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B‡ | ✓ | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B*‡ | ✗ | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage‡ | ✗ | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B* | ✗ | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B*‡ | ✗ | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B† | ✓ | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B‡** | **✓** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B*‡ | ✗ | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie‡ | ✗ | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B*‡ | ✗ | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B*‡ | ✗ | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci‡ | ✗ | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
<p><strong>*</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
details.</p>
<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>
<a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>)
Thus, evaluation was not attempted.</p>
<p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one are
trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
Thanks to everyone who has helped out in one way or another (listed alphabetically):
- [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
- [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
- [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table.
- [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
- [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
- [Janko Prester](https://github.com/jprester/) for creating the web demo frontend. | {"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["EleutherAI/pile"]} | EleutherAI/gpt-j-6b | null | [
"transformers",
"pytorch",
"tf",
"jax",
"gptj",
"text-generation",
"causal-lm",
"en",
"dataset:EleutherAI/pile",
"arxiv:2104.09864",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.09864",
"2101.00027"
] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #gptj #text-generation #causal-lm #en #dataset-EleutherAI/pile #arxiv-2104.09864 #arxiv-2101.00027 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| GPT-J 6B
========
Model Description
-----------------
GPT-J 6B is a transformer model trained using Ben Wang's Mesh Transformer JAX. "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
**\*** Each layer consists of one feedforward block and one self attention block.
**†** Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
Intended Use and Limitations
----------------------------
GPT-J learns an inner representation of the English language that can be used to
extract features useful for downstream tasks. The model is best at what it was
pretrained for however, which is generating text from a prompt.
### Out-of-scope use
GPT-J-6B is not intended for deployment without fine-tuning, supervision,
and/or moderation. It is not a in itself a product and cannot be used for
human-facing interactions. For example, the model may generate harmful or
offensive text. Please evaluate the risks associated with your particular use case.
GPT-J-6B was trained on an English-language only dataset, and is thus not
suitable for translation or generating text in other languages.
GPT-J-6B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means GPT-J-6B will not
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### How to use
This model can be easily loaded using the 'AutoModelForCausalLM' functionality:
Training data
-------------
GPT-J 6B was trained on the Pile, a large-scale curated dataset created by EleutherAI.
Training procedure
------------------
This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
Evaluation results
------------------
Models roughly sorted by performance, or by FLOPs if not available.
**\*** Evaluation numbers reported by their respective authors. All other numbers are provided by
running [for more
details.](URL either with released
weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See <a href=)
**†** Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations. (see <a href="URL
<a href="URL <a href="URL
Thus, evaluation was not attempted.</p>
**‡** These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one is
trained on the Pile, which has not been deduplicated against any test sets.
and Related Information
### BibTeX entry
To cite this model:
To cite the codebase that trained this model:
If you use this model, we would love to hear about it! Reach out on GitHub, Discord, or shoot Ben an email.
Acknowledgements
----------------
This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha.
Thanks to everyone who have helped out one way or another (listed alphabetically):
* James Bradbury for valuable assistance with debugging JAX issues.
* Stella Biderman, Eric Hallahan, Kurumuz, and Finetune for converting the model to be compatible with the 'transformers' package.
* Leo Gao for running zero shot evaluations for the baseline models for the table.
* Laurence Golding for adding some features to the web demo.
* Aran Komatsuzaki for advice with experiment design and writing the blog posts.
* Janko Prester for creating the web demo frontend.
| [
"### Out-of-scope use\n\n\nGPT-J-6B is not intended for deployment without fine-tuning, supervision,\nand/or moderation. It is not a in itself a product and cannot be used for\nhuman-facing interactions. For example, the model may generate harmful or\noffensive text. Please evaluate the risks associated with your particular use case.\n\n\nGPT-J-6B was trained on an English-language only dataset, and is thus not\nsuitable for translation or generating text in other languages.\n\n\nGPT-J-6B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means GPT-J-6B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and Biases\n\n\nThe core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most \"accurate\" text. Never depend upon GPT-J to produce factually accurate output.\n\n\nGPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\n\n\nAs with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.",
"### How to use\n\n\nThis model can be easily loaded using the 'AutoModelForCausalLM' functionality:\n\n\nTraining data\n-------------\n\n\nGPT-J 6B was trained on the Pile, a large-scale curated dataset created by EleutherAI.\n\n\nTraining procedure\n------------------\n\n\nThis model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.\n\n\nEvaluation results\n------------------\n\n\n\n\nModels roughly sorted by performance, or by FLOPs if not available.\n\n\n**\\*** Evaluation numbers reported by their respective authors. All other numbers are provided by\nrunning [for more\ndetails.](URL either with released\nweights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these\nmight not be directly comparable. See <a href=)\n\n\n**†** Megatron-11B provides no comparable metrics, and several implementations using the released weights do not\nreproduce the generation quality and evaluations. (see <a href=\"URL\n<a href=\"URL <a href=\"URL\nThus, evaluation was not attempted.</p>\n**‡** These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models\nfailed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one is\ntrained on the Pile, which has not been deduplicated against any test sets.\n\n\n\n\nand Related Information",
"### BibTeX entry\n\n\nTo cite this model:\n\n\nTo cite the codebase that trained this model:\n\n\nIf you use this model, we would love to hear about it! Reach out on GitHub, Discord, or shoot Ben an email.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha.\n\n\nThanks to everyone who have helped out one way or another (listed alphabetically):\n\n\n* James Bradbury for valuable assistance with debugging JAX issues.\n* Stella Biderman, Eric Hallahan, Kurumuz, and Finetune for converting the model to be compatible with the 'transformers' package.\n* Leo Gao for running zero shot evaluations for the baseline models for the table.\n* Laurence Golding for adding some features to the web demo.\n* Aran Komatsuzaki for advice with experiment design and writing the blog posts.\n* Janko Prester for creating the web demo frontend."
] | [
"TAGS\n#transformers #pytorch #tf #jax #gptj #text-generation #causal-lm #en #dataset-EleutherAI/pile #arxiv-2104.09864 #arxiv-2101.00027 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Out-of-scope use\n\n\nGPT-J-6B is not intended for deployment without fine-tuning, supervision,\nand/or moderation. It is not a in itself a product and cannot be used for\nhuman-facing interactions. For example, the model may generate harmful or\noffensive text. Please evaluate the risks associated with your particular use case.\n\n\nGPT-J-6B was trained on an English-language only dataset, and is thus not\nsuitable for translation or generating text in other languages.\n\n\nGPT-J-6B has not been fine-tuned for downstream contexts in which\nlanguage models are commonly deployed, such as writing genre prose,\nor commercial chatbots. This means GPT-J-6B will not\nrespond to a given prompt the way a product like ChatGPT does. This is because,\nunlike this model, ChatGPT was fine-tuned using methods such as Reinforcement\nLearning from Human Feedback (RLHF) to better “follow” human instructions.",
"### Limitations and Biases\n\n\nThe core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most \"accurate\" text. Never depend upon GPT-J to produce factually accurate output.\n\n\nGPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\n\n\nAs with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.",
"### How to use\n\n\nThis model can be easily loaded using the 'AutoModelForCausalLM' functionality:\n\n\nTraining data\n-------------\n\n\nGPT-J 6B was trained on the Pile, a large-scale curated dataset created by EleutherAI.\n\n\nTraining procedure\n------------------\n\n\nThis model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.\n\n\nEvaluation results\n------------------\n\n\n\n\nModels roughly sorted by performance, or by FLOPs if not available.\n\n\n**\\*** Evaluation numbers reported by their respective authors. All other numbers are provided by\nrunning [for more\ndetails.](URL either with released\nweights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these\nmight not be directly comparable. See <a href=)\n\n\n**†** Megatron-11B provides no comparable metrics, and several implementations using the released weights do not\nreproduce the generation quality and evaluations. (see <a href=\"URL\n<a href=\"URL <a href=\"URL\nThus, evaluation was not attempted.</p>\n**‡** These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models\nfailed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one is\ntrained on the Pile, which has not been deduplicated against any test sets.\n\n\n\n\nand Related Information",
"### BibTeX entry\n\n\nTo cite this model:\n\n\nTo cite the codebase that trained this model:\n\n\nIf you use this model, we would love to hear about it! Reach out on GitHub, Discord, or shoot Ben an email.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha.\n\n\nThanks to everyone who have helped out one way or another (listed alphabetically):\n\n\n* James Bradbury for valuable assistance with debugging JAX issues.\n* Stella Biderman, Eric Hallahan, Kurumuz, and Finetune for converting the model to be compatible with the 'transformers' package.\n* Leo Gao for running zero shot evaluations for the baseline models for the table.\n* Laurence Golding for adding some features to the web demo.\n* Aran Komatsuzaki for advice with experiment design and writing the blog posts.\n* Janko Prester for creating the web demo frontend."
] |
text-generation | transformers |
# GPT-Neo 1.3B
## Model Description
GPT-Neo 1.3B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 1.3B represents the number of parameters of this particular pre-trained model.
## Training data
GPT-Neo 1.3B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
## Training procedure
This model was trained on the Pile for 380 billion tokens over 362,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss.
## Intended Use and Limitations
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B')
>>> generator("EleutherAI has", do_sample=True, min_length=50)
[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```
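
The pipeline forwards generation arguments to `generate`, so sampling behaviour can be adjusted. The settings below are illustrative only:

```py
>>> outputs = generator(
...     "EleutherAI has",
...     do_sample=True,
...     temperature=0.8,        # illustrative setting
...     max_length=60,
...     num_return_sequences=2,
... )
>>> for out in outputs:
...     print(out["generated_text"])
```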
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Eval results
### Linguistic Reasoning
| Model and Size | Pile BPB | Pile PPL | Wikitext PPL | Lambada PPL | Lambada Acc | Winogrande | Hellaswag |
| ---------------- | ---------- | ---------- | ------------- | ----------- | ----------- | ---------- | ----------- |
| **GPT-Neo 1.3B** | **0.7527** | **6.159** | **13.10** | **7.498** | **57.23%** | **55.01%** | **38.66%** |
| GPT-2 1.5B | 1.0468 | ----- | 17.48 | 10.634 | 51.21% | 59.40% | 40.03% |
| GPT-Neo 2.7B | 0.7165 | 5.646 | 11.39 | 5.626 | 62.22% | 56.50% | 42.73% |
| GPT-3 Ada | 0.9631 | ----- | ----- | 9.954 | 51.60% | 52.90% | 35.93% |
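
The perplexity figures above come from specific evaluation protocols (tokenization, context handling) in the evaluation harness, so a naive calculation will not reproduce them. Still, the sketch below shows what the metric measures for a single snippet of text:

```py
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
>>> model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
>>> enc = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
>>> with torch.no_grad():
...     loss = model(**enc, labels=enc["input_ids"]).loss
>>> print(torch.exp(loss).item())  # perplexity of this one snippet
```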
### Physical and Scientific Reasoning
| Model and Size | MathQA | PubMedQA | Piqa |
| ---------------- | ---------- | ---------- | ----------- |
| **GPT-Neo 1.3B** | **24.05%** | **54.40%** | **71.11%** |
| GPT-2 1.5B | 23.64% | 58.33% | 70.78% |
| GPT-Neo 2.7B | 24.72% | 57.54% | 72.14% |
| GPT-3 Ada | 24.29% | 52.80% | 68.88% |
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, please use
```bibtex
@software{gpt-neo,
author = {Black, Sid and
Leo, Gao and
Wang, Phil and
Leahy, Connor and
Biderman, Stella},
title = {{GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow}},
month = mar,
year = 2021,
note = {{If you use this software, please cite it using
these metadata.}},
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.5297715},
url = {https://doi.org/10.5281/zenodo.5297715}
}
@article{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__gpt-neo-1.3B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 29.44 |
| ARC (25-shot) | 31.23 |
| HellaSwag (10-shot) | 48.47 |
| MMLU (5-shot) | 24.82 |
| TruthfulQA (0-shot) | 39.63 |
| Winogrande (5-shot) | 56.91 |
| GSM8K (5-shot) | 0.45 |
| DROP (3-shot) | 4.6 |
| {"language": ["en"], "license": "mit", "tags": ["text generation", "pytorch", "causal-lm"], "datasets": ["EleutherAI/pile"]} | EleutherAI/gpt-neo-1.3B | null | [
"transformers",
"pytorch",
"jax",
"rust",
"safetensors",
"gpt_neo",
"text-generation",
"text generation",
"causal-lm",
"en",
"dataset:EleutherAI/pile",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #jax #rust #safetensors #gpt_neo #text-generation #text generation #causal-lm #en #dataset-EleutherAI/pile #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| GPT-Neo 1.3B
============
Model Description
-----------------
GPT-Neo 1.3B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 1.3B represents the number of parameters of this particular pre-trained model.
Training data
-------------
GPT-Neo 1.3B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
Training procedure
------------------
This model was trained on the Pile for 380 billion tokens over 362,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss.
Intended Use and Limitations
----------------------------
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
Eval results
------------
### Linguistic Reasoning
### Physical and Scientific Reasoning
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, please use
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:",
"### Limitations and Biases\n\n\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\n\n\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\n\n\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nEval results\n------------",
"### Linguistic Reasoning",
"### Physical and Scientific Reasoning",
"### Down-Stream Applications\n\n\nTBD",
"### BibTeX entry and citation info\n\n\nTo cite this model, please use\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] | [
"TAGS\n#transformers #pytorch #jax #rust #safetensors #gpt_neo #text-generation #text generation #causal-lm #en #dataset-EleutherAI/pile #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:",
"### Limitations and Biases\n\n\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\n\n\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\n\n\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nEval results\n------------",
"### Linguistic Reasoning",
"### Physical and Scientific Reasoning",
"### Down-Stream Applications\n\n\nTBD",
"### BibTeX entry and citation info\n\n\nTo cite this model, please use\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] |
text-generation | transformers |
# GPT-Neo 125M
## Model Description
GPT-Neo 125M is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pre-trained model.
## Training data
GPT-Neo 125M was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
## Training procedure
This model was trained on the Pile for 300 billion tokens over 572,300 steps. It was trained as a masked autoregressive language model, using cross-entropy loss.
## Intended Use and Limitations
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-125M')
>>> generator("EleutherAI has", do_sample=True, min_length=20)
[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```
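
The model can also be driven directly through `AutoModelForCausalLM`, which is convenient for the 125M checkpoint since it runs comfortably on CPU. The decoding settings are illustrative:

```py
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
>>> model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
>>> input_ids = tokenizer("EleutherAI has", return_tensors="pt").input_ids
>>> output_ids = model.generate(input_ids, do_sample=True, top_k=50, max_length=40)
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```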
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Eval results
TBD
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, use
```bibtex
@software{gpt-neo,
author = {Black, Sid and
Leo, Gao and
Wang, Phil and
Leahy, Connor and
Biderman, Stella},
title = {{GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow}},
month = mar,
year = 2021,
note = {{If you use this software, please cite it using
these metadata.}},
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.5297715},
url = {https://doi.org/10.5281/zenodo.5297715}
}
@article{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__gpt-neo-125m)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 25.79 |
| ARC (25-shot) | 22.95 |
| HellaSwag (10-shot) | 30.26 |
| MMLU (5-shot) | 25.97 |
| TruthfulQA (0-shot) | 45.58 |
| Winogrande (5-shot) | 51.78 |
| GSM8K (5-shot) | 0.3 |
| DROP (3-shot) | 3.69 |
| {"language": ["en"], "license": "mit", "tags": ["text generation", "pytorch", "causal-lm"], "datasets": ["EleutherAI/pile"]} | EleutherAI/gpt-neo-125m | null | [
"transformers",
"pytorch",
"jax",
"rust",
"safetensors",
"gpt_neo",
"text-generation",
"text generation",
"causal-lm",
"en",
"dataset:EleutherAI/pile",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #jax #rust #safetensors #gpt_neo #text-generation #text generation #causal-lm #en #dataset-EleutherAI/pile #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| GPT-Neo 125M
============
Model Description
-----------------
GPT-Neo 125M is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pre-trained model.
Training data
-------------
GPT-Neo 125M was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
Training procedure
------------------
This model was trained on the Pile for 300 billion tokens over 572,300 steps. It was trained as a masked autoregressive language model, using cross-entropy loss.
Intended Use and Limitations
----------------------------
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
Eval results
------------
TBD
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, use
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:",
"### Limitations and Biases\n\n\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\n\n\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\n\n\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nEval results\n------------\n\n\nTBD",
"### Down-Stream Applications\n\n\nTBD",
"### BibTeX entry and citation info\n\n\nTo cite this model, use\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] | [
"TAGS\n#transformers #pytorch #jax #rust #safetensors #gpt_neo #text-generation #text generation #causal-lm #en #dataset-EleutherAI/pile #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:",
"### Limitations and Biases\n\n\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\n\n\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\n\n\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nEval results\n------------\n\n\nTBD",
"### Down-Stream Applications\n\n\nTBD",
"### BibTeX entry and citation info\n\n\nTo cite this model, use\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] |
text-generation | transformers |
# GPT-Neo 2.7B
## Model Description
GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.
## Training data
GPT-Neo 2.7B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
## Training procedure
This model was trained for 420 billion tokens over 400,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss.
## Intended Use and Limitations
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
>>> generator("EleutherAI has", do_sample=True, min_length=50)
[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```
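
For the 2.7B checkpoint it is often convenient to load the weights in half precision on a GPU to reduce memory use (roughly 6 GB of VRAM in fp16, as an estimate). This sketch assumes a CUDA device is available and a reasonably recent `transformers` release that supports the `torch_dtype` argument:

```py
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
>>> model = AutoModelForCausalLM.from_pretrained(
...     "EleutherAI/gpt-neo-2.7B", torch_dtype=torch.float16
... ).to("cuda")
>>> inputs = tokenizer("EleutherAI has", return_tensors="pt").to("cuda")
>>> output = model.generate(**inputs, do_sample=True, max_length=60)
>>> print(tokenizer.decode(output[0], skip_special_tokens=True))
```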
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Eval results
All evaluations were done using our [evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness). Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our [Discord](https://discord.gg/vtRgjbM).
### Linguistic Reasoning
| Model and Size | Pile BPB | Pile PPL | Wikitext PPL | Lambada PPL | Lambada Acc | Winogrande | Hellaswag |
| ---------------- | ---------- | ---------- | ------------- | ----------- | ----------- | ---------- | ----------- |
| GPT-Neo 1.3B | 0.7527 | 6.159 | 13.10 | 7.498 | 57.23% | 55.01% | 38.66% |
| GPT-2 1.5B | 1.0468 | ----- | 17.48 | 10.634 | 51.21% | 59.40% | 40.03% |
| **GPT-Neo 2.7B** | **0.7165** | **5.646** | **11.39** | **5.626** | **62.22%** | **56.50%** | **42.73%** |
| GPT-3 Ada | 0.9631 | ----- | ----- | 9.954 | 51.60% | 52.90% | 35.93% |
### Physical and Scientific Reasoning
| Model and Size | MathQA | PubMedQA | Piqa |
| ---------------- | ---------- | ---------- | ----------- |
| GPT-Neo 1.3B | 24.05% | 54.40% | 71.11% |
| GPT-2 1.5B | 23.64% | 58.33% | 70.78% |
| **GPT-Neo 2.7B** | **24.72%** | **57.54%** | **72.14%** |
| GPT-3 Ada | 24.29% | 52.80% | 68.88% |
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, use
```bibtex
@software{gpt-neo,
author = {Black, Sid and
Leo, Gao and
Wang, Phil and
Leahy, Connor and
Biderman, Stella},
title = {{GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow}},
month = mar,
year = 2021,
note = {{If you use this software, please cite it using
these metadata.}},
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.5297715},
url = {https://doi.org/10.5281/zenodo.5297715}
}
@article{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
``` | {"language": ["en"], "license": "mit", "tags": ["text generation", "pytorch", "causal-lm"], "datasets": ["EleutherAI/pile"]} | EleutherAI/gpt-neo-2.7B | null | [
"transformers",
"pytorch",
"jax",
"rust",
"safetensors",
"gpt_neo",
"text-generation",
"text generation",
"causal-lm",
"en",
"dataset:EleutherAI/pile",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #jax #rust #safetensors #gpt_neo #text-generation #text generation #causal-lm #en #dataset-EleutherAI/pile #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| GPT-Neo 2.7B
============
Model Description
-----------------
GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.
Training data
-------------
GPT-Neo 2.7B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
Training procedure
------------------
This model was trained for 420 billion tokens over 400,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss.
Intended Use and Limitations
----------------------------
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
Eval results
------------
All evaluations were done using our evaluation harness. Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our Discord.
### Linguistic Reasoning
### Physical and Scientific Reasoning
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, use
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:",
"### Limitations and Biases\n\n\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\n\n\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\n\n\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nEval results\n------------\n\n\nAll evaluations were done using our evaluation harness. Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our Discord.",
"### Linguistic Reasoning",
"### Physical and Scientific Reasoning",
"### Down-Stream Applications\n\n\nTBD",
"### BibTeX entry and citation info\n\n\nTo cite this model, use"
] | [
"TAGS\n#transformers #pytorch #jax #rust #safetensors #gpt_neo #text-generation #text generation #causal-lm #en #dataset-EleutherAI/pile #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:",
"### Limitations and Biases\n\n\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\n\n\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\n\n\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nEval results\n------------\n\n\nAll evaluations were done using our evaluation harness. Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our Discord.",
"### Linguistic Reasoning",
"### Physical and Scientific Reasoning",
"### Down-Stream Applications\n\n\nTBD",
"### BibTeX entry and citation info\n\n\nTo cite this model, use"
] |
text-classification | transformers | \n## BLEURT
Pytorch version of the original BLEURT models from ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-base-128")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-base-128")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([0.3598, 0.0723])
```
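
Since this export uses a maximum sequence length of 128 tokens (the "-128" suffix), longer reference/candidate pairs should be padded and truncated explicitly when scoring in batches. A minimal sketch, reusing the `tokenizer`, `model`, `references` and `candidates` from the example above:

```python
import torch

inputs = tokenizer(
    references, candidates,
    padding=True, truncation=True, max_length=128,
    return_tensors="pt",
)
with torch.no_grad():
    scores = model(**inputs)[0].squeeze(-1)  # one score per pair
print(scores.tolist())
```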
| {} | Elron/bleurt-base-128 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| \n## BLEURT
Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion was originated from this notebook mentioned here.
## Usage Example
| [
"## BLEURT\n\nPytorch version of the original BLEURT models from ACL paper \"BLEURT: Learning Robust Metrics for Text Generation\" by \nThibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.\n\nThe code for model conversion was originated from this notebook mentioned here.",
"## Usage Example"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"## BLEURT\n\nPytorch version of the original BLEURT models from ACL paper \"BLEURT: Learning Robust Metrics for Text Generation\" by \nThibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.\n\nThe code for model conversion was originated from this notebook mentioned here.",
"## Usage Example"
] |
text-classification | transformers | \n## BLEURT
Pytorch version of the original BLEURT models from ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-base-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-base-512")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([1.0327, 0.2055])
```
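
For large evaluation sets it is usually worth scoring in fixed-size batches to bound memory use. A minimal sketch, reusing the `tokenizer` and `model` loaded above; the helper name and batch size are arbitrary choices:

```python
import torch

def bleurt_scores(refs, cands, batch_size=16, max_length=512):
    """Score reference/candidate pairs in batches; illustrative helper only."""
    all_scores = []
    for i in range(0, len(refs), batch_size):
        batch = tokenizer(
            refs[i:i + batch_size], cands[i:i + batch_size],
            padding=True, truncation=True, max_length=max_length,
            return_tensors="pt",
        )
        with torch.no_grad():
            out = model(**batch)[0].squeeze(-1)
        all_scores.extend(out.tolist())
    return all_scores

print(bleurt_scores(references, candidates))
```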
| {} | Elron/bleurt-base-512 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| \n## BLEURT
Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion originated from this notebook, mentioned here.
## Usage Example
| [
"## BLEURT\n\nPytorch version of the original BLEURT models from ACL paper \"BLEURT: Learning Robust Metrics for Text Generation\" by \nThibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.\n\nThe code for model conversion was originated from this notebook mentioned here.",
"## Usage Example"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"## BLEURT\n\nPytorch version of the original BLEURT models from ACL paper \"BLEURT: Learning Robust Metrics for Text Generation\" by \nThibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.\n\nThe code for model conversion was originated from this notebook mentioned here.",
"## Usage Example"
] |
text-classification | transformers | \n## BLEURT
Pytorch version of the original BLEURT models from ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing), mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-large-128")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-large-128")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([ 0.0020, -0.6647])
```
| {} | Elron/bleurt-large-128 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| \n## BLEURT
Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion originated from this notebook, mentioned here.
## Usage Example
| [
"## BLEURT\n\nPytorch version of the original BLEURT models from ACL paper \"BLEURT: Learning Robust Metrics for Text Generation\" by \nThibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.\n\nThe code for model conversion was originated from this notebook mentioned here.",
"## Usage Example"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"## BLEURT\n\nPytorch version of the original BLEURT models from ACL paper \"BLEURT: Learning Robust Metrics for Text Generation\" by \nThibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.\n\nThe code for model conversion was originated from this notebook mentioned here.",
"## Usage Example"
] |
text-classification | transformers | ## BLEURT
Pytorch version of the original BLEURT models from ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing), mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-large-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-large-512")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([0.9877, 0.0475])
```
| {} | Elron/bleurt-large-512 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| ## BLEURT
Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion originated from this notebook, mentioned here.
## Usage Example
| [
"## BLEURT\n\nPytorch version of the original BLEURT models from ACL paper \"BLEURT: Learning Robust Metrics for Text Generation\" by \nThibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.\n\nThe code for model conversion was originated from this notebook mentioned here.",
"## Usage Example"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"## BLEURT\n\nPytorch version of the original BLEURT models from ACL paper \"BLEURT: Learning Robust Metrics for Text Generation\" by \nThibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.\n\nThe code for model conversion was originated from this notebook mentioned here.",
"## Usage Example"
] |
text-classification | transformers | \n## BLEURT
Pytorch version of the original BLEURT models from ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing), mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-tiny-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-tiny-512")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([-1.0563, -0.3004])
```
| {} | Elron/bleurt-tiny-128 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| \n## BLEURT
Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion originated from this notebook, mentioned here.
## Usage Example
| [
"## BLEURT\n\nPytorch version of the original BLEURT models from ACL paper \"BLEURT: Learning Robust Metrics for Text Generation\" by \nThibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.\n\nThe code for model conversion was originated from this notebook mentioned here.",
"## Usage Example"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"## BLEURT\n\nPytorch version of the original BLEURT models from ACL paper \"BLEURT: Learning Robust Metrics for Text Generation\" by \nThibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.\n\nThe code for model conversion was originated from this notebook mentioned here.",
"## Usage Example"
] |
text-classification | transformers |
# Model Card for bleurt-tiny-512
# Model Details
## Model Description
Pytorch version of the original BLEURT models from ACL paper
- **Developed by:** Elron Bandel, Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research
- **Shared by [Optional]:** Elron Bandel
- **Model type:** Text Classification
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Parent Model:** BERT
- **Resources for more information:**
- [GitHub Repo](https://github.com/google-research/bleurt/tree/master)
- [Associated Paper](https://aclanthology.org/2020.acl-main.704/)
- [Blog Post](https://ai.googleblog.com/2020/05/evaluating-natural-language-generation.html)
# Uses
## Direct Use
This model can be used for the task of Text Classification
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model authors note in the [associated paper](https://aclanthology.org/2020.acl-main.704.pdf):
> We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which include several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
The test sets for years 2018 and 2019 [of the WMT Metrics Shared Task, to-English language pairs.] are noisier,
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed.
# Citation
**BibTeX:**
```bibtex
@inproceedings{sellam2020bleurt,
title = {BLEURT: Learning Robust Metrics for Text Generation},
author = {Thibault Sellam and Dipanjan Das and Ankur P Parikh},
year = {2020},
booktitle = {Proceedings of ACL}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Elron Bandel in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-tiny-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-tiny-512")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([-0.9414, -0.5678])
```
See [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) for model conversion code.
</details>
| {"tags": ["text-classification", "bert"]} | Elron/bleurt-tiny-512 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for bleurt-tiny-512
# Model Details
## Model Description
Pytorch version of the original BLEURT models from ACL paper
- Developed by: Elron Bandel, Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research
- Shared by [Optional]: Elron Bandel
- Model type: Text Classification
- Language(s) (NLP): More information needed
- License: More information needed
- Parent Model: BERT
- Resources for more information:
- GitHub Repo
- Associated Paper
- Blog Post
# Uses
## Direct Use
This model can be used for the task of Text Classification
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model authors note in the associated paper:
> We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which include several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
The test sets for years 2018 and 2019 [of the WMT Metrics Shared Task, to-English language pairs.] are noisier,
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: More information needed
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed.
BibTeX:
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Elron Bandel in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
See this notebook for model conversion code.
</details>
| [
"# Model Card for bleurt-tiny-512",
"# Model Details",
"## Model Description\n \nPytorch version of the original BLEURT models from ACL paper\n \n- Developed by: Elron Bandel, Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research\n- Shared by [Optional]: Elron Bandel\n- Model type: Text Classification \n- Language(s) (NLP): More information needed\n- License: More information needed \n- Parent Model: BERT\n- Resources for more information:\n - GitHub Repo\n \t - Associated Paper\n - Blog Post",
"# Uses",
"## Direct Use\nThis model can be used for the task of Text Classification",
"## Downstream Use [Optional]\n \nMore information needed.",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\nThe model authors note in the associated paper: \n> We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the of- ficial WMT test set, which include several thou- sand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year.",
"## Training Procedure",
"### Preprocessing\n \nMore information needed",
"### Speeds, Sizes, Times\nMore information needed",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nThe test sets for years 2018 and 2019 [of the WMT Metrics Shared Task, to-English language pairs.] are noisier,",
"### Factors\nMore information needed",
"### Metrics\n \nMore information needed",
"## Results \n \nMore information needed",
"# Model Examination\n \nMore information needed",
"# Environmental Impact\n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n\nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \n \nMore information needed",
"### Software\n \nMore information needed.\n \nBibTeX:",
"# Glossary [optional]\nMore information needed",
"# More Information [optional]\nMore information needed",
"# Model Card Authors [optional]\n \n Elron Bandel in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n\nSee this notebook for model conversion code. \n</details>"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for bleurt-tiny-512",
"# Model Details",
"## Model Description\n \nPytorch version of the original BLEURT models from ACL paper\n \n- Developed by: Elron Bandel, Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research\n- Shared by [Optional]: Elron Bandel\n- Model type: Text Classification \n- Language(s) (NLP): More information needed\n- License: More information needed \n- Parent Model: BERT\n- Resources for more information:\n - GitHub Repo\n \t - Associated Paper\n - Blog Post",
"# Uses",
"## Direct Use\nThis model can be used for the task of Text Classification",
"## Downstream Use [Optional]\n \nMore information needed.",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\nThe model authors note in the associated paper: \n> We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the of- ficial WMT test set, which include several thou- sand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year.",
"## Training Procedure",
"### Preprocessing\n \nMore information needed",
"### Speeds, Sizes, Times\nMore information needed",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nThe test sets for years 2018 and 2019 [of the WMT Metrics Shared Task, to-English language pairs.] are noisier,",
"### Factors\nMore information needed",
"### Metrics\n \nMore information needed",
"## Results \n \nMore information needed",
"# Model Examination\n \nMore information needed",
"# Environmental Impact\n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n\nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \n \nMore information needed",
"### Software\n \nMore information needed.\n \nBibTeX:",
"# Glossary [optional]\nMore information needed",
"# More Information [optional]\nMore information needed",
"# Model Card Authors [optional]\n \n Elron Bandel in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n\nSee this notebook for model conversion code. \n</details>"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Elzen7/DialoGPT-medium-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
token-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 21124427
- CO2 Emissions (in grams): 6.2107269129101805
## Validation Metrics
- Loss: 0.09813392907381058
- Accuracy: 0.9714309035997062
- Precision: 0.9721275936822545
- Recall: 0.9735345807918949
- F1: 0.9728305785123967
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Emanuel/autonlp-pos-tag-bosque-21124427
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("Emanuel/autonlp-pos-tag-bosque")
tokenizer = AutoTokenizer.from_pretrained("Emanuel/autonlp-pos-tag-bosque")
inputs = tokenizer("A noiva casa de branco", return_tensors="pt")
outputs = model(**inputs)
labelids = outputs.logits.squeeze().argmax(axis=-1)
labels = [model.config.id2label[int(x)] for x in labelids]
labels = labels[1:-1]  # Filter start and end of sentence symbols
``` | {"language": "pt", "tags": "autonlp", "datasets": ["Emanuel/autonlp-data-pos-tag-bosque"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 6.2107269129101805} | Emanuel/autonlp-pos-tag-bosque | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autonlp",
"pt",
"dataset:Emanuel/autonlp-data-pos-tag-bosque",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #bert #token-classification #autonlp #pt #dataset-Emanuel/autonlp-data-pos-tag-bosque #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 21124427
- CO2 Emissions (in grams): 6.2107269129101805
## Validation Metrics
- Loss: 0.09813392907381058
- Accuracy: 0.9714309035997062
- Precision: 0.9721275936822545
- Recall: 0.9735345807918949
- F1: 0.9728305785123967
## Usage
You can use cURL to access this model:
Or Python API:
| [
"# Model Trained Using AutoNLP\n\n- Problem type: Entity Extraction\n- Model ID: 21124427\n- CO2 Emissions (in grams): 6.2107269129101805",
"## Validation Metrics\n\n- Loss: 0.09813392907381058\n- Accuracy: 0.9714309035997062\n- Precision: 0.9721275936822545\n- Recall: 0.9735345807918949\n- F1: 0.9728305785123967",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
"TAGS\n#transformers #pytorch #bert #token-classification #autonlp #pt #dataset-Emanuel/autonlp-data-pos-tag-bosque #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Entity Extraction\n- Model ID: 21124427\n- CO2 Emissions (in grams): 6.2107269129101805",
"## Validation Metrics\n\n- Loss: 0.09813392907381058\n- Accuracy: 0.9714309035997062\n- Precision: 0.9721275936822545\n- Recall: 0.9735345807918949\n- F1: 0.9728305785123967",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification | transformers |
# bertweet-emotion-base
This model is a fine-tuned version of [Bertweet](https://huggingface.co/vinai/bertweet-base). It achieves the following results on the evaluation set:
- Loss: 0.1172
- Accuracy: 0.945
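As a minimal usage sketch (an addition to this card, not part of the original), the checkpoint can be loaded with the standard `transformers` text-classification pipeline; the example sentence is hypothetical:
```python
from transformers import pipeline
# Load the fine-tuned emotion classifier by its Hub id.
classifier = pipeline("text-classification", model="Emanuel/bertweet-emotion-base")
# Returns the predicted emotion label and its confidence score.
print(classifier("I can't believe how lucky I am today!"))
```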
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 80
- eval_batch_size: 80
- lr_scheduler_type: linear
- num_epochs: 6.0
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.15.1
- Tokenizers 0.10.3 | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy"], "model-index": [{"name": "bertweet-emotion-base", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.945, "name": "Accuracy"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "default", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9285, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGJhMTM3YzAyMDg0YTA1MTY4ZjMyZGY1OThjYTI0ODZlOTFlMzAwZWFkNzc3MzQ4YjNiMzViMGIxYTY4M2Q1NiIsInZlcnNpb24iOjF9.1RDEvEoO3YooUsWgDUbuRoia0PBNo6dbGn9lFiXqfeCowHQMLpagMQpBHIoofCmlQA4ZHQbBtwY5lSCzJugzBQ"}, {"type": "precision", "value": 0.8884219402987917, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjQ2YzhiZDg3ZTJlOGYzNTBlNjEzZTNhYjIyMjFiNWJiZjNjNjg0MTFjMDFjNmI4MzEyZThkMTg5YTNkMzNhZCIsInZlcnNpb24iOjF9.yjvC1cZQllxTpkW3e5bLBA5Wmk9o6xTwusDSPVOQsbapD-XZ5TG06dgG8OF7yxQWvYLEiIp5K0VxnGA645ngBw"}, {"type": "precision", "value": 0.9285, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDE4MjcwYTgxZmM2Y2M5YzUxNmVjMWMxYjUxYzMxNWJlMGMzOGY2MWZkYTRlZTFkMWUwOTE3YjI4MmE5ZGQ3YiIsInZlcnNpb24iOjF9.SD7BSPVASL91UHNj4vJ226sPAUteEXGoEF2KWc1pKhdwUh0ZBFlnMBYbaNH6Fey0M-Cc6kqQHsYyMpBbgBG0Cw"}, {"type": "precision", "value": 0.9294663182278102, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDAzMjE3M2FmMjEwMzE2ZDA4NGI3ZDI1ZDlkMjhlZmEzNTlmZWM4NjRlMDNjODIzMTE1N2JiMTE5OTA2N2EzYSIsInZlcnNpb24iOjF9.O7Y0CljPErSGKRacqPcDuzlJEOFo_cnQMqmXcW94JFeq_jWHXEqxHb8Jszi2LCQOlDmFf81Yn1gr7qNbef0lDQ"}, {"type": "recall", "value": 0.8859392810987465, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjVkODBlZTVlZmNiYjMyNDU2MDRiYWY4M2Y3MDRhNGQ0OTFlNDBiOGIwNGUxNzczMGFjMjg1YzNhNWI4N2QzMiIsInZlcnNpb24iOjF9.qBdhvXbJXKpoCQpJadg5rLlvTgfl4kitQlelAeCLNLTUyq6lBEg8onL78j2ln7m-njgF6dC0M10n4riIbTseDA"}, {"type": "recall", "value": 0.9285, "name": "Recall Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2FlYjdmOWNiODUyNmI0OWViYjc2NWNhOTVlMDkyYWMxZjIyMDJlMjZkY2I3Yjg1ZjBlOTQ3MWY4ZDI3MDEwZCIsInZlcnNpb24iOjF9.ZaZNohPIOgvh5NQe6s5PWNyxwtMlrGQxsGz_zeqKshF9btY69cNQxyg9jlfXqrdmI4XhmC8K_MIEObkbfgqCBw"}, {"type": "recall", "value": 0.9285, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWQ2ODgzMjE2MGE2MmM4OGEyNWUxMWU5OGE3N2JmYTY0MWMzM2JkNjQ3ZDkzMWJkZmU5YWFlYTJhYzg3ODI5NCIsInZlcnNpb24iOjF9.ELxb_KXB0H-SaXOW97WUkTaNzAPH6itG0BpOtvcY-3J33Kr7Wi4eLEyX1fYjgY01LbkPmH4UN-rUQz2pXoRBCQ"}, {"type": "f1", "value": 0.8863603878501328, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGYxOWRmYzVkYWE2YWRmMTY5ODFkNWU2MGYyZWZmZmIxOTQwN2E1MTJlZjFlMTAzNjNmMzM0OGM3MTAxNzNhYSIsInZlcnNpb24iOjF9.sgcxi41I9bPbli1HO0jS9tXEVIVwdmp2nw5_nG16wO-eF5R8m7uezIUbwf8SfwTDijsZPKU7n5GI1ugKKTXbCQ"}, {"type": "f1", "value": 0.9285, "name": "F1 Micro", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWU0MGE3ZjViMzAzMTk1MzhiYjA1OTM4ZDRmZDU5NmRjODE0NThiOWY1MDVjNmU2OTI1OTAzYzY0NjY0NzMwZCIsInZlcnNpb24iOjF9.-_1WgnpD_qr18pp89fkgP651yW5YZ8Vm9i0M4gH8m8uosqOlnft8i7ppsDD5sp689aDoNjqtczPi_pGTvH8iAw"}, {"type": "f1", "value": 0.9284728367890772, "name": "F1 Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDMwZDUwYThkYWU2ZDBkYzRlZGQ2YjE2MGE2YjJjNWEyMDcwM2Y2YjY1NTE1ODNmZDgzNjdhZmI4ZjFhZTM1NCIsInZlcnNpb24iOjF9.HeNsdbp4LC3pY_ZXA55xccmAvzP3LZe6ohrSuUFBInMTyO8ZExnnk5ysiXv9AJp-O3GBamQe8LKv_mxyboErAQ"}, {"type": "loss", "value": 0.1349370777606964, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2RmN2U3YjVjNjg0NzU5NmMwOTcxM2NlMjNhNzdjMzVkMTVhYTJhNDhkMWEyMmFhZjg1NDgzODhjN2FlNzA4NiIsInZlcnNpb24iOjF9.mxi_oEnLE4QwXvm3LsT2wqa1zp7Ovul2SGpNdZjDOa0v-OWz6BfDwhNZFgQQFuls56Mi-yf9LkBevy0aNSBvAw"}]}]}]} | Emanuel/bertweet-emotion-base | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# bertweet-emotion-base
This model is a fine-tuned version of Bertweet. It achieves the following results on the evaluation set:
- Loss: 0.1172
- Accuracy: 0.945
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 80
- eval_batch_size: 80
- lr_scheduler_type: linear
- num_epochs: 6.0
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.15.1
- Tokenizers 0.10.3 | [
"# bertweet-emotion-base\n\nThis model is a fine-tuned version of Bertweet. It achieves the following results on the evaluation set:\n- Loss: 0.1172\n- Accuracy: 0.945",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 80\n- eval_batch_size: 80\n- lr_scheduler_type: linear\n- num_epochs: 6.0",
"### Framework versions\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu113\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# bertweet-emotion-base\n\nThis model is a fine-tuned version of Bertweet. It achieves the following results on the evaluation set:\n- Loss: 0.1172\n- Accuracy: 0.945",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 80\n- eval_batch_size: 80\n- lr_scheduler_type: linear\n- num_epochs: 6.0",
"### Framework versions\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu113\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-modeling
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4229
## Model description
More information needed
## Intended uses & limitations
More information needed
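As a minimal illustration (an addition, not from the original card), the checkpoint can be queried with the fill-mask pipeline; RoBERTa-style models use `<mask>` as the mask token, and the prompt below is hypothetical:
```python
from transformers import pipeline
# Load the fine-tuned masked language model by its Hub id.
unmasker = pipeline("fill-mask", model="Emanuel/roebrta-base-val-test")
# Returns the top-scoring candidate tokens for the masked position.
print(unmasker("The capital of France is <mask>."))
```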
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.8.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "language-modeling", "results": []}]} | Emanuel/roebrta-base-val-test | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# language-modeling
This model is a fine-tuned version of roberta-base on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.8.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
| [
"# language-modeling\n\nThis model is a fine-tuned version of roberta-base on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.4229",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- num_devices: 8\n- total_train_batch_size: 64\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.8.1+cu102\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# language-modeling\n\nThis model is a fine-tuned version of roberta-base on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.4229",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- num_devices: 8\n- total_train_batch_size: 64\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.8.1+cu102\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
text-classification | transformers |
# twitter-emotion-deberta-v3-base
This model is a fine-tuned version of [DeBERTa-v3](https://huggingface.co/microsoft/deberta-v3-base). It achieves the following results on the evaluation set:
- Loss: 0.1474
- Accuracy: 0.937
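As a minimal usage sketch (an addition, not part of the original card), the model can be served through the `transformers` text-classification pipeline; the input sentence is hypothetical:
```python
from transformers import pipeline
# Load the fine-tuned DeBERTa-v3 emotion classifier by its Hub id (the tokenizer requires sentencepiece).
classifier = pipeline("text-classification", model="Emanuel/twitter-emotion-deberta-v3-base")
# Prints the predicted emotion label and its confidence score.
print(classifier("This is the best news I've heard all week!"))
```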
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 80
- eval_batch_size: 80
- lr_scheduler_type: linear
- num_epochs: 6.0
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.15.1
- Tokenizers 0.10.3 | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy"], "model-index": [{"name": "twitter-emotion-deberta-v3-base", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.937, "name": "Accuracy"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "default", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9255, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTlhZDRlN2VkOGQ0OTg3Nzg2OWJmOTAzMDYxZjk5NzE4YmMyNDIxM2FhOTgyMDI2ZTQ3ZjkyNGMwYjI4Nzc2ZiIsInZlcnNpb24iOjF9.GaEt0ZAvLf30YcCff1mZtjms1XD57bY-b00IVak3WGtZJsgVshwAP_Vla2pylTAQvZITz4WESqSlEpyu6Bn-CA"}, {"type": "precision", "value": 0.8915483806374028, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTI4MTRlN2UyMDZhODM1NWIzNzdhZTUyZjNhYjdkMmZiODRjM2ViODMzOTU4MGE1NjQ4MjM1ZWUwODQzMzk3YyIsInZlcnNpb24iOjF9.qU0v868jMD8kFNrF8CqaP0jGxLzx_ExZTJ1BIBQKEHPSv59QyDLUt6ggjL09jUcmNj-gmps2XzFO16ape0O2Ag"}, {"type": "precision", "value": 0.9255, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY3NzgyMmFkYmY1NzU0ODM4NWVjZmI0MTgwYWU3OGY1MzI5NWRhNWMyYjM3NTQ0MzEzOWZmYTk5NDYxMjI0ZSIsInZlcnNpb24iOjF9.fnBjSgKbcOk3UF3pfn1rPbr87adek5YDTeSCqgSaCI4zzEqP_PWPNAinS1eBispGxEVh5iolmbO3frSZZ-TzDw"}, {"type": "precision", "value": 0.9286522707274408, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTE2ZmMxYzE2Mzc4OGQ2MzA1MDA3OGQ5Y2E4N2VkZDUwN2VjYmVhZGRlZTA2Nzg5NWJlZGNlMGYwNjc4YmNlYyIsInZlcnNpb24iOjF9.gRsf37CBTZpLIaAPNfdhli5cUV6K2Rbi8gHWHZydKTse9H9bkV6K_R6o_cMPhuXAyCCWx6SI-RbzInSC9K5lBw"}, {"type": "recall", "value": 0.875946770128528, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTZkNjMwOTFkZmEyYmRjNTBiOGFjYmYzYmZiMmUyY2U0ZWNhNDNmY2M3ZWZhODRjZDQ2MmFhNzZmM2ZjZDQ5OSIsInZlcnNpb24iOjF9.UTNojxmP-lR4wu13HPt7DAtgzFskdsR8IyohDDhA4sLj2_AQG7-FHdE7eE_SZ4H4FOtp-F1V-g6UoyDtFF0YCQ"}, {"type": "recall", "value": 0.9255, "name": "Recall Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjczZjBlNDhhM2YwZDJiNGEwNmMwMTE3ZDQwY2FkMjY5MGMzNjI2NDMyMmNkNTg2ZGRmMWZmOTk2OTEwNGQ0ZCIsInZlcnNpb24iOjF9.DXAXqasIV3OiJGuUGSFMIDVSsM3ailYD5rHDj9bkoDJ0duVyRQdD5l_Uxs2ILUtMYvy66HG8q9hT3oaQpDDFAQ"}, {"type": "recall", "value": 0.9255, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDZjNGRhNDhkOTY4NmU5ZWUwNTJkNTk3ZGUwZjQwMzYyZTQ3YTYxZTBjMzg3ZjY5YjUwZGM1ZmI4YzlhZmMwMiIsInZlcnNpb24iOjF9.0Jr2FqC3_4aCO7N_Cd-25rjzz2rtyI0w863DvQfVPJNPzkWrs8qaQ_3lcfcQaMbR9CiVfKYPsgWb7-dwrm-UDA"}, {"type": "f1", "value": 0.8790048313120858, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGNmMzc1MjgxZjM4Njk5ODM2NzIzOWMwYTIyN2E2NWJhYzcwNzgzMTQ0NWZjOGJhZmFkZjg5ZmNkNzYyYzdjMSIsInZlcnNpb24iOjF9.M3qaWCQwpe1vNptl5r8M62VhNe9-0eXQBZ1gIGRaEWOx9aRoTTFAqz_pl3wlhER0dSAjZlUuKElbYCI_R0KQDw"}, {"type": "f1", "value": 0.9255, "name": "F1 Micro", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGQzNWNhOWFhZjNmYTllZTliYjRjNWVkMzgyNzE4OTcyZWIwOWY0ZTFkMjVjZDgwOTQyYWI1YzhkZjFmNWY3MiIsInZlcnNpb24iOjF9.zLzGH5b86fzDqgyM-P31QEgpVCVNXRXIxsUzWN0NinSARJDmGp0hYAKu80GwRRnCPdavIoluet1FjQaDvt6aDA"}, {"type": "f1", "value": 0.92449885920049, "name": "F1 Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTQ2OTM0ZTU1MTQyNzQxNjVkNjY3ODdkYmJhOTE0ZTYxYzhiNzM3NGFhZGRiN2FiNzM5ZjFiNzczOGZhMDU1NCIsInZlcnNpb24iOjF9.33hcbfNttHRTdGFIgtD18ywdBnihqA3W2bJnwozAnpz6A1Fh9w-kHJ7WQ51XMK_MfHBNrMOO_k_x6fNS-Wm5Dg"}, {"type": "loss", "value": 0.16804923117160797, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWYwMWY5MzFkYjM3YjZmNmE3MmFlYTI3OTQ1OWRhZTUzODM3MjYwNTgxY2IxMjQ5NmI0ZDk3NDExZjg5YjJjZiIsInZlcnNpb24iOjF9.bHYpW_rQcKjc0QsMe8yVgWo-toI-LxAZE307_8kUKxQwzzb4cvrjLR66ciel2dVSMsjt479vGpbbAXU_8vh6Dw"}]}]}]} | Emanuel/twitter-emotion-deberta-v3-base | null | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #deberta-v2 #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# twitter-emotion-deberta-v3-base
This model is a fine-tuned version of DeBERTa-v3. It achieves the following results on the evaluation set:
- Loss: 0.1474
- Accuracy: 0.937
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 80
- eval_batch_size: 80
- lr_scheduler_type: linear
- num_epochs: 6.0
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.15.1
- Tokenizers 0.10.3 | [
"# twitter-emotion-deberta-v3-base\n\nThis model is a fine-tuned version of DeBERTa-v3. It achieves the following results on the evaluation set:\n- Loss: 0.1474\n- Accuracy: 0.937",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 80\n- eval_batch_size: 80\n- lr_scheduler_type: linear\n- num_epochs: 6.0",
"### Framework versions\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu113\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #deberta-v2 #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# twitter-emotion-deberta-v3-base\n\nThis model is a fine-tuned version of DeBERTa-v3. It achieves the following results on the evaluation set:\n- Loss: 0.1474\n- Accuracy: 0.937",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 80\n- eval_batch_size: 80\n- lr_scheduler_type: linear\n- num_epochs: 6.0",
"### Framework versions\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu113\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] |
text-generation | transformers |
# My Awesome Model | {"tags": ["conversational"]} | Emi2160/DialoGPT-small-Neku | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model | [
"# My Awesome Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | EmileAjar/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
text-generation | transformers |
# Peppa Pig DialoGPT Model | {"tags": ["conversational"]} | EmileAjar/DialoGPT-small-peppapig | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Peppa Pig DialoGPT Model | [
"# Peppa pig DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Peppa pig DialoGPT Model"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0603
- Precision: 0.9317
- Recall: 0.9510
- F1: 0.9413
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
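As a minimal usage sketch (an addition, not from the original card), predictions can be obtained with the token-classification pipeline; `aggregation_strategy="simple"` merges word pieces into whole entities, and the sentence is hypothetical:
```python
from transformers import pipeline
# Load the fine-tuned NER checkpoint by its Hub id.
ner = pipeline("token-classification", model="Emmanuel/bert-finetuned-ner", aggregation_strategy="simple")
# Returns CoNLL-2003 style entities (PER, ORG, LOC, MISC) with scores and character spans.
print(ner("Hugging Face is based in New York City."))
```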
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0872 | 1.0 | 1756 | 0.0660 | 0.9152 | 0.9350 | 0.9250 | 0.9827 |
| 0.0386 | 2.0 | 3512 | 0.0579 | 0.9374 | 0.9498 | 0.9436 | 0.9864 |
| 0.0225 | 3.0 | 5268 | 0.0603 | 0.9317 | 0.9510 | 0.9413 | 0.9866 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9317394888705688, "name": "Precision"}, {"type": "recall", "value": 0.9510265903736116, "name": "Recall"}, {"type": "f1", "value": 0.9412842508536686, "name": "F1"}, {"type": "accuracy", "value": 0.9865779713898863, "name": "Accuracy"}]}]}]} | Emmanuel/bert-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| bert-finetuned-ner
==================
This model is a fine-tuned version of bert-base-cased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0603
* Precision: 0.9317
* Recall: 0.9510
* F1: 0.9413
* Accuracy: 0.9866
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
null | null | This is my model | {} | Enes3774/gpt2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| This is my model | [] | [
"TAGS\n#region-us \n"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 7.9807
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
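As a minimal inference sketch (an addition, not from the original card; note that the reported WER of 1.0 suggests the checkpoint is not yet usable for transcription), the model can be loaded with the automatic-speech-recognition pipeline; the audio path is hypothetical and should point to 16 kHz mono speech:
```python
from transformers import pipeline
# Load the fine-tuned wav2vec2 checkpoint by its Hub id.
asr = pipeline("automatic-speech-recognition", model="EngNada/wav2vec2-large-xlsr-53-demo-colab")
# Returns {"text": "..."} with the decoded transcription (audio decoding requires ffmpeg).
print(asr("sample_16khz.wav"))
```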
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 22.8021 | 1.78 | 80 | 7.9807 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-53-demo-colab", "results": []}]} | EngNada/wav2vec2-large-xlsr-53-demo-colab | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-large-xlsr-53-demo-colab
=================================
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 7.9807
* Wer: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2131
- Accuracy: 0.9265
- F1: 0.9269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8031 | 1.0 | 250 | 0.2973 | 0.9125 | 0.9110 |
| 0.2418 | 2.0 | 500 | 0.2131 | 0.9265 | 0.9269 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
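The card itself ships no usage snippet; assuming the checkpoint is public on the Hub under the id `EnsarEmirali/distilbert-base-uncased-finetuned-emotion`, a minimal inference sketch (the example sentence is arbitrary) looks like:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="EnsarEmirali/distilbert-base-uncased-finetuned-emotion",
)
# Returns a list like [{'label': ..., 'score': ...}] with the emotion-dataset labels
print(classifier("I am thrilled with how this turned out!"))
```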
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9265, "name": "Accuracy"}, {"type": "f1", "value": 0.9268984054036417, "name": "F1"}]}]}]} | EnsarEmirali/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2131
* Accuracy: 0.9265
* F1: 0.9269
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.1
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
#Loki DialoGPT Model | {"tags": ["conversational"]} | Erikaka/DialoGPT-small-loki | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Loki DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | EstoyDePaso/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
text-generation | transformers |
# MrCobb DialoGPT Model | {"tags": ["conversational"]} | EuropeanTurtle/DialoGPT-small-mrcobb | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# MrCobb DialoGPT Model | [
"# MrCobb DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# MrCobb DialoGPT Model"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0845
- Precision: 0.8754
- Recall: 0.9058
- F1: 0.8904
- Accuracy: 0.9763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2529 | 1.0 | 878 | 0.0845 | 0.8754 | 0.9058 | 0.8904 | 0.9763 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
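As a usage illustration (the model id comes from this repository; the example sentence is an arbitrary assumption):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Evgeneus/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```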
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.875445994161531, "name": "Precision"}, {"type": "recall", "value": 0.9058060185703098, "name": "Recall"}, {"type": "f1", "value": 0.8903672751264571, "name": "F1"}, {"type": "accuracy", "value": 0.9763292928971993, "name": "Accuracy"}]}]}]} | Evgeneus/distilbert-base-uncased-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0845
* Precision: 0.8754
* Recall: 0.9058
* F1: 0.8904
* Accuracy: 0.9763
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
#jdt chat bot | {"tags": ["conversational"]} | ExEngineer/DialoGPT-medium-jdt | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#jdt chat bot | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Quirk DialoGPT Model | {"tags": ["conversational"]} | Exilon/DialoGPT-large-quirk | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Quirk DialoGPT Model | [
"# Quirk DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Quirk DialoGPT Model"
] |
null | null | read me | {} | EyeSeeThru/txt2img | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| read me | [] | [
"TAGS\n#region-us \n"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-russian-big-kaggle
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
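A minimal decoding sketch is shown below; it assumes the repository bundles a matching `Wav2Vec2Processor` (tokenizer vocabulary plus feature extractor), and the silent dummy array merely stands in for real 16 kHz speech.

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Eyvaz/wav2vec2-base-russian-big-kaggle"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech = np.zeros(16_000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```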
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-russian-big-kaggle", "results": []}]} | Eyvaz/wav2vec2-base-russian-big-kaggle | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-base-russian-big-kaggle
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
| [
"# wav2vec2-base-russian-big-kaggle\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 12\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.9.1\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-base-russian-big-kaggle\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 12\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.9.1\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-russian-demo-kaggle
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0102 | 1.03 | 500 | inf | 0.9997 |
| 0.0068 | 2.06 | 1000 | inf | 0.9997 |
| 0.0 | 3.09 | 1500 | inf | 0.9997 |
| 0.0313 | 4.12 | 2000 | inf | 0.9997 |
| 0.0 | 5.15 | 2500 | inf | 0.9997 |
| 0.0052 | 6.19 | 3000 | inf | 0.9997 |
| 0.0287 | 7.22 | 3500 | inf | 0.9997 |
| 0.0 | 8.25 | 4000 | inf | 0.9997 |
| 0.01 | 9.28 | 4500 | inf | 0.9997 |
| 0.0 | 10.31 | 5000 | inf | 0.9997 |
| 0.3919 | 11.34 | 5500 | inf | 0.9997 |
| 0.0 | 12.37 | 6000 | inf | 0.9997 |
| 0.0 | 13.4 | 6500 | inf | 0.9997 |
| 0.0 | 14.43 | 7000 | inf | 0.9997 |
| 0.6422 | 15.46 | 7500 | inf | 0.9997 |
| 0.0 | 16.49 | 8000 | inf | 0.9997 |
| 0.0 | 17.53 | 8500 | inf | 0.9997 |
| 0.0 | 18.56 | 9000 | inf | 0.9997 |
| 0.0 | 19.59 | 9500 | inf | 0.9997 |
| 0.0 | 20.62 | 10000 | inf | 0.9997 |
| 0.0427 | 21.65 | 10500 | inf | 0.9997 |
| 0.0 | 22.68 | 11000 | inf | 0.9997 |
| 0.0 | 23.71 | 11500 | inf | 0.9997 |
| 0.0 | 24.74 | 12000 | inf | 0.9997 |
| 0.0091 | 25.77 | 12500 | inf | 0.9997 |
| 0.1243 | 26.8 | 13000 | inf | 0.9997 |
| 0.0 | 27.83 | 13500 | inf | 0.9997 |
| 0.0 | 28.87 | 14000 | inf | 0.9997 |
| 0.0 | 29.9 | 14500 | inf | 0.9997 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-russian-demo-kaggle", "results": []}]} | Eyvaz/wav2vec2-base-russian-demo-kaggle | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-base-russian-demo-kaggle
=================================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: inf
* Wer: 0.9997
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 12
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 24
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.1
* Datasets 1.13.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.1\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.1\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-russian-modified-kaggle
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"]} | Eyvaz/wav2vec2-base-russian-modified-kaggle | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-base-russian-modified-kaggle
This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
| [
"# wav2vec2-base-russian-modified-kaggle\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 12\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.9.1\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-base-russian-modified-kaggle\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 12\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.9.1\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
text-generation | transformers |
#house small GPT | {"tags": ["conversational"]} | EzioDD/house | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#house small GPT | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# FFF dialog model | {"tags": "conversational"} | FFF000/dialogpt-FFF | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# FFF dialog model | [
"# FFF dialog model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# FFF dialog model"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2169 | 1.0 | 8235 | 1.1950 |
| 0.9396 | 2.0 | 16470 | 1.2540 |
| 0.7567 | 3.0 | 24705 | 1.4306 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
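Because this checkpoint was tuned on `squad_v2`, which contains unanswerable questions, an inference sketch would typically enable impossible-answer handling; the question and context below are arbitrary illustrations, not taken from the dataset.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="FOFer/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="distilbert-base-uncased was fine-tuned on the squad_v2 dataset.",
    handle_impossible_answer=True,  # allow an empty answer when none exists in the context
)
print(result)
```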
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]} | FOFer/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad\_v2 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4306
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
fill-mask | transformers |
# HotelBERT-small
This model was trained on reviews from a well known German hotel platform.
| {"language": "de", "widget": [{"text": "Das <mask> hat sich toll um uns gek\u00fcmmert."}]} | FabianGroeger/HotelBERT-small | null | [
"transformers",
"pytorch",
"tf",
"roberta",
"fill-mask",
"de",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"de"
] | TAGS
#transformers #pytorch #tf #roberta #fill-mask #de #autotrain_compatible #endpoints_compatible #region-us
|
# HotelBERT-small
This model was trained on reviews from a well known German hotel platform.
| [
"# HotelBERT-small\n\nThis model was trained on reviews from a well known German hotel platform."
] | [
"TAGS\n#transformers #pytorch #tf #roberta #fill-mask #de #autotrain_compatible #endpoints_compatible #region-us \n",
"# HotelBERT-small\n\nThis model was trained on reviews from a well known German hotel platform."
] |
fill-mask | transformers |
# HotelBERT
This model was trained on reviews from a well known German hotel platform.
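A quick fill-mask check, using the German example sentence from this card's widget (model id as published in this repository):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="FabianGroeger/HotelBERT")
# Sentence taken from the card's widget example
for prediction in fill_mask("Das <mask> hat sich toll um uns gekümmert."):
    print(prediction["token_str"], round(prediction["score"], 3))
```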
| {"language": "de", "widget": [{"text": "Das <mask> hat sich toll um uns gek\u00fcmmert."}]} | FabianGroeger/HotelBERT | null | [
"transformers",
"pytorch",
"tf",
"roberta",
"fill-mask",
"de",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"de"
] | TAGS
#transformers #pytorch #tf #roberta #fill-mask #de #autotrain_compatible #endpoints_compatible #region-us
|
# HotelBERT
This model was trained on reviews from a well known German hotel platform.
| [
"# HotelBERT\n\nThis model was trained on reviews from a well known German hotel platform."
] | [
"TAGS\n#transformers #pytorch #tf #roberta #fill-mask #de #autotrain_compatible #endpoints_compatible #region-us \n",
"# HotelBERT\n\nThis model was trained on reviews from a well known German hotel platform."
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2196
- Accuracy: 0.926
- F1: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8279 | 1.0 | 250 | 0.3208 | 0.9025 | 0.8979 |
| 0.2538 | 2.0 | 500 | 0.2196 | 0.926 | 0.9258 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.926, "name": "Accuracy"}, {"type": "f1", "value": 0.9258450981645597, "name": "F1"}]}]}]} | FabioDataGeek/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2196
* Accuracy: 0.926
* F1: 0.9258
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.20.1
* Pytorch 1.12.0+cu113
* Datasets 2.3.2
* Tokenizers 0.12.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.20.1\n* Pytorch 1.12.0+cu113\n* Datasets 2.3.2\n* Tokenizers 0.12.1"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.20.1\n* Pytorch 1.12.0+cu113\n* Datasets 2.3.2\n* Tokenizers 0.12.1"
] |
text-classification | transformers | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-base
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on a Reddit-dialogue dataset.
This model can be used for Text Classification: Given two sentences, see if they are related.
It achieves the following results on the evaluation set:
- Loss: 0.2297
- Accuracy: 0.9267
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 320
- eval_batch_size: 80
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.11.0
## Usage (HuggingFace Transformers)
You can use the model like this:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# label_list
label_list = ['matched', 'unmatched']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("Fan-s/reddit-tc-bert", use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained("Fan-s/reddit-tc-bert")
# Set the input
post = "don't make gravy with asbestos."
response = "i'd expect someone with a culinary background to know that. since we're talking about school dinner ladies, they need to learn this pronto."
# Predict whether the two sentences are matched
def predict(post, response, max_seq_length=128):
with torch.no_grad():
args = (post, response)
input = tokenizer(*args, padding="max_length", max_length=max_seq_length, truncation=True, return_tensors="pt")
output = model(**input)
logits = output.logits
item = torch.argmax(logits, dim=1)
predict_label = label_list[item]
return predict_label, logits
predict_label, logits = predict(post, response)
# Matched
print("predict_label:", predict_label)
``` | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"]} | Fan-s/reddit-tc-bert | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# bert-uncased-base
This model is a fine-tuned version of bert-base-uncased on a Reddit-dialogue dataset.
This model can be used for Text Classification: Given two sentences, see if they are related.
It achieves the following results on the evaluation set:
- Loss: 0.2297
- Accuracy: 0.9267
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 320
- eval_batch_size: 80
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.11.0
## Usage (HuggingFace Transformers)
You can use the model like this:
| [
"# bert-uncased-base\n\nThis model is a fine-tuned version of bert-base-uncased on an Reddit-dialogue dataset.\nThis model can be used for Text Classification: Given two sentences, see if they are related.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2297\n- Accuracy: 0.9267",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 320\n- eval_batch_size: 80\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.0\n- Tokenizers 0.11.0",
"## Usage (HuggingFace Transformers)\nYou can use the model like this:"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-uncased-base\n\nThis model is a fine-tuned version of bert-base-uncased on an Reddit-dialogue dataset.\nThis model can be used for Text Classification: Given two sentences, see if they are related.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2297\n- Accuracy: 0.9267",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 320\n- eval_batch_size: 80\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.0\n- Tokenizers 0.11.0",
"## Usage (HuggingFace Transformers)\nYou can use the model like this:"
] |
text-generation | transformers | @Kirito DialoGPT Small Model | {"tags": ["conversational"]} | FangLee/DialoGPT-small-Kirito | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| @Kirito DialoGPT Small Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
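A direct usage sketch (the question and context are arbitrary illustrations; the span selection is the simplest possible start/end argmax):

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "FardinSaboori/bert-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What was the model fine-tuned on?"
context = "bert-finetuned-squad is a fine-tuned version of bert-base-cased on the squad dataset."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Naive span selection: highest start logit and highest end logit
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```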
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-finetuned-squad", "results": []}]} | FardinSaboori/bert-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-finetuned-squad
This model is a fine-tuned version of bert-base-cased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| [
"# bert-finetuned-squad\n\nThis model is a fine-tuned version of bert-base-cased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-finetuned-squad\n\nThis model is a fine-tuned version of bert-base-cased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 256
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
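For inference, a short sketch with the high-level pipeline; the audio path is only a placeholder for a local 16 kHz Turkish recording.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="FarisHijazi/wav2vec2-large-xls-r-300m-turkish-colab",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```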
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-turkish-colab", "results": []}]} | FarisHijazi/wav2vec2-large-xls-r-300m-turkish-colab | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 256
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
| [
"# wav2vec2-large-xls-r-300m-turkish-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 256\n- eval_batch_size: 32\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 512\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu113\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-large-xls-r-300m-turkish-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 256\n- eval_batch_size: 32\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 512\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu113\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 32517788
- CO2 Emissions (in grams): 0.9413042739759596
## Validation Metrics
- Loss: 0.32112351059913635
- Accuracy: 0.8641304347826086
- Precision: 0.8055555555555556
- Recall: 0.8405797101449275
- AUC: 0.9493383742911153
- F1: 0.8226950354609929
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Fauzan/autonlp-judulberita-32517788
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Fauzan/autonlp-judulberita-32517788", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Fauzan/autonlp-judulberita-32517788", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "unk", "tags": "autonlp", "datasets": ["Fauzan/autonlp-data-judulberita"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 0.9413042739759596} | Fauzan/autonlp-judulberita-32517788 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:Fauzan/autonlp-data-judulberita",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"unk"
] | TAGS
#transformers #pytorch #bert #text-classification #autonlp #unk #dataset-Fauzan/autonlp-data-judulberita #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 32517788
- CO2 Emissions (in grams): 0.9413042739759596
## Validation Metrics
- Loss: 0.32112351059913635
- Accuracy: 0.8641304347826086
- Precision: 0.8055555555555556
- Recall: 0.8405797101449275
- AUC: 0.9493383742911153
- F1: 0.8226950354609929
## Usage
You can use cURL to access this model:
Or Python API:
| [
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 32517788\n- CO2 Emissions (in grams): 0.9413042739759596",
"## Validation Metrics\n\n- Loss: 0.32112351059913635\n- Accuracy: 0.8641304347826086\n- Precision: 0.8055555555555556\n- Recall: 0.8405797101449275\n- AUC: 0.9493383742911153\n- F1: 0.8226950354609929",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #unk #dataset-Fauzan/autonlp-data-judulberita #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 32517788\n- CO2 Emissions (in grams): 0.9413042739759596",
"## Validation Metrics\n\n- Loss: 0.32112351059913635\n- Accuracy: 0.8641304347826086\n- Precision: 0.8055555555555556\n- Recall: 0.8405797101449275\n- AUC: 0.9493383742911153\n- F1: 0.8226950354609929",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-generation | transformers | This model was fine-tuned to generate horror stories in a collaborative way.
Check it out on our [repo](https://github.com/TailUFPB/storIA). | {} | Felipehonorato/storIA | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| This model was fine-tuned to generate horror stories in a collaborative way.
Check it out on our repo. | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1495
- Accuracy: 0.9385
- F1: 0.9383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1739 | 1.0 | 250 | 0.1827 | 0.931 | 0.9302 |
| 0.1176 | 2.0 | 500 | 0.1567 | 0.9325 | 0.9326 |
| 0.0994 | 3.0 | 750 | 0.1555 | 0.9385 | 0.9389 |
| 0.08 | 4.0 | 1000 | 0.1496 | 0.9445 | 0.9443 |
| 0.0654 | 5.0 | 1250 | 0.1495 | 0.9385 | 0.9383 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
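
## Example usage

As a hedged illustration (the repo id below is simply this card's model name, and the input sentence is arbitrary), the fine-tuned checkpoint can be queried with the standard `text-classification` pipeline:

```python
from transformers import pipeline

# Hedged sketch: repo id taken from this model card; the example sentence is illustrative.
classifier = pipeline(
    "text-classification",
    model="Fengkai/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am thrilled that the training finally converged!"))
# -> e.g. [{'label': 'joy', 'score': ...}] (label names depend on the emotion dataset mapping)
```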
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9385, "name": "Accuracy"}, {"type": "f1", "value": 0.9383492808338979, "name": "F1"}]}]}]} | Fengkai/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1495
* Accuracy: 0.9385
* F1: 0.9383
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.0+cu111
* Datasets 1.18.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# GPT2-SMALL-PORTUGUESE-WIKIPEDIABIO
This is a fine-tuned version of [gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese) by pierreguillou.
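A minimal generation sketch (hedged: the repo id is this model's Hub name and the prompt mirrors the widget examples; sampling settings are illustrative):

```python
from transformers import pipeline

# Hedged sketch: repo id and example name taken from this card; sampling values are illustrative.
generator = pipeline("text-generation", model="Ferch423/gpt2-small-portuguese-wikipediabio")
abstract = generator("Maria do Santos", max_length=80, do_sample=True, top_p=0.92)
print(abstract[0]["generated_text"])
```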
It was trained on a person abstract dataset extracted from DBPEDIA (over 100,000 people's abstracts). The model is intended as a simple and fun experiment for generating text abstracts based on ordinary people's names. | {"language": "pt", "tags": ["pt", "wikipedia", "gpt2", "finetuning"], "datasets": ["wikipedia"], "widget": ["Andr\u00e9 Um", "Maria do Santos", "Roberto Carlos"], "licence": "mit"} | Ferch423/gpt2-small-portuguese-wikipediabio | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"pt",
"wikipedia",
"finetuning",
"dataset:wikipedia",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #jax #gpt2 #text-generation #pt #wikipedia #finetuning #dataset-wikipedia #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GPT2-SMALL-PORTUGUESE-WIKIPEDIABIO
This is a finetuned model version of gpt2-small-portuguese(URL by pierreguillou.
It was trained on a person abstract dataset extracted from DBPEDIA (over 100000 people's abstracts). The model is intended as a simple and fun experiment for generating texts abstracts based on ordinary people's names. | [
"# GPT2-SMALL-PORTUGUESE-WIKIPEDIABIO\n\n\nThis is a finetuned model version of gpt2-small-portuguese(URL by pierreguillou.\n\nIt was trained on a person abstract dataset extracted from DBPEDIA (over 100000 people's abstracts). The model is intended as a simple and fun experiment for generating texts abstracts based on ordinary people's names."
] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #pt #wikipedia #finetuning #dataset-wikipedia #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GPT2-SMALL-PORTUGUESE-WIKIPEDIABIO\n\n\nThis is a finetuned model version of gpt2-small-portuguese(URL by pierreguillou.\n\nIt was trained on a person abstract dataset extracted from DBPEDIA (over 100000 people's abstracts). The model is intended as a simple and fun experiment for generating texts abstracts based on ordinary people's names."
] |
automatic-speech-recognition | espnet |
## ESPnet2 ASR model
### `Fhrozen/test_an4`
This model was trained by Fhrozen using the an4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b8df4c928e132acff78d196988bdb68a66987952
pip install -e .
cd egs2/an4/asr1
./run.sh --skip_data_prep false --skip_train true --download_model Fhrozen/test_an4
```
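
Outside the recipe scripts, inference can also be sketched in Python (hedged: this assumes the `espnet_model_zoo` downloader can resolve this Hub id, and `sample.wav` is a placeholder path to a 16 kHz mono recording):

```python
import soundfile as sf
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

# Hedged sketch: assumes espnet_model_zoo can fetch this repo; "sample.wav" is a placeholder.
d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack("Fhrozen/test_an4"))

speech, rate = sf.read("sample.wav")  # the an4 recipe uses 16 kHz mono audio
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```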
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Oct 20 00:00:46 JST 2021`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.4a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: `b8df4c928e132acff78d196988bdb68a66987952`
- Commit date: `Tue Oct 19 07:48:11 2021 -0400`
## asr_train_raw_en_bpe30
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|773|4.0|22.3|73.7|0.1|96.1|100.0|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|591|2.7|21.8|75.5|0.0|97.3|100.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|2565|17.2|16.4|66.4|1.0|83.8|100.0|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|1915|15.5|16.4|68.1|0.9|85.5|100.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|2695|21.1|15.6|63.3|0.9|79.9|100.0|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|2015|19.4|15.6|65.0|0.9|81.5|100.0|
## ASR config
<details><summary>expand</summary>
```
config: null
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_raw_en_bpe30
ngpu: 0
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: null
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 40
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe30/train/speech_shape
- exp/asr_stats_raw_en_bpe30/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe30/valid/speech_shape
- exp/asr_stats_raw_en_bpe30/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_nodev/wav.scp
- speech
- sound
- - dump/raw/train_nodev/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/train_dev/wav.scp
- speech
- sound
- - dump/raw/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf: {}
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- T
- E
- O
- R
- Y
- A
- H
- U
- S
- I
- F
- B
- L
- P
- D
- G
- M
- C
- V
- X
- J
- K
- Z
- W
- N
- Q
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.5
ignore_id: -1
lsm_weight: 0.0
length_normalized_loss: false
report_cer: true
report_wer: true
sym_space: <space>
sym_blank: <blank>
extract_feats_in_collect_stats: true
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram30/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe30/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: rnn
encoder_conf: {}
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
required:
- output_dir
- token_list
version: 0.10.4a1
distributed: false
```
</details>
## LM config
<details><summary>expand</summary>
```
config: conf/train_lm.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/lm_train_lm_en_bpe30
ngpu: 0
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: null
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 40
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 1
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 256
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/lm_stats_en_bpe30/train/text_shape.bpe
valid_shape_file:
- exp/lm_stats_en_bpe30/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/lm_train.txt
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- T
- E
- O
- R
- Y
- A
- H
- U
- S
- I
- F
- B
- L
- P
- D
- G
- M
- C
- V
- X
- J
- K
- Z
- W
- N
- Q
- <sos/eos>
init: null
model_conf:
ignore_id: 0
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram30/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
lm: seq_rnn
lm_conf:
unit: 650
nlayers: 2
required:
- output_dir
- token_list
version: 0.10.4a1
distributed: false
```
</details>
| {"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["an4"]} | Fhrozen/test_an4 | null | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:an4",
"license:cc-by-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-an4 #license-cc-by-4.0 #region-us
| ESPnet2 ASR model
-----------------
### 'Fhrozen/test\_an4'
This model was trained by Fhrozen using an4 recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Wed Oct 20 00:00:46 JST 2021'
* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.4a1'
* pytorch version: 'pytorch 1.9.0'
* Git hash: 'b8df4c928e132acff78d196988bdb68a66987952'
+ Commit date: 'Tue Oct 19 07:48:11 2021 -0400'
asr\_train\_raw\_en\_bpe30
--------------------------
### WER
### CER
### TER
ASR config
----------
expand
LM config
---------
expand
| [
"### 'Fhrozen/test\\_an4'\n\n\nThis model was trained by Fhrozen using an4 recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Wed Oct 20 00:00:46 JST 2021'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.4a1'\n* pytorch version: 'pytorch 1.9.0'\n* Git hash: 'b8df4c928e132acff78d196988bdb68a66987952'\n\t+ Commit date: 'Tue Oct 19 07:48:11 2021 -0400'\n\n\nasr\\_train\\_raw\\_en\\_bpe30\n--------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand\n\nLM config\n---------\n\n\nexpand"
] | [
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-an4 #license-cc-by-4.0 #region-us \n",
"### 'Fhrozen/test\\_an4'\n\n\nThis model was trained by Fhrozen using an4 recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Wed Oct 20 00:00:46 JST 2021'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.4a1'\n* pytorch version: 'pytorch 1.9.0'\n* Git hash: 'b8df4c928e132acff78d196988bdb68a66987952'\n\t+ Commit date: 'Tue Oct 19 07:48:11 2021 -0400'\n\n\nasr\\_train\\_raw\\_en\\_bpe30\n--------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand\n\nLM config\n---------\n\n\nexpand"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0604
- Precision: 0.9291
- Recall: 0.9376
- F1: 0.9333
- Accuracy: 0.9841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2412 | 1.0 | 878 | 0.0688 | 0.9178 | 0.9246 | 0.9212 | 0.9815 |
| 0.0514 | 2.0 | 1756 | 0.0608 | 0.9251 | 0.9344 | 0.9298 | 0.9832 |
| 0.0304 | 3.0 | 2634 | 0.0604 | 0.9291 | 0.9376 | 0.9333 | 0.9841 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
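
## Example usage

A hedged sketch (the repo id is this card's model name and the sentence is arbitrary); `aggregation_strategy="simple"` merges word pieces into whole entities:

```python
from transformers import pipeline

# Hedged sketch: repo id taken from this model card; the sentence is illustrative.
ner = pipeline(
    "ner",
    model="Fiddi/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```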
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9290544285555925, "name": "Precision"}, {"type": "recall", "value": 0.9375769101689228, "name": "Recall"}, {"type": "f1", "value": 0.9332962138084633, "name": "F1"}, {"type": "accuracy", "value": 0.9841136193940935, "name": "Accuracy"}]}]}]} | Fiddi/distilbert-base-uncased-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0604
* Precision: 0.9291
* Recall: 0.9376
* F1: 0.9333
* Accuracy: 0.9841
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# updated PALPATINE DialoGPT Model | {"tags": ["conversational"]} | Filosofas/DialoGPT-medium-PALPATINE | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# updated PALPATINE DialoGPT Model | [
"# updated PALPATINE DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# updated PALPATINE DialoGPT Model"
] |
feature-extraction | transformers |
# ConvBERT for Finnish
Pretrained ConvBERT model on Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in
[this paper](https://arxiv.org/abs/2008.02496)
and first released at [this page](https://github.com/yitu-opensource/ConvBert).
**Note**: this model is the ConvBERT discriminator model intended to be used for fine-tuning on downstream tasks like text classification. The ConvBERT generator model intended to be used for the fill-mask task is released here: [Finnish-NLP/convbert-base-generator-finnish](https://huggingface.co/Finnish-NLP/convbert-base-generator-finnish)
## Model description
Finnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).
This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs.
Compared to BERT and ELECTRA models, ConvBERT model utilizes a span-based
dynamic convolution to replace some of the global self-attention heads for modeling local input sequence
dependencies. These convolution heads, together with the rest of the self-attention
heads, form a new mixed attention block that should be more efficient at both global
and local context learning.
## Intended uses & limitations
You can use the raw model for extracting features or fine-tune it to a downstream task like text classification.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import ConvBertTokenizer, ConvBertModel
import torch
tokenizer = ConvBertTokenizer.from_pretrained("Finnish-NLP/convbert-base-finnish")
model = ConvBertModel.from_pretrained("Finnish-NLP/convbert-base-finnish")
inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state)
```
and in TensorFlow:
```python
from transformers import ConvBertTokenizer, TFConvBertModel
tokenizer = ConvBertTokenizer.from_pretrained("Finnish-NLP/convbert-base-finnish")
model = TFConvBertModel.from_pretrained("Finnish-NLP/convbert-base-finnish")
inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state)
```
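
For fine-tuning on a downstream task such as text classification, a minimal (hedged) starting point is the sequence-classification head; `num_labels` is a placeholder and the actual training loop depends on your dataset:

```python
from transformers import ConvBertTokenizer, ConvBertForSequenceClassification

# Hedged sketch: num_labels is a placeholder; the classification head starts randomly initialised.
tokenizer = ConvBertTokenizer.from_pretrained("Finnish-NLP/convbert-base-finnish")
model = ConvBertForSequenceClassification.from_pretrained(
    "Finnish-NLP/convbert-base-finnish", num_labels=2
)

inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="pt")
print(model(**inputs).logits.shape)  # torch.Size([1, 2]) before any fine-tuning
```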
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish ConvBERT model was pretrained on the combination of five datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps. The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.
Training code came from the official [ConvBERT repository](https://github.com/yitu-opensource/ConvBert), and some additional instructions were taken from [here](https://github.com/stefan-it/turkish-bert/blob/master/convbert/CHEATSHEET.md).
## Evaluation results
Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model and to our other models:
| | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length |
|-----------------------------------------------|----------|---------------------|---------------------|----------------------|
|Finnish-NLP/convbert-base-finnish |86.98 |94.04 |95.02 |71.87 |
|Finnish-NLP/electra-base-discriminator-finnish |86.25 |93.78 |94.77 |70.20 |
|Finnish-NLP/roberta-large-wechsel-finnish |88.19 |**94.91** |95.18 |74.47 |
|Finnish-NLP/roberta-large-finnish-v2 |88.17 |94.46 |95.22 |74.83 |
|Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 |
|TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |94.90 |**95.49** |**76.07** |
To conclude, this ConvBERT model outperforms the ELECTRA model while trailing the other models, but it remains fairly competitive with our roberta-large models considering that this ConvBERT model has 106M parameters whereas the roberta-large models have 355M. ConvBERT outperforming ELECTRA is also in line with the findings of the [ConvBERT paper](https://arxiv.org/abs/2008.02496).
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": ["fi"], "license": "apache-2.0", "tags": ["finnish", "convbert"], "datasets": ["Finnish-NLP/mc4_fi_cleaned", "wikipedia"]} | Finnish-NLP/convbert-base-finnish | null | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"convbert",
"feature-extraction",
"finnish",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:2008.02496",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2008.02496"
] | [
"fi"
] | TAGS
#transformers #pytorch #tf #tensorboard #convbert #feature-extraction #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #arxiv-2008.02496 #license-apache-2.0 #endpoints_compatible #region-us
| ConvBERT for Finnish
====================
Pretrained ConvBERT model on Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in
this paper
and first released at this page.
Note: this model is the ConvBERT discriminator model intented to be used for fine-tuning on downstream tasks like text classification. The ConvBERT generator model intented to be used for fill-mask task is released here Finnish-NLP/convbert-base-generator-finnish
Model description
-----------------
Finnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).
This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs.
Compared to BERT and ELECTRA models, ConvBERT model utilizes a span-based
dynamic convolution to replace some of the global self-attention heads for modeling local input sequence
dependencies. These convolution heads, together with the rest of the self-attention
heads, form a new mixed attention block that should be more efficient at both global
and local context learning.
Intended uses & limitations
---------------------------
You can use the raw model for extracting features or fine-tune it to a downstream task like text classification.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
Training data
-------------
This Finnish ConvBERT model was pretrained on the combination of five datasets:
* mc4\_fi\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset
* Yle Finnish News Archive 2011-2018
* Finnish News Agency Archive (STT)
* The Suomi24 Sentences Corpus
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
Training procedure
------------------
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.
Training code was from the official ConvBERT repository and also some instructions was used from here.
Evaluation results
------------------
Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our other models:
To conclude, this ConvBERT model wins the ELECTRA model while losing to other models but is still fairly competitive compared to our roberta-large models when taking into account that this ConvBERT model has 106M parameters when roberta-large models have 355M parameters. ConvBERT winning the ELECTRA is also in line with the findings of the ConvBERT paper.
Acknowledgements
----------------
This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud.
Team Members
------------
* Aapo Tanskanen, Hugging Face profile, LinkedIn profile
* Rasmus Toivanen, Hugging Face profile, LinkedIn profile
Feel free to contact us for more details
| [
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThis Finnish ConvBERT model was pretrained on the combination of five datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive 2011-2018\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.\n\n\nTraining code was from the official ConvBERT repository and also some instructions was used from here.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.\nWhen fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our other models:\n\n\n\nTo conclude, this ConvBERT model wins the ELECTRA model while losing to other models but is still fairly competitive compared to our roberta-large models when taking into account that this ConvBERT model has 106M parameters when roberta-large models have 355M parameters. ConvBERT winning the ELECTRA is also in line with the findings of the ConvBERT paper.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] | [
"TAGS\n#transformers #pytorch #tf #tensorboard #convbert #feature-extraction #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #arxiv-2008.02496 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThis Finnish ConvBERT model was pretrained on the combination of five datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive 2011-2018\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.\n\n\nTraining code was from the official ConvBERT repository and also some instructions was used from here.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.\nWhen fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our other models:\n\n\n\nTo conclude, this ConvBERT model wins the ELECTRA model while losing to other models but is still fairly competitive compared to our roberta-large models when taking into account that this ConvBERT model has 106M parameters when roberta-large models have 355M parameters. ConvBERT winning the ELECTRA is also in line with the findings of the ConvBERT paper.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] |
fill-mask | transformers |
# ConvBERT for Finnish
Pretrained ConvBERT model on Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in
[this paper](https://arxiv.org/abs/2008.02496)
and first released at [this page](https://github.com/yitu-opensource/ConvBert).
**Note**: this model is the ConvBERT generator model intended to be used for the fill-mask task. The ConvBERT discriminator model intended to be used for fine-tuning on downstream tasks like text classification is released here: [Finnish-NLP/convbert-base-finnish](https://huggingface.co/Finnish-NLP/convbert-base-finnish)
## Model description
Finnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).
This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs.
Compared to BERT and ELECTRA models, ConvBERT model utilizes a span-based
dynamic convolution to replace some of the global self-attention heads for modeling local input sequence
dependencies. These convolution heads, together with the rest of the self-attention
heads, form a new mixed attention block that should be more efficient at both global
and local context learning.
## Intended uses & limitations
You can use this generator model mainly just for the fill-mask task. For other tasks, check the [Finnish-NLP/convbert-base-finnish](https://huggingface.co/Finnish-NLP/convbert-base-finnish) model instead.
### How to use
Here is how to use this model directly with a pipeline for fill-mask task:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Finnish-NLP/convbert-base-generator-finnish')
>>> unmasker("Moikka olen [MASK] kielimalli.")
[{'score': 0.08341152966022491,
'token': 4619,
'token_str': 'suomalainen',
'sequence': 'Moikka olen suomalainen kielimalli.'},
{'score': 0.02831297740340233,
'token': 25583,
'token_str': 'ranskalainen',
'sequence': 'Moikka olen ranskalainen kielimalli.'},
{'score': 0.027857203036546707,
'token': 37714,
'token_str': 'kiinalainen',
'sequence': 'Moikka olen kiinalainen kielimalli.'},
{'score': 0.027701903134584427,
'token': 21614,
'token_str': 'ruotsalainen',
'sequence': 'Moikka olen ruotsalainen kielimalli.'},
{'score': 0.026388710364699364,
'token': 591,
'token_str': 'hyvä',
'sequence': 'Moikka olen hyvä kielimalli.'}]
```
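
The same predictions can be reproduced without the pipeline helper by loading the masked-LM head directly (a hedged sketch; the sentence reuses the example above):

```python
import torch
from transformers import ConvBertTokenizer, ConvBertForMaskedLM

# Hedged sketch: rank the top candidates for the [MASK] position manually.
tokenizer = ConvBertTokenizer.from_pretrained("Finnish-NLP/convbert-base-generator-finnish")
model = ConvBertForMaskedLM.from_pretrained("Finnish-NLP/convbert-base-generator-finnish")

inputs = tokenizer("Moikka olen [MASK] kielimalli.", return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
logits = model(**inputs).logits[0, mask_pos]
top_ids = torch.topk(logits, k=5, dim=-1).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```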
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish ConvBERT model was pretrained on the combination of five datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps. The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.
Training code came from the official [ConvBERT repository](https://github.com/yitu-opensource/ConvBert), and some additional instructions were taken from [here](https://github.com/stefan-it/turkish-bert/blob/master/convbert/CHEATSHEET.md).
## Evaluation results
For evaluation results, check the [Finnish-NLP/convbert-base-finnish](https://huggingface.co/Finnish-NLP/convbert-base-finnish) model repository instead.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": ["fi"], "license": "apache-2.0", "tags": ["finnish", "convbert"], "datasets": ["Finnish-NLP/mc4_fi_cleaned", "wikipedia"], "widget": [{"text": "Moikka olen [MASK] kielimalli."}]} | Finnish-NLP/convbert-base-generator-finnish | null | [
"transformers",
"pytorch",
"convbert",
"fill-mask",
"finnish",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:2008.02496",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2008.02496"
] | [
"fi"
] | TAGS
#transformers #pytorch #convbert #fill-mask #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #arxiv-2008.02496 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# ConvBERT for Finnish
Pretrained ConvBERT model on Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in
this paper
and first released at this page.
Note: this model is the ConvBERT generator model intented to be used for the fill-mask task. The ConvBERT discriminator model intented to be used for fine-tuning on downstream tasks like text classification is released here Finnish-NLP/convbert-base-finnish
## Model description
Finnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).
This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs.
Compared to BERT and ELECTRA models, ConvBERT model utilizes a span-based
dynamic convolution to replace some of the global self-attention heads for modeling local input sequence
dependencies. These convolution heads, together with the rest of the self-attention
heads, form a new mixed attention block that should be more efficient at both global
and local context learning.
## Intended uses & limitations
You can use this generator model mainly just for the fill-mask task. For other tasks, check the Finnish-NLP/convbert-base-finnish model instead.
### How to use
Here is how to use this model directly with a pipeline for fill-mask task:
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish ConvBERT model was pretrained on the combination of five datasets:
- mc4_fi_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset
- Yle Finnish News Archive 2011-2018
- Finnish News Agency Archive (STT)
- The Suomi24 Sentences Corpus
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.
Training code was from the official ConvBERT repository and also some instructions was used from here.
## Evaluation results
For evaluation results, check the Finnish-NLP/convbert-base-finnish model repository instead.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud.
## Team Members
- Aapo Tanskanen, Hugging Face profile, LinkedIn profile
- Rasmus Toivanen, Hugging Face profile, LinkedIn profile
Feel free to contact us for more details | [
"# ConvBERT for Finnish\n\nPretrained ConvBERT model on Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in\nthis paper\nand first released at this page.\n\nNote: this model is the ConvBERT generator model intented to be used for the fill-mask task. The ConvBERT discriminator model intented to be used for fine-tuning on downstream tasks like text classification is released here Finnish-NLP/convbert-base-finnish",
"## Model description\n\nFinnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.\n\nMore precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).\n\nThis way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs.\n\nCompared to BERT and ELECTRA models, ConvBERT model utilizes a span-based\ndynamic convolution to replace some of the global self-attention heads for modeling local input sequence\ndependencies. These convolution heads, together with the rest of the self-attention\nheads, form a new mixed attention block that should be more efficient at both global\nand local context learning.",
"## Intended uses & limitations\n\nYou can use this generator model mainly just for the fill-mask task. For other tasks, check the Finnish-NLP/convbert-base-finnish model instead.",
"### How to use\n\nHere is how to use this model directly with a pipeline for fill-mask task:",
"### Limitations and bias\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.",
"## Training data\n\nThis Finnish ConvBERT model was pretrained on the combination of five datasets:\n- mc4_fi_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n- wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n- Yle Finnish News Archive 2011-2018\n- Finnish News Agency Archive (STT)\n- The Suomi24 Sentences Corpus\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.",
"## Training procedure",
"### Preprocessing\n\nThe texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.",
"### Pretraining\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.\n\nTraining code was from the official ConvBERT repository and also some instructions was used from here.",
"## Evaluation results\n\nFor evaluation results, check the Finnish-NLP/convbert-base-finnish model repository instead.",
"## Acknowledgements\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.",
"## Team Members\n\n- Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n- Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\nFeel free to contact us for more details"
] | [
"TAGS\n#transformers #pytorch #convbert #fill-mask #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #arxiv-2008.02496 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# ConvBERT for Finnish\n\nPretrained ConvBERT model on Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in\nthis paper\nand first released at this page.\n\nNote: this model is the ConvBERT generator model intented to be used for the fill-mask task. The ConvBERT discriminator model intented to be used for fine-tuning on downstream tasks like text classification is released here Finnish-NLP/convbert-base-finnish",
"## Model description\n\nFinnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.\n\nMore precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).\n\nThis way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs.\n\nCompared to BERT and ELECTRA models, ConvBERT model utilizes a span-based\ndynamic convolution to replace some of the global self-attention heads for modeling local input sequence\ndependencies. These convolution heads, together with the rest of the self-attention\nheads, form a new mixed attention block that should be more efficient at both global\nand local context learning.",
"## Intended uses & limitations\n\nYou can use this generator model mainly just for the fill-mask task. For other tasks, check the Finnish-NLP/convbert-base-finnish model instead.",
"### How to use\n\nHere is how to use this model directly with a pipeline for fill-mask task:",
"### Limitations and bias\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.",
"## Training data\n\nThis Finnish ConvBERT model was pretrained on the combination of five datasets:\n- mc4_fi_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n- wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n- Yle Finnish News Archive 2011-2018\n- Finnish News Agency Archive (STT)\n- The Suomi24 Sentences Corpus\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.",
"## Training procedure",
"### Preprocessing\n\nThe texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.",
"### Pretraining\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.\n\nTraining code was from the official ConvBERT repository and also some instructions was used from here.",
"## Evaluation results\n\nFor evaluation results, check the Finnish-NLP/convbert-base-finnish model repository instead.",
"## Acknowledgements\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.",
"## Team Members\n\n- Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n- Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\nFeel free to contact us for more details"
] |
null | transformers |
# ELECTRA for Finnish
Pretrained ELECTRA model on Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in
[this paper](https://openreview.net/pdf?id=r1xMH1BtvB)
and first released at [this page](https://github.com/google-research/electra).
**Note**: this model is the ELECTRA discriminator model intended to be used for fine-tuning on downstream tasks like text classification. The ELECTRA generator model intended to be used for the fill-mask task is released here: [Finnish-NLP/electra-base-generator-finnish](https://huggingface.co/Finnish-NLP/electra-base-generator-finnish)
## Model description
Finnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).
This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs.
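To make the replaced token detection objective above concrete, here is a minimal hedged sketch (not part of the original model card) that loads the discriminator head with `ElectraForPreTraining` and scores each token of an example sentence as original vs. replaced; the example sentence and the 0.5 decision threshold are only illustrative assumptions:
```python
from transformers import ElectraTokenizer, ElectraForPreTraining
import torch

model_name = "Finnish-NLP/electra-base-discriminator-finnish"
tokenizer = ElectraTokenizer.from_pretrained(model_name)
model = ElectraForPreTraining.from_pretrained(model_name)

inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one score per token: higher means "looks replaced"

# Threshold the per-token scores; 0.5 is just an assumption for illustration.
flags = (torch.sigmoid(logits) > 0.5).int()[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
print(list(zip(tokens, flags)))
```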
## Intended uses & limitations
You can use the raw model for extracting features or fine-tune it to a downstream task like text classification.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import ElectraTokenizer, ElectraModel
import torch
tokenizer = ElectraTokenizer.from_pretrained("Finnish-NLP/electra-base-discriminator-finnish")
model = ElectraModel.from_pretrained("Finnish-NLP/electra-base-discriminator-finnish")
inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state)
```
and in TensorFlow:
```python
from transformers import ElectraTokenizer, TFElectraModel
tokenizer = ElectraTokenizer.from_pretrained("Finnish-NLP/electra-base-discriminator-finnish")
model = TFElectraModel.from_pretrained("Finnish-NLP/electra-base-discriminator-finnish", from_pt=True)
inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state)
```
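Beyond feature extraction, the model is typically fine-tuned for downstream tasks such as text classification. The sketch below is only an illustrative outline with a toy in-memory dataset; it is not the fine-tuning setup behind the evaluation results reported later in this card, and the label count and hyperparameters are placeholders:
```python
from datasets import Dataset
from transformers import (ElectraTokenizerFast, ElectraForSequenceClassification,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

model_name = "Finnish-NLP/electra-base-discriminator-finnish"
tokenizer = ElectraTokenizerFast.from_pretrained(model_name)
model = ElectraForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy labeled sentences; in practice you would use a real corpus such as Yle News.
train_ds = Dataset.from_dict({
    "text": ["Tämä elokuva oli loistava.", "Tämä elokuva oli kamala."],
    "label": [1, 0],
})
train_ds = train_ds.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)

args = TrainingArguments(
    output_dir="electra-finnish-classifier",  # placeholder output directory
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=2e-5,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```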
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish ELECTRA model was pretrained on the combination of five datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece with a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lowercased, so this model is case-sensitive: it makes a difference between finnish and Finnish.
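For illustration only (not part of the original card), the tokenizer properties mentioned above can be checked directly; the printed values are expectations rather than guaranteed outputs:
```python
from transformers import ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("Finnish-NLP/electra-base-discriminator-finnish")
print(tokenizer.vocab_size)           # expected to be 50265
print(tokenizer.tokenize("Finnish"))  # cased form
print(tokenizer.tokenize("finnish"))  # may tokenize differently because the model is cased
```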
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps. The optimizer used was AdamW with a learning rate of 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate afterwards.
Training code was taken from the official [ELECTRA repository](https://github.com/google-research/electra), and some additional instructions were used from [here](https://github.com/stefan-it/turkish-bert/blob/master/electra/CHEATSHEET.md).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News fine-tuning was done with two different sequence lengths, 128 and 512, while Eduskunta was fine-tuned only with a sequence length of 128.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model and to our other models:
| | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length |
|-----------------------------------------------|----------|---------------------|---------------------|----------------------|
|Finnish-NLP/electra-base-discriminator-finnish |86.25 |93.78 |94.77 |70.20 |
|Finnish-NLP/convbert-base-finnish |86.98 |94.04 |95.02 |71.87 |
|Finnish-NLP/roberta-large-wechsel-finnish |88.19 |**94.91** |95.18 |74.47 |
|Finnish-NLP/roberta-large-finnish-v2 |88.17 |94.46 |95.22 |74.83 |
|Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 |
|TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |94.90 |**95.49** |**76.07** |
To conclude, this ELECTRA model loses to the other models but is still fairly competitive compared to our roberta-large models when taking into account that this ELECTRA model has 110M parameters while the roberta-large models have 355M parameters.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": ["fi"], "license": "apache-2.0", "tags": ["finnish", "electra"], "datasets": ["Finnish-NLP/mc4_fi_cleaned", "wikipedia"]} | Finnish-NLP/electra-base-discriminator-finnish | null | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"pretraining",
"finnish",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fi"
] | TAGS
#transformers #pytorch #tensorboard #electra #pretraining #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #license-apache-2.0 #endpoints_compatible #region-us
| ELECTRA for Finnish
===================
Pretrained ELECTRA model on Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in
this paper
and first released at this page.
Note: this model is the ELECTRA discriminator model intended to be used for fine-tuning on downstream tasks like text classification. The ELECTRA generator model intended to be used for the fill-mask task is released here: Finnish-NLP/electra-base-generator-finnish
Model description
-----------------
Finnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).
This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs.
Intended uses & limitations
---------------------------
You can use the raw model for extracting features or fine-tune it to a downstream task like text classification.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
Training data
-------------
This Finnish ELECTRA model was pretrained on the combination of five datasets:
* mc4\_fi\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset
* Yle Finnish News Archive 2011-2018
* Finnish News Agency Archive (STT)
* The Suomi24 Sentences Corpus
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
Training procedure
------------------
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was AdamW with a learning rate of 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate afterwards.
Training code was taken from the official ELECTRA repository, and some additional instructions were used from here.
Evaluation results
------------------
Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our other models:
To conclude, this ELECTRA model loses to other models but is still fairly competitive compared to our roberta-large models when taking into account that this ELECTRA model has 110M parameters when roberta-large models have 355M parameters.
Acknowledgements
----------------
This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud.
Team Members
------------
* Aapo Tanskanen, Hugging Face profile, LinkedIn profile
* Rasmus Toivanen, Hugging Face profile, LinkedIn profile
Feel free to contact us for more details
| [
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThis Finnish ELECTRA model was pretrained on the combination of five datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive 2011-2018\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was a AdamW with learning rate 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.\n\n\nTraining code was from the official ELECTRA repository and also some instructions was used from here.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.\nWhen fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our other models:\n\n\n\nTo conclude, this ELECTRA model loses to other models but is still fairly competitive compared to our roberta-large models when taking into account that this ELECTRA model has 110M parameters when roberta-large models have 355M parameters.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] | [
"TAGS\n#transformers #pytorch #tensorboard #electra #pretraining #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #license-apache-2.0 #endpoints_compatible #region-us \n",
"### How to use\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThis Finnish ELECTRA model was pretrained on the combination of five datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive 2011-2018\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was a AdamW with learning rate 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.\n\n\nTraining code was from the official ELECTRA repository and also some instructions was used from here.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.\nWhen fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our other models:\n\n\n\nTo conclude, this ELECTRA model loses to other models but is still fairly competitive compared to our roberta-large models when taking into account that this ELECTRA model has 110M parameters when roberta-large models have 355M parameters.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] |
fill-mask | transformers |
# ELECTRA for Finnish
Pretrained ELECTRA model on Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in
[this paper](https://openreview.net/pdf?id=r1xMH1BtvB)
and first released at [this page](https://github.com/google-research/electra).
**Note**: this model is the ELECTRA generator model intended to be used for the fill-mask task. The ELECTRA discriminator model intended to be used for fine-tuning on downstream tasks like text classification is released here: [Finnish-NLP/electra-base-discriminator-finnish](https://huggingface.co/Finnish-NLP/electra-base-discriminator-finnish)
## Model description
Finnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).
This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs.
## Intended uses & limitations
You can use this generator model mainly just for the fill-mask task. For other tasks, check the [Finnish-NLP/electra-base-discriminator-finnish](https://huggingface.co/Finnish-NLP/electra-base-discriminator-finnish) model instead.
### How to use
Here is how to use this model directly with a pipeline for fill-mask task:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Finnish-NLP/electra-base-generator-finnish')
>>> unmasker("Moikka olen [MASK] kielimalli.")
[{'score': 0.0708453431725502,
'token': 4619,
'token_str': 'suomalainen',
'sequence': 'Moikka olen suomalainen kielimalli.'},
{'score': 0.042563650757074356,
'token': 1153,
'token_str': 'uusi',
'sequence': 'Moikka olen uusi kielimalli.'},
{'score': 0.03219178691506386,
'token': 591,
'token_str': 'hyvä',
'sequence': 'Moikka olen hyvä kielimalli.'},
{'score': 0.03175133094191551,
'token': 3134,
'token_str': 'vanha',
'sequence': 'Moikka olen vanha kielimalli.'},
{'score': 0.019662367179989815,
'token': 25583,
'token_str': 'ranskalainen',
'sequence': 'Moikka olen ranskalainen kielimalli.'}]
```
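For comparison with the pipeline output above, here is a hedged sketch (not from the original card) of querying the generator head directly with `ElectraForMaskedLM` and reading the top-5 candidates for the masked position:
```python
from transformers import ElectraTokenizer, ElectraForMaskedLM
import torch

model_name = "Finnish-NLP/electra-base-generator-finnish"
tokenizer = ElectraTokenizer.from_pretrained(model_name)
model = ElectraForMaskedLM.from_pretrained(model_name)

inputs = tokenizer("Moikka olen [MASK] kielimalli.", return_tensors="pt")
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits

top5 = torch.topk(logits[0, mask_pos], k=5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top5))
```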
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish ELECTRA model was pretrained on the combination of five datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece with a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lowercased, so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps. The optimizer used was AdamW with a learning rate of 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate afterwards.
Training code was taken from the official [ELECTRA repository](https://github.com/google-research/electra), and some additional instructions were used from [here](https://github.com/stefan-it/turkish-bert/blob/master/electra/CHEATSHEET.md).
## Evaluation results
For evaluation results, check the [Finnish-NLP/electra-base-discriminator-finnish](https://huggingface.co/Finnish-NLP/electra-base-discriminator-finnish) model repository instead.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": ["fi"], "license": "apache-2.0", "tags": ["finnish", "electra"], "datasets": ["Finnish-NLP/mc4_fi_cleaned", "wikipedia"], "widget": [{"text": "Moikka olen [MASK] kielimalli."}]} | Finnish-NLP/electra-base-generator-finnish | null | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"finnish",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fi"
] | TAGS
#transformers #pytorch #electra #fill-mask #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# ELECTRA for Finnish
Pretrained ELECTRA model on Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in
this paper
and first released at this page.
Note: this model is the ELECTRA generator model intended to be used for the fill-mask task. The ELECTRA discriminator model intended to be used for fine-tuning on downstream tasks like text classification is released here: Finnish-NLP/electra-base-discriminator-finnish
## Model description
Finnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).
This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs.
## Intended uses & limitations
You can use this generator model mainly just for the fill-mask task. For other tasks, check the Finnish-NLP/electra-base-discriminator-finnish model instead.
### How to use
Here is how to use this model directly with a pipeline for fill-mask task:
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish ELECTRA model was pretrained on the combination of five datasets:
- mc4_fi_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset
- Yle Finnish News Archive 2011-2018
- Finnish News Agency Archive (STT)
- The Suomi24 Sentences Corpus
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was AdamW with a learning rate of 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate afterwards.
Training code was taken from the official ELECTRA repository, and some additional instructions were used from here.
## Evaluation results
For evaluation results, check the Finnish-NLP/electra-base-discriminator-finnish model repository instead.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud.
## Team Members
- Aapo Tanskanen, Hugging Face profile, LinkedIn profile
- Rasmus Toivanen, Hugging Face profile, LinkedIn profile
Feel free to contact us for more details | [
"# ELECTRA for Finnish\n\nPretrained ELECTRA model on Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in\nthis paper\nand first released at this page.\n\nNote: this model is the ELECTRA generator model intented to be used for the fill-mask task. The ELECTRA discriminator model intented to be used for fine-tuning on downstream tasks like text classification is released here Finnish-NLP/electra-base-discriminator-finnish",
"## Model description\n\nFinnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.\n\nMore precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).\n\nThis way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs.",
"## Intended uses & limitations\n\nYou can use this generator model mainly just for the fill-mask task. For other tasks, check the Finnish-NLP/electra-base-discriminator-finnish model instead.",
"### How to use\n\nHere is how to use this model directly with a pipeline for fill-mask task:",
"### Limitations and bias\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.",
"## Training data\n\nThis Finnish ELECTRA model was pretrained on the combination of five datasets:\n- mc4_fi_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n- wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n- Yle Finnish News Archive 2011-2018\n- Finnish News Agency Archive (STT)\n- The Suomi24 Sentences Corpus\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.",
"## Training procedure",
"### Preprocessing\n\nThe texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.",
"### Pretraining\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was a AdamW with learning rate 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.\n\nTraining code was from the official ELECTRA repository and also some instructions was used from here.",
"## Evaluation results\n\nFor evaluation results, check the Finnish-NLP/electra-base-discriminator-finnish model repository instead.",
"## Acknowledgements\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.",
"## Team Members\n\n- Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n- Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\nFeel free to contact us for more details"
] | [
"TAGS\n#transformers #pytorch #electra #fill-mask #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# ELECTRA for Finnish\n\nPretrained ELECTRA model on Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in\nthis paper\nand first released at this page.\n\nNote: this model is the ELECTRA generator model intented to be used for the fill-mask task. The ELECTRA discriminator model intented to be used for fine-tuning on downstream tasks like text classification is released here Finnish-NLP/electra-base-discriminator-finnish",
"## Model description\n\nFinnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.\n\nMore precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).\n\nThis way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs.",
"## Intended uses & limitations\n\nYou can use this generator model mainly just for the fill-mask task. For other tasks, check the Finnish-NLP/electra-base-discriminator-finnish model instead.",
"### How to use\n\nHere is how to use this model directly with a pipeline for fill-mask task:",
"### Limitations and bias\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.",
"## Training data\n\nThis Finnish ELECTRA model was pretrained on the combination of five datasets:\n- mc4_fi_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n- wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n- Yle Finnish News Archive 2011-2018\n- Finnish News Agency Archive (STT)\n- The Suomi24 Sentences Corpus\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.",
"## Training procedure",
"### Preprocessing\n\nThe texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.",
"### Pretraining\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was a AdamW with learning rate 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.\n\nTraining code was from the official ELECTRA repository and also some instructions was used from here.",
"## Evaluation results\n\nFor evaluation results, check the Finnish-NLP/electra-base-discriminator-finnish model repository instead.",
"## Acknowledgements\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.",
"## Team Members\n\n- Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n- Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\nFeel free to contact us for more details"
] |
text-generation | transformers |
# GPT-2 for Finnish
Pretrained GPT-2 model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
**Note**: this model is the small 117M parameter variant, matching Huggingface's [GPT-2 config](https://huggingface.co/gpt2), and not the famous big 1.5B parameter variant by OpenAI. We also have a bigger 345M parameter variant, [gpt2-medium-finnish](https://huggingface.co/Finnish-NLP/gpt2-medium-finnish), and a 774M parameter variant, [gpt2-large-finnish](https://huggingface.co/Finnish-NLP/gpt2-large-finnish), available, both of which perform better than this model.
## Model description
Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
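To make the next-token objective above concrete, here is a small hedged sketch (not part of the original card) showing that passing the input ids as `labels` makes the model compute the shifted next-token loss, from which a per-prompt perplexity can be read off:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch

tokenizer = GPT2Tokenizer.from_pretrained("Finnish-NLP/gpt2-finnish")
model = GPT2LMHeadModel.from_pretrained("Finnish-NLP/gpt2-finnish")

enc = tokenizer("Tekstiä tuottava tekoäly on", return_tensors="pt")
with torch.no_grad():
    out = model(**enc, labels=enc["input_ids"])  # labels are shifted internally

print(out.loss)             # average next-token cross-entropy for this prompt
print(torch.exp(out.loss))  # perplexity of this single prompt
```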
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation:
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='Finnish-NLP/gpt2-finnish')
>>> generator("Tekstiä tuottava tekoäly on", max_length=30, num_return_sequences=5)
[{'generated_text': 'Tekstiä tuottava tekoäly on kuin onkin hyvin pieni. Sitä voi käyttää myös hyvin nopeasti ja myös täysin automatisoituna, eikä sitä tarvitse käydä läpi. Se'},
{'generated_text': 'Tekstiä tuottava tekoäly on saanut jalansijaa, mutta Suomessa se on jo ehtinyt hajota käsiin, koska sen avulla ei pystytä tuottamaan täysin ajantasaisia'},
{'generated_text': 'Tekstiä tuottava tekoäly on tehnyt työtä kymmenien vuosien ajan ja ottanut käyttöön jo yli kahden vuosikymmenen ajan tekoälyn ratkaisuja. Tekoäly on jo pitkään tehnyt työtä'},
{'generated_text': 'Tekstiä tuottava tekoäly on tekoälyn sovellus, jota käytetään esimerkiksi liiketoiminnan ja päätöksenteon tukena. Työhön liittyy data-analyysin ohella tekoälyn avulla esimerkiksi tekoäl'},
{'generated_text': 'Tekstiä tuottava tekoäly on juuri nyt erityisen hyödyllinen, koska se tunnistaa käyttäjän tietokoneen ruudulla olevat ilmoitukset, kuten näytön värin ja osoittimet ilman välkyn'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-finnish')
model = GPT2Model.from_pretrained('Finnish-NLP/gpt2-finnish')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-finnish')
model = TFGPT2Model.from_pretrained('Finnish-NLP/gpt2-finnish', from_pt=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Training data
This Finnish GPT-2 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.
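As an illustration only (not from the original card), the sketch below checks the vocabulary size and shows how a longer text could be split into 512-token blocks of the kind used during pretraining:
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("Finnish-NLP/gpt2-finnish")
print(tokenizer.vocab_size)  # expected to be 50257

# Tokenize a long (repeated) text and group the ids into 512-token blocks.
ids = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa. " * 200)["input_ids"]
blocks = [ids[i:i + 512] for i in range(0, len(ids), 512)]
print(len(blocks), len(blocks[0]))
```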
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 300k steps (a bit over 2 epochs, with a batch size of 256). The optimizer used was a second-order optimization method called [Distributed Shampoo](https://github.com/google-research/google-research/tree/master/scalable_shampoo) with a learning rate of 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate afterwards.
At first, the commonly used Adam optimizer was tried, but there were significant issues getting the model to converge even after trying multiple different learning rates, so the Adam optimizer was replaced with Distributed Shampoo, which worked a lot better.
## Evaluation results
Evaluation was done using the *validation* split of the [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned) dataset with [Perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) (the smaller the score, the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) loses to our bigger model variants.
| | Perplexity |
|------------------------------------------|------------|
|Finnish-NLP/gpt2-finnish |44.19 |
|Finnish-NLP/gpt2-medium-finnish |34.08 |
|Finnish-NLP/gpt2-large-finnish |**30.74** |
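The numbers above come from the authors' own evaluation. The snippet below is only a rough, hedged sketch of how a comparable perplexity estimate could be computed; the dataset field name and the 100-example sample size are assumptions, and averaging per-example losses is only an approximation of a token-weighted perplexity:
```python
import math
import torch
from datasets import load_dataset
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("Finnish-NLP/gpt2-finnish")
model = GPT2LMHeadModel.from_pretrained("Finnish-NLP/gpt2-finnish").eval()

# Stream a small sample of the validation split (assumes a "text" column).
ds = load_dataset("Finnish-NLP/mc4_fi_cleaned", split="validation", streaming=True)

total_loss, n_examples = 0.0, 0
for example in ds.take(100):
    enc = tokenizer(example["text"], return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        total_loss += model(**enc, labels=enc["input_ids"]).loss.item()
    n_examples += 1

print("approximate perplexity:", math.exp(total_loss / n_examples))
```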
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
| {"language": ["fi"], "license": "apache-2.0", "tags": ["finnish", "gpt2"], "datasets": ["Finnish-NLP/mc4_fi_cleaned", "wikipedia"], "widget": [{"text": "Teksti\u00e4 tuottava teko\u00e4ly on"}]} | Finnish-NLP/gpt2-finnish | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"finnish",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fi"
] | TAGS
#transformers #pytorch #jax #tensorboard #gpt2 #text-generation #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| GPT-2 for Finnish
=================
Pretrained GPT-2 model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
this paper
and first released at this page.
Note: this model is the small 117M parameter variant, matching Huggingface's GPT-2 config, and not the famous big 1.5B parameter variant by OpenAI. We also have a bigger 345M parameter variant gpt2-medium-finnish and a 774M parameter variant gpt2-large-finnish available, both of which perform better than this model.
Model description
-----------------
Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
Intended uses & limitations
---------------------------
You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
Training data
-------------
This Finnish GPT-2 model was pretrained on the combination of six datasets:
* mc4\_fi\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset
* Yle Finnish News Archive 2011-2018
* Yle Finnish News Archive 2019-2020
* Finnish News Agency Archive (STT)
* The Suomi24 Sentences Corpus
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
Training procedure
------------------
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 300k steps (a bit over 2 epochs, with a batch size of 256). The optimizer used was a second-order optimization method called Distributed Shampoo with a learning rate of 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate afterwards.
At first, the commonly used Adam optimizer was tried, but there were significant issues getting the model to converge even after trying multiple different learning rates, so the Adam optimizer was replaced with Distributed Shampoo, which worked a lot better.
Evaluation results
------------------
Evaluation was done using the *validation* split of the mc4\_fi\_cleaned dataset with Perplexity (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) loses to our bigger model variants.
Acknowledgements
----------------
This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud.
Team Members
------------
* Aapo Tanskanen, Hugging Face profile, LinkedIn profile
* Rasmus Toivanen, Hugging Face profile, LinkedIn profile
Feel free to contact us for more details
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.\n\n\nAs with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nTraining data\n-------------\n\n\nThis Finnish GPT-2 model was pretrained on the combination of six datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive 2011-2018\n* Yle Finnish News Archive 2019-2020\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 300k steps (a bit over 2 epochs, 256 batch size). The optimizer used was a second-order optimization method called Distributed Shampoo with learning rate 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after.\n\n\nAt first, commonly used Adam optimizer was tried but there were significant issues getting the model to converge even with multiple different learning rate trials so then Adam optimizer was replaced with the Distributed Shampoo which worked a lot better.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done using the *validation* split of the mc4\\_fi\\_cleaned dataset with Perplexity (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) loses to our bigger model variants.\n\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] | [
"TAGS\n#transformers #pytorch #jax #tensorboard #gpt2 #text-generation #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.\n\n\nAs with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nTraining data\n-------------\n\n\nThis Finnish GPT-2 model was pretrained on the combination of six datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive 2011-2018\n* Yle Finnish News Archive 2019-2020\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 300k steps (a bit over 2 epochs, 256 batch size). The optimizer used was a second-order optimization method called Distributed Shampoo with learning rate 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after.\n\n\nAt first, commonly used Adam optimizer was tried but there were significant issues getting the model to converge even with multiple different learning rate trials so then Adam optimizer was replaced with the Distributed Shampoo which worked a lot better.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done using the *validation* split of the mc4\\_fi\\_cleaned dataset with Perplexity (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) loses to our bigger model variants.\n\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] |
text-generation | transformers |
# GPT-2 large for Finnish
Pretrained GPT-2 large model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
**Note**: this model is the 774M parameter variant, as in Huggingface's [GPT-2-large config](https://huggingface.co/gpt2-large), so it is not the famous big 1.5B parameter variant by OpenAI.
## Model description
Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation:
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='Finnish-NLP/gpt2-large-finnish')
>>> generator("Tekstiä tuottava tekoäly on", max_length=30, num_return_sequences=5)
[{'generated_text': 'Tekstiä tuottava tekoäly on valmis yhteistyöhön ihmisen kanssa: Tekoäly hoitaa ihmisen puolesta tekstin tuottamisen. Se myös ymmärtää, missä vaiheessa tekstiä voidaan alkaa kirjoittamaan'},
{'generated_text': 'Tekstiä tuottava tekoäly on älykäs, mutta se ei ole vain älykkäisiin koneisiin kuuluva älykäs olento, vaan se on myös kone. Se ei'},
{'generated_text': 'Tekstiä tuottava tekoäly on ehkä jo pian todellisuutta - se voisi tehdä myös vanhustenhoidosta nykyistä ä tuottava tekoäly on ehkä jo pian todellisuutta - se voisi tehdä'},
{'generated_text': 'Tekstiä tuottava tekoäly on kehitetty ihmisen ja ihmisen aivoihin yhteistyössä neurotieteiden ja käyttäytymistieteen tutkijatiimin kanssa. Uusi teknologia avaa aivan uudenlaisia tutkimusi'},
{'generated_text': 'Tekstiä tuottava tekoäly on kuin tietokone, jonka kanssa voi elää. Tekoälyn avulla voi kirjoittaa mitä tahansa, mistä tahansa ja miten paljon. Tässä'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-large-finnish')
model = GPT2Model.from_pretrained('Finnish-NLP/gpt2-large-finnish')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-large-finnish')
model = TFGPT2Model.from_pretrained('Finnish-NLP/gpt2-large-finnish', from_pt=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Training data
This Finnish GPT-2 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.
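
As a rough illustration (the actual preprocessing script is not reproduced here, so the packing helper below is hypothetical), tokenizing raw text with this tokenizer and packing it into fixed 512-token training sequences could look like this:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Finnish-NLP/gpt2-large-finnish')
print(tokenizer.vocab_size)  # 50257, byte-level BPE

block_size = 512  # length of each pretraining input sequence

def pack_into_blocks(texts):
    # Hypothetical helper: tokenize, concatenate and split into 512-token blocks,
    # dropping the trailing remainder that does not fill a whole block.
    ids = []
    for text in texts:
        ids.extend(tokenizer(text)['input_ids'])
    n_full = (len(ids) // block_size) * block_size
    return [ids[i:i + block_size] for i in range(0, n_full, block_size)]

blocks = pack_into_blocks(["Ensimmäinen esimerkkidokumentti.", "Toinen esimerkkidokumentti."])
print(len(blocks))  # 0 here, because the toy input is shorter than one full block
```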
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 640k steps (a bit over 1 epoch, 64 batch size). The optimizer used was AdamW with a learning rate of 4e-5, learning rate warmup for 4000 steps and cosine decay of the learning rate after.
## Evaluation results
Evaluation was done using the *validation* split of the [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned) dataset with [Perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) (the smaller the score, the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller model variants.
| | Perplexity |
|------------------------------------------|------------|
|Finnish-NLP/gpt2-large-finnish |**30.74** |
|Finnish-NLP/gpt2-medium-finnish |34.08 |
|Finnish-NLP/gpt2-finnish |44.19 |
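
The perplexities above were computed on the validation split; as a simplified sketch (not the exact evaluation script), perplexity for a single piece of text can be derived from the model's causal language modeling loss:

```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('Finnish-NLP/gpt2-large-finnish')
model = AutoModelForCausalLM.from_pretrained('Finnish-NLP/gpt2-large-finnish')
model.eval()

text = "Helsinki on Suomen pääkaupunki."
encoded = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss
    out = model(**encoded, labels=encoded['input_ids'])

print(math.exp(out.loss.item()))  # perplexity of this single example
```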
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": ["fi"], "license": "apache-2.0", "tags": ["finnish", "gpt2"], "datasets": ["Finnish-NLP/mc4_fi_cleaned", "wikipedia"], "widget": [{"text": "Teksti\u00e4 tuottava teko\u00e4ly on"}]} | Finnish-NLP/gpt2-large-finnish | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"finnish",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fi"
] | TAGS
#transformers #pytorch #jax #tensorboard #gpt2 #text-generation #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| GPT-2 large for Finnish
=======================
Pretrained GPT-2 large model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
this paper
and first released at this page.
Note: this model is 774M parameter variant as in Huggingface's GPT-2-large config, so not the famous big 1.5B parameter variant by OpenAI.
Model description
-----------------
Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
Intended uses & limitations
---------------------------
You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
Training data
-------------
This Finnish GPT-2 model was pretrained on the combination of six datasets:
* mc4\_fi\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset
* Yle Finnish News Archive 2011-2018
* Yle Finnish News Archive 2019-2020
* Finnish News Agency Archive (STT)
* The Suomi24 Sentences Corpus
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
Training procedure
------------------
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 640k steps (a bit over 1 epoch, 64 batch size). The optimizer used was a AdamW with learning rate 4e-5, learning rate warmup for 4000 steps and cosine decay of the learning rate after.
Evaluation results
------------------
Evaluation was done using the *validation* split of the mc4\_fi\_cleaned dataset with Perplexity (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller model variants.
Acknowledgements
----------------
This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud.
Team Members
------------
* Aapo Tanskanen, Hugging Face profile, LinkedIn profile
* Rasmus Toivanen, Hugging Face profile, LinkedIn profile
Feel free to contact us for more details
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.\n\n\nAs with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nTraining data\n-------------\n\n\nThis Finnish GPT-2 model was pretrained on the combination of six datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive 2011-2018\n* Yle Finnish News Archive 2019-2020\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 640k steps (a bit over 1 epoch, 64 batch size). The optimizer used was a AdamW with learning rate 4e-5, learning rate warmup for 4000 steps and cosine decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done using the *validation* split of the mc4\\_fi\\_cleaned dataset with Perplexity (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller model variants.\n\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] | [
"TAGS\n#transformers #pytorch #jax #tensorboard #gpt2 #text-generation #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.\n\n\nAs with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nTraining data\n-------------\n\n\nThis Finnish GPT-2 model was pretrained on the combination of six datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive 2011-2018\n* Yle Finnish News Archive 2019-2020\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 640k steps (a bit over 1 epoch, 64 batch size). The optimizer used was a AdamW with learning rate 4e-5, learning rate warmup for 4000 steps and cosine decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done using the *validation* split of the mc4\\_fi\\_cleaned dataset with Perplexity (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller model variants.\n\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] |
text-generation | transformers |
# GPT-2 medium for Finnish
Pretrained GPT-2 medium model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
**Note**: this model is the 345M parameter variant, as in Huggingface's [GPT-2-medium config](https://huggingface.co/gpt2-medium), so it is not the famous big 1.5B parameter variant by OpenAI. We also have a bigger 774M parameter variant, [gpt2-large-finnish](https://huggingface.co/Finnish-NLP/gpt2-large-finnish), available, which performs better than this model.
## Model description
Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation:
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='Finnish-NLP/gpt2-medium-finnish')
>>> generator("Tekstiä tuottava tekoäly on", max_length=30, num_return_sequences=5)
[{'generated_text': 'Tekstiä tuottava tekoäly on tullut ihmisten arkeen viime vuosina. Se auttaa hahmottamaan ja tulkitsemaan monimutkaisia kokonaisuuksia ja ilmiöitä, joita ihmiset tekevät esimerkiksi ruokakaupassa'},
{'generated_text': 'Tekstiä tuottava tekoäly on jo ottanut haltuunsa myös ihmisten käyttämiä sovelluksia ja esimerkiksi pankkipalveluita. Sen vuoksi tekoäly on tärkeä kumppani etenkin yritysten liiketoiminnan kehittämisessä.-'},
{'generated_text': 'Tekstiä tuottava tekoäly on tekoälylle luonnollinen valinta, sillä sen avulla voi kommunikoida ihmisten kanssa hyvin pitkälle samalla tavalla kuin tietokoneiden kanssa. Se on kehittynyt muun'},
{'generated_text': 'Tekstiä tuottava tekoäly on ihmisen kehittämä tekoäly, jota ei vielä ole pystytty rakentamaan. Tekoäly kykenee toimimaan esimerkiksi matemaattisissa, tilastollisissa ja sosiaalisissa'},
{'generated_text': 'Tekstiä tuottava tekoäly on jo niin iso juttu ettei sitä kannata rajoittaakaan. Ja jos se saadaan käyttöön, niin se voi jo pian syrjäyttää perinteisen'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-medium-finnish')
model = GPT2Model.from_pretrained('Finnish-NLP/gpt2-medium-finnish')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-medium-finnish')
model = TFGPT2Model.from_pretrained('Finnish-NLP/gpt2-medium-finnish', from_pt=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Training data
This Finnish GPT-2 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 360k steps (a bit over 1 epoch, 128 batch size). The optimizer used was AdamW with a learning rate of 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after.
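
A rough PyTorch-style sketch of the optimizer and learning-rate schedule described above (the actual TPU training code is not reproduced here, so treat this only as an approximation of the setup):

```python
import torch
from transformers import AutoModelForCausalLM, get_cosine_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained('Finnish-NLP/gpt2-medium-finnish')

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=4000,       # warmup steps as described above
    num_training_steps=360_000,  # total pretraining steps
)

# In a training loop, each step would call:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```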
## Evaluation results
Evaluation was done using the *validation* split of the [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned) dataset with [Perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) (the smaller the score, the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller [gpt2-finnish](https://huggingface.co/Finnish-NLP/gpt2-finnish) model variant but loses to our bigger [gpt2-large-finnish](https://huggingface.co/Finnish-NLP/gpt2-large-finnish) model.
| | Perplexity |
|------------------------------------------|------------|
|Finnish-NLP/gpt2-medium-finnish |34.08 |
|Finnish-NLP/gpt2-finnish |44.19 |
|Finnish-NLP/gpt2-large-finnish |**30.74** |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": ["fi"], "license": "apache-2.0", "tags": ["finnish", "gpt2"], "datasets": ["Finnish-NLP/mc4_fi_cleaned", "wikipedia"], "widget": [{"text": "Teksti\u00e4 tuottava teko\u00e4ly on"}]} | Finnish-NLP/gpt2-medium-finnish | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"finnish",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fi"
] | TAGS
#transformers #pytorch #jax #tensorboard #gpt2 #text-generation #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| GPT-2 medium for Finnish
========================
Pretrained GPT-2 medium model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
this paper
and first released at this page.
Note: this model is 345M parameter variant as in Huggingface's GPT-2-medium config, so not the famous big 1.5B parameter variant by OpenAI. We also have bigger 774M parameter variant gpt2-large-finnish available which performs better compared to this model.
Model description
-----------------
Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
Intended uses & limitations
---------------------------
You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
Training data
-------------
This Finnish GPT-2 model was pretrained on the combination of six datasets:
* mc4\_fi\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset
* Yle Finnish News Archive 2011-2018
* Yle Finnish News Archive 2019-2020
* Finnish News Agency Archive (STT)
* The Suomi24 Sentences Corpus
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
Training procedure
------------------
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 360k steps (a bit over 1 epoch, 128 batch size). The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after.
Evaluation results
------------------
Evaluation was done using the *validation* split of the mc4\_fi\_cleaned dataset with Perplexity (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller gpt2-finnish model variant but loses to our bigger gpt2-large-finnish model.
Acknowledgements
----------------
This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud.
Team Members
------------
* Aapo Tanskanen, Hugging Face profile, LinkedIn profile
* Rasmus Toivanen, Hugging Face profile, LinkedIn profile
Feel free to contact us for more details
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.\n\n\nAs with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nTraining data\n-------------\n\n\nThis Finnish GPT-2 model was pretrained on the combination of six datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive 2011-2018\n* Yle Finnish News Archive 2019-2020\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 360k steps (a bit over 1 epoch, 128 batch size). The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done using the *validation* split of the mc4\\_fi\\_cleaned dataset with Perplexity (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller gpt2-finnish model variant but loses to our bigger gpt2-large-finnish model.\n\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] | [
"TAGS\n#transformers #pytorch #jax #tensorboard #gpt2 #text-generation #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.\n\n\nAs with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nTraining data\n-------------\n\n\nThis Finnish GPT-2 model was pretrained on the combination of six datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive 2011-2018\n* Yle Finnish News Archive 2019-2020\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 360k steps (a bit over 1 epoch, 128 batch size). The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done using the *validation* split of the mc4\\_fi\\_cleaned dataset with Perplexity (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller gpt2-finnish model variant but loses to our bigger gpt2-large-finnish model.\n\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen, Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] |
fill-mask | transformers |
# RoBERTa large model for Finnish
This **Finnish-NLP/roberta-large-finnish-v2** model is a new version of the previously trained [Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish) model. Training hyperparameters were the same, but the training dataset was cleaned more thoroughly, with the goal of getting a better performing language model from the cleaner data. Based on the model evaluations (check the table at the end), the slightly better cleaned data didn't seem to produce a better performing model.
Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective. RoBERTa was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it
makes a difference between finnish and Finnish.
## Model description
Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Finnish-NLP/roberta-large-finnish-v2')
>>> unmasker("Moikka olen <mask> kielimalli.")
[{'score': 0.04741518571972847,
'token': 763,
'token_str': ' hyvä',
'sequence': 'Moikka olen hyvä kielimalli.'},
{'score': 0.036977022886276245,
'token': 505,
'token_str': ' myös',
'sequence': 'Moikka olen myös kielimalli.'},
{'score': 0.025283709168434143,
'token': 3089,
'token_str': ' huono',
'sequence': 'Moikka olen huono kielimalli.'},
{'score': 0.022848006337881088,
'token': 1852,
'token_str': ' toinen',
'sequence': 'Moikka olen toinen kielimalli.'},
{'score': 0.019232941791415215,
'token': 1029,
'token_str': ' siis',
'sequence': 'Moikka olen siis kielimalli.'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-finnish-v2')
model = RobertaModel.from_pretrained('Finnish-NLP/roberta-large-finnish-v2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-finnish-v2')
model = TFRobertaModel.from_pretrained('Finnish-NLP/roberta-large-finnish-v2', from_pt=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
## Training data
This Finnish RoBERTa model was pretrained on the combination of five datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with `<s>` and the end of one by `</s>`.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
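
This dynamic masking scheme matches the default behaviour of `DataCollatorForLanguageModeling` in 🤗 Transformers; a minimal sketch (not the original training code):

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained('Finnish-NLP/roberta-large-finnish-v2')

# 15% of tokens are selected; of those, 80% become <mask>, 10% a random token, 10% unchanged
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

batch = collator([tokenizer("Helsinki on Suomen pääkaupunki.")])
print(batch['input_ids'])  # the masking is re-sampled on every call, i.e. dynamically
print(batch['labels'])     # -100 everywhere except at the masked positions
```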
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 520k train steps (2 epochs, batch size 512) with a sequence length of 128 and continuing for 520k steps (1 epoch, batch size 64) with a sequence length of 512. The optimizer used for the 128 sequence training was AdamW, and for the 512 sequence training it was Adafactor (to save memory). Learning rate was 2e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), learning rate warmup for 1500 steps and linear decay of the learning rate after.
## Evaluation results
Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model and to our previous [Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish) model:
| | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length |
|----------------------------------------|----------|---------------------|---------------------|----------------------|
|Finnish-NLP/roberta-large-finnish-v2 |88.17 |94.46 |95.22 |74.83 |
|Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 |
|TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |**94.90** |**95.49** |**76.07** |
To conclude, this model didn't significantly improve on our previous [Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish) model. This model also loses slightly (~1%) to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model.
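
For reference, fine-tuning this model on a labeled text classification dataset of the kind used above could look roughly like the sketch below (the toy data, label count and hyperparameters are placeholders, not the exact evaluation setup):

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = 'Finnish-NLP/roberta-large-finnish-v2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy stand-in for a labeled Finnish news dataset
data = Dataset.from_dict({
    'text': ['Urheilu-uutinen jääkiekosta.', 'Talousuutinen pörssikursseista.'],
    'label': [0, 1],
})
data = data.map(lambda batch: tokenizer(batch['text'], truncation=True, max_length=128),
                batched=True)

args = TrainingArguments(output_dir='roberta-finnish-finetuned',
                         learning_rate=2e-5,
                         per_device_train_batch_size=16,
                         num_train_epochs=3)
Trainer(model=model, args=args, train_dataset=data, tokenizer=tokenizer).train()
```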
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": ["fi"], "license": "apache-2.0", "tags": ["finnish", "roberta"], "datasets": ["Finnish-NLP/mc4_fi_cleaned", "wikipedia"], "widget": [{"text": "Moikka olen <mask> kielimalli."}]} | Finnish-NLP/roberta-large-finnish-v2 | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"finnish",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1907.11692",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1907.11692"
] | [
"fi"
] | TAGS
#transformers #pytorch #jax #tensorboard #roberta #fill-mask #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #arxiv-1907.11692 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| RoBERTa large model for Finnish
===============================
This Finnish-NLP/roberta-large-finnish-v2 model is a new version of the previously trained Finnish-NLP/roberta-large-finnish model. Training hyperparameters were same but the training dataset was cleaned better with the goal to get better performing language model through the better cleaned data. Based on the model evaluations (check the table at the end), slightly better cleaned data didn't seem to produce better performing model.
Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective. RoBERTa was introduced in
this paper and first released in
this repository. This model is case-sensitive: it
makes a difference between finnish and Finnish.
Model description
-----------------
Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.
Intended uses & limitations
---------------------------
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
Training data
-------------
This Finnish RoBERTa model was pretrained on the combination of five datasets:
* mc4\_fi\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset
* Yle Finnish News Archive
* Finnish News Agency Archive (STT)
* The Suomi24 Sentences Corpus
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
Training procedure
------------------
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked
with '<s>' and the end of one by '</s>'
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by ''.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 520k train steps (2 epochs, batch size 512) with a sequence length of 128 and continuing for 520k steps (1 epoch, batch size 64) with a sequence length of 512. The optimizer used for the 128 sequence training was AdamW, and for the 512 sequence training it was Adafactor (to save memory). Learning rate was 2e-4, \(\beta\_{1} = 0.9\), \(\beta\_{2} = 0.98\) and \(\epsilon = 1e-6\), learning rate warmup for 1500 steps and linear decay of the learning rate after.
Evaluation results
------------------
Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our previous Finnish-NLP/roberta-large-finnish model:
To conclude, this model didn't significantly improve compared to our previous Finnish-NLP/roberta-large-finnish model. This model is also slightly (~ 1%) losing to the FinBERT (Finnish BERT) model.
Acknowledgements
----------------
This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud.
Team Members
------------
* Aapo Tanskanen, Hugging Face profile, LinkedIn profile
* Rasmus Toivanen Hugging Face profile, LinkedIn profile
Feel free to contact us for more details
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from\nneutral. Therefore, the model can have biased predictions.\n\n\nTraining data\n-------------\n\n\nThis Finnish RoBERTa model was pretrained on the combination of five datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of\nthe model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked\nwith '~~' and the end of one by '~~'\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by ''.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nContrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 520k train steps (2 epochs, batch size 512) with a sequence length of 128 and continuing for 520k steps (1 epoch, batch size 64) with a sequence length of 512. The optimizer used for the 128 sequence training was AdamW, and for the 512 sequence training it was Adafactor (to save memory). Learning rate was 2e-4, \\(\\beta\\_{1} = 0.9\\), \\(\\beta\\_{2} = 0.98\\) and \\(\\epsilon = 1e-6\\), learning rate warmup for 1500 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.\nWhen fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our previous Finnish-NLP/roberta-large-finnish model:\n\n\n\nTo conclude, this model didn't significantly improve compared to our previous Finnish-NLP/roberta-large-finnish model. This model is also slightly (~ 1%) losing to the FinBERT (Finnish BERT) model.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] | [
"TAGS\n#transformers #pytorch #jax #tensorboard #roberta #fill-mask #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #arxiv-1907.11692 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from\nneutral. Therefore, the model can have biased predictions.\n\n\nTraining data\n-------------\n\n\nThis Finnish RoBERTa model was pretrained on the combination of five datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of\nthe model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked\nwith '~~' and the end of one by '~~'\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by ''.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nContrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 520k train steps (2 epochs, batch size 512) with a sequence length of 128 and continuing for 520k steps (1 epoch, batch size 64) with a sequence length of 512. The optimizer used for the 128 sequence training was AdamW, and for the 512 sequence training it was Adafactor (to save memory). Learning rate was 2e-4, \\(\\beta\\_{1} = 0.9\\), \\(\\beta\\_{2} = 0.98\\) and \\(\\epsilon = 1e-6\\), learning rate warmup for 1500 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.\nWhen fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our previous Finnish-NLP/roberta-large-finnish model:\n\n\n\nTo conclude, this model didn't significantly improve compared to our previous Finnish-NLP/roberta-large-finnish model. This model is also slightly (~ 1%) losing to the FinBERT (Finnish BERT) model.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] |
fill-mask | transformers |
# RoBERTa large model for Finnish
Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective. RoBERTa was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it
makes a difference between finnish and Finnish.
## Model description
Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Finnish-NLP/roberta-large-finnish')
>>> unmasker("Moikka olen <mask> kielimalli.")
[{'sequence': 'Moikka olen hyvä kielimalli.',
'score': 0.1535797119140625,
'token': 767,
'token_str': ' hyvä'},
{'sequence': 'Moikka olen paras kielimalli.',
'score': 0.04795042425394058,
'token': 2888,
'token_str': ' paras'},
{'sequence': 'Moikka olen huono kielimalli.',
'score': 0.04251479730010033,
'token': 3217,
'token_str': ' huono'},
{'sequence': 'Moikka olen myös kielimalli.',
'score': 0.027469098567962646,
'token': 520,
'token_str': ' myös'},
{'sequence': 'Moikka olen se kielimalli.',
'score': 0.013878575526177883,
'token': 358,
'token_str': ' se'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-finnish')
model = RobertaModel.from_pretrained('Finnish-NLP/roberta-large-finnish')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-finnish')
model = TFRobertaModel.from_pretrained('Finnish-NLP/roberta-large-finnish', from_pt=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
## Training data
This Finnish RoBERTa model was pretrained on the combination of five datasets:
- [mc4](https://huggingface.co/datasets/mc4), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 78GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with `<s>` and the end of one by `</s>`
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
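As an illustration (not the exact pretraining code used for this model), dynamic masking of this kind can be reproduced with the `DataCollatorForLanguageModeling` utility from the `transformers` library; the example sentence below is arbitrary:

```python
from transformers import RobertaTokenizer, DataCollatorForLanguageModeling

tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-finnish')

# mlm_probability=0.15 gives the 15% masking rate; the 80/10/10 split between
# <mask>, random token and unchanged token is the collator's default behaviour.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

encoding = tokenizer("Moikka olen kielimalli.")
batch = data_collator([encoding])
print(batch["input_ids"])  # a new random masking is drawn on every call
```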
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 2 epochs with a sequence length of 128 and continuing for one more epoch with a sequence length of 512. The optimizer used is Adafactor with a learning rate of 2e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), learning rate warmup for 1500 steps and linear decay of the learning rate after.
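For reference, a fixed-learning-rate Adafactor setup with linear warmup and decay might be configured as sketched below; the β/ε values quoted above do not map one-to-one onto Adafactor's arguments, and the total step count is only a placeholder, so this is not the exact training script:

```python
from transformers import RobertaForMaskedLM
from transformers.optimization import Adafactor, get_linear_schedule_with_warmup

model = RobertaForMaskedLM.from_pretrained('Finnish-NLP/roberta-large-finnish')

# Fixed learning rate of 2e-4; relative_step and scale_parameter must be
# disabled when an explicit learning rate is given.
optimizer = Adafactor(
    model.parameters(),
    lr=2e-4,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)

# 1500 warmup steps followed by linear decay; total_steps is a placeholder.
total_steps = 100_000
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=1500, num_training_steps=total_steps
)
```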
## Evaluation results
Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) and to our previous [Finnish RoBERTa-large](https://huggingface.co/flax-community/RoBERTa-large-finnish) trained during the Hugging Face JAX/Flax community week:
| | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length |
|----------------------------------------|----------|---------------------|---------------------|----------------------|
|Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 |
|TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |**94.90** |**95.49** |**76.07** |
|flax-community/RoBERTa-large-finnish |87.72 |94.42 |95.06 |73.67 |
To conclude, this model improves on our previous [Finnish RoBERTa-large](https://huggingface.co/flax-community/RoBERTa-large-finnish) model trained during the Hugging Face JAX/Flax community week but is still slightly (~ 1%) losing to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
- Tommi Vehviläinen [Hugging Face profile](https://huggingface.co/Tommi)
Feel free to contact us for more details 🤗 | {"language": ["fi"], "license": "apache-2.0", "tags": ["finnish", "roberta"], "datasets": ["Finnish-NLP/mc4_fi_cleaned", "wikipedia"], "widget": [{"text": "Moikka olen <mask> kielimalli."}]} | Finnish-NLP/roberta-large-finnish | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"finnish",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1907.11692",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1907.11692"
] | [
"fi"
] | TAGS
#transformers #pytorch #jax #tensorboard #roberta #fill-mask #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #arxiv-1907.11692 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| RoBERTa large model for Finnish
===============================
Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective. RoBERTa was introduced in
this paper and first released in
this repository. This model is case-sensitive: it
makes a difference between finnish and Finnish.
Model description
-----------------
Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.
Intended uses & limitations
---------------------------
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
Training data
-------------
This Finnish RoBERTa model was pretrained on the combination of five datasets:
* mc4, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset
* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset
* Yle Finnish News Archive
* Finnish News Agency Archive (STT)
* The Suomi24 Sentences Corpus
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 78GB of text.
Training procedure
------------------
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with '<s>' and the end of one by '</s>'
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '<mask>'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 2 epochs with a sequence length of 128 and continuing for one more epoch with a sequence length of 512. The optimizer used is Adafactor with a learning rate of 2e-4, \(\beta\_{1} = 0.9\), \(\beta\_{2} = 0.98\) and \(\epsilon = 1e-6\), learning rate warmup for 1500 steps and linear decay of the learning rate after.
Evaluation results
------------------
Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) and to our previous Finnish RoBERTa-large trained during the Hugging Face JAX/Flax community week:
To conclude, this model improves on our previous Finnish RoBERTa-large model trained during the Hugging Face JAX/Flax community week but is still slightly (~ 1%) losing to the FinBERT (Finnish BERT) model.
Acknowledgements
----------------
This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud.
Team Members
------------
* Aapo Tanskanen, Hugging Face profile, LinkedIn profile
* Rasmus Toivanen Hugging Face profile, LinkedIn profile
* Tommi Vehviläinen Hugging Face profile
Feel free to contact us for more details
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from\nneutral. Therefore, the model can have biased predictions.\n\n\nTraining data\n-------------\n\n\nThis Finnish RoBERTa model was pretrained on the combination of five datasets:\n\n\n* mc4, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 78GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of\nthe model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked\nwith '~~' and the end of one by '~~'\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by ''.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nContrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 2 epochs with a sequence length of 128 and continuing for one more epoch with a sequence length of 512. The optimizer used is Adafactor with a learning rate of 2e-4, \\(\\beta\\_{1} = 0.9\\), \\(\\beta\\_{2} = 0.98\\) and \\(\\epsilon = 1e-6\\), learning rate warmup for 1500 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.\nWhen fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) and to our previous Finnish RoBERTa-large trained during the Hugging Face JAX/Flax community week:\n\n\n\nTo conclude, this model improves on our previous Finnish RoBERTa-large model trained during the Hugging Face JAX/Flax community week but is still slightly (~ 1%) losing to the FinBERT (Finnish BERT) model.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen Hugging Face profile, LinkedIn profile\n* Tommi Vehviläinen Hugging Face profile\n\n\nFeel free to contact us for more details"
] | [
"TAGS\n#transformers #pytorch #jax #tensorboard #roberta #fill-mask #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #arxiv-1907.11692 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from\nneutral. Therefore, the model can have biased predictions.\n\n\nTraining data\n-------------\n\n\nThis Finnish RoBERTa model was pretrained on the combination of five datasets:\n\n\n* mc4, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 78GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of\nthe model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked\nwith '~~' and the end of one by '~~'\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by ''.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nContrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 2 epochs with a sequence length of 128 and continuing for one more epoch with a sequence length of 512. The optimizer used is Adafactor with a learning rate of 2e-4, \\(\\beta\\_{1} = 0.9\\), \\(\\beta\\_{2} = 0.98\\) and \\(\\epsilon = 1e-6\\), learning rate warmup for 1500 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.\nWhen fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) and to our previous Finnish RoBERTa-large trained during the Hugging Face JAX/Flax community week:\n\n\n\nTo conclude, this model improves on our previous Finnish RoBERTa-large model trained during the Hugging Face JAX/Flax community week but is still slightly (~ 1%) losing to the FinBERT (Finnish BERT) model.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen Hugging Face profile, LinkedIn profile\n* Tommi Vehviläinen Hugging Face profile\n\n\nFeel free to contact us for more details"
] |
fill-mask | transformers |
# RoBERTa large model trained with WECHSEL method for Finnish
Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective with WECHSEL method. RoBERTa was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta).
WECHSEL method (Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models) was introduced in [this paper](https://arxiv.org/abs/2112.06598) and first released in [this repository](https://github.com/CPJKU/wechsel).
This model is case-sensitive: it makes a difference between finnish and Finnish.
## Model description
Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.
## WECHSEL method
Using the WECHSEL method, we first took the pretrained English [roberta-large](https://huggingface.co/roberta-large) model, replaced its tokenizer with our Finnish tokenizer and initialized the model's token embeddings such that they are close to semantically similar English tokens, utilizing multilingual static word embeddings (from fastText) covering English and Finnish. We were able to confirm the WECHSEL paper's finding that using this method you can save pretraining time and thus computing resources. To get an idea of the WECHSEL method's training time savings, you can check the table below, which shows the MLM evaluation accuracies during pretraining compared to the [Finnish-NLP/roberta-large-finnish-v2](https://huggingface.co/Finnish-NLP/roberta-large-finnish-v2) model, which was trained from scratch:
| | 10k train steps | 100k train steps | 200k train steps | 270k train steps |
|------------------------------------------|------------------|------------------|------------------|------------------|
|Finnish-NLP/roberta-large-wechsel-finnish |37.61 eval acc |58.14 eval acc |61.60 eval acc |62.77 eval acc |
|Finnish-NLP/roberta-large-finnish-v2 |13.83 eval acc |55.87 eval acc |58.58 eval acc |59.47 eval acc |
Downstream fine-tuning text classification results can be found at the end of this card; there, this model trained with the WECHSEL method didn't significantly improve downstream performance. However, based on tens of qualitative fill-mask examples, we noticed that on the fill-mask task this WECHSEL model significantly outperforms our other models trained from scratch.
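To give a rough intuition of the embedding initialization, here is an illustrative toy sketch; it is not the actual `wechsel` library implementation (which additionally handles subword-to-word decomposition), so see the linked repository for the real method:

```python
import numpy as np

def init_target_embeddings(source_emb, source_static, target_static, k=10):
    """Toy WECHSEL-style initialization: each target (Finnish) token embedding is a
    similarity-weighted average of the model embeddings of its k most similar
    source (English) tokens, with similarities taken from aligned static embeddings."""
    # Normalise the aligned static embeddings so dot products are cosine similarities.
    s = source_static / np.linalg.norm(source_static, axis=1, keepdims=True)
    t = target_static / np.linalg.norm(target_static, axis=1, keepdims=True)
    sims = t @ s.T                                   # (target_vocab, source_vocab)
    target_emb = np.zeros((t.shape[0], source_emb.shape[1]))
    for i, row in enumerate(sims):
        top = np.argpartition(-row, k)[:k]           # k nearest English tokens
        weights = np.exp(row[top]) / np.exp(row[top]).sum()
        target_emb[i] = weights @ source_emb[top]    # weighted average of their embeddings
    return target_emb
```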
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Finnish-NLP/roberta-large-wechsel-finnish')
>>> unmasker("Moikka olen <mask> kielimalli.")
[{'sequence': 'Moikka olen hyvä kielimalli.',
'score': 0.07757357507944107,
'token': 763,
'token_str': ' hyvä'},
{'sequence': 'Moikka olen suomen kielimalli.',
'score': 0.05297883599996567,
'token': 3641,
'token_str': ' suomen'},
{'sequence': 'Moikka olen kuin kielimalli.',
'score': 0.03747279942035675,
'token': 523,
'token_str': ' kuin'},
{'sequence': 'Moikka olen suomalainen kielimalli.',
'score': 0.031031042337417603,
'token': 4966,
'token_str': ' suomalainen'},
{'sequence': 'Moikka olen myös kielimalli.',
'score': 0.026489052921533585,
'token': 505,
'token_str': ' myös'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-wechsel-finnish')
model = RobertaModel.from_pretrained('Finnish-NLP/roberta-large-wechsel-finnish')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-wechsel-finnish')
model = TFRobertaModel.from_pretrained('Finnish-NLP/roberta-large-wechsel-finnish', from_pt=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
## Training data
This Finnish RoBERTa model was pretrained on the combination of five datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with `<s>` and the end of one by `</s>`
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 270k steps (a bit over 1 epoch, 512 batch size) with a sequence length of 128 and continuing for 180k steps (batch size 64) with a sequence length of 512. The optimizer used was Adafactor (to save memory). Learning rate was 2e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), learning rate warmup for 2500 steps and linear decay of the learning rate after.
## Evaluation results
Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model and to our previous [Finnish-NLP/roberta-large-finnish-v2](https://huggingface.co/Finnish-NLP/roberta-large-finnish-v2) and [Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish) models:
| | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length |
|------------------------------------------|----------|---------------------|---------------------|----------------------|
|Finnish-NLP/roberta-large-wechsel-finnish |88.19 |**94.91** |95.18 |74.47 |
|Finnish-NLP/roberta-large-finnish-v2 |88.17 |94.46 |95.22 |74.83 |
|Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 |
|TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |94.90 |**95.49** |**76.07** |
To conclude, this model didn't significantly improve compared to our previous models which were trained from scratch instead of using the WECHSEL method as in this model. This model is also slightly (~ 1%) losing to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model.
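For reference, downstream fine-tuning of this kind can be set up with the `Trainer` API roughly as sketched below; the dataset objects, label count and hyperparameters are placeholders rather than the exact evaluation configuration used here:

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = 'Finnish-NLP/roberta-large-wechsel-finnish'
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels is a placeholder; use the number of classes in your dataset.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=10)

def tokenize(batch):
    # 128 token sequence length, as in the Yle News / Eduskunta 128-length experiments
    return tokenizer(batch['text'], truncation=True, max_length=128)

args = TrainingArguments(output_dir='finetuned-classifier',
                         per_device_train_batch_size=32,
                         num_train_epochs=3,
                         evaluation_strategy='epoch')

# train_dataset and eval_dataset are assumed to be tokenized `datasets.Dataset`
# objects with 'text' and 'label' columns (e.g. Yle News or Eduskunta):
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset,
#                   eval_dataset=eval_dataset, tokenizer=tokenizer)
# trainer.train()
```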
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": ["fi"], "license": "apache-2.0", "tags": ["finnish", "roberta"], "datasets": ["Finnish-NLP/mc4_fi_cleaned", "wikipedia"], "widget": [{"text": "Moikka olen <mask> kielimalli."}]} | Finnish-NLP/roberta-large-wechsel-finnish | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"finnish",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1907.11692",
"arxiv:2112.06598",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1907.11692",
"2112.06598"
] | [
"fi"
] | TAGS
#transformers #pytorch #jax #tensorboard #roberta #fill-mask #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #arxiv-1907.11692 #arxiv-2112.06598 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| RoBERTa large model trained with WECHSEL method for Finnish
===========================================================
Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective with WECHSEL method. RoBERTa was introduced in
this paper and first released in
this repository.
WECHSEL method (Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models) was introduced in this paper and first released in this repository.
This model is case-sensitive: it makes a difference between finnish and Finnish.
Model description
-----------------
Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.
WECHSEL method
--------------
Using the WECHSEL method, we first took the pretrained English roberta-large model, replaced its tokenizer with our Finnish tokenizer and initialized the model's token embeddings such that they are close to semantically similar English tokens, utilizing multilingual static word embeddings (from fastText) covering English and Finnish. We were able to confirm the WECHSEL paper's finding that using this method you can save pretraining time and thus computing resources. To get an idea of the WECHSEL method's training time savings, you can check the table below, which shows the MLM evaluation accuracies during pretraining compared to the Finnish-NLP/roberta-large-finnish-v2 model, which was trained from scratch:
Downstream fine-tuning text classification results can be found at the end of this card; there, this model trained with the WECHSEL method didn't significantly improve downstream performance. However, based on tens of qualitative fill-mask examples, we noticed that on the fill-mask task this WECHSEL model significantly outperforms our other models trained from scratch.
Intended uses & limitations
---------------------------
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
Training data
-------------
This Finnish RoBERTa model was pretrained on the combination of five datasets:
* mc4\_fi\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset
* Yle Finnish News Archive
* Finnish News Agency Archive (STT)
* The Suomi24 Sentences Corpus
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
Training procedure
------------------
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with '<s>' and the end of one by '</s>'
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '<mask>'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 270k steps (a bit over 1 epoch, 512 batch size) with a sequence length of 128 and continuing for 180k steps (batch size 64) with a sequence length of 512. The optimizer used was Adafactor (to save memory). Learning rate was 2e-4, \(\beta\_{1} = 0.9\), \(\beta\_{2} = 0.98\) and \(\epsilon = 1e-6\), learning rate warmup for 2500 steps and linear decay of the learning rate after.
Evaluation results
------------------
Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our previous Finnish-NLP/roberta-large-finnish-v2 and Finnish-NLP/roberta-large-finnish models:
To conclude, this model didn't significantly improve compared to our previous models which were trained from scratch instead of using the WECHSEL method as in this model. This model is also slightly (~ 1%) losing to the FinBERT (Finnish BERT) model.
Acknowledgements
----------------
This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud.
Team Members
------------
* Aapo Tanskanen, Hugging Face profile, LinkedIn profile
* Rasmus Toivanen Hugging Face profile, LinkedIn profile
Feel free to contact us for more details
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from\nneutral. Therefore, the model can have biased predictions.\n\n\nTraining data\n-------------\n\n\nThis Finnish RoBERTa model was pretrained on the combination of five datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of\nthe model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked\nwith '~~' and the end of one by '~~'\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by ''.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nContrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 270k steps (a bit over 1 epoch, 512 batch size) with a sequence length of 128 and continuing for 180k steps (batch size 64) with a sequence length of 512. The optimizer used was Adafactor (to save memory). Learning rate was 2e-4, \\(\\beta\\_{1} = 0.9\\), \\(\\beta\\_{2} = 0.98\\) and \\(\\epsilon = 1e-6\\), learning rate warmup for 2500 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.\nWhen fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our previous Finnish-NLP/roberta-large-finnish-v2 and Finnish-NLP/roberta-large-finnish models:\n\n\n\nTo conclude, this model didn't significantly improve compared to our previous models which were trained from scratch instead of using the WECHSEL method as in this model. This model is also slightly (~ 1%) losing to the FinBERT (Finnish BERT) model.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] | [
"TAGS\n#transformers #pytorch #jax #tensorboard #roberta #fill-mask #finnish #fi #dataset-Finnish-NLP/mc4_fi_cleaned #dataset-wikipedia #arxiv-1907.11692 #arxiv-2112.06598 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model contains a lot of unfiltered content from the internet, which is far from\nneutral. Therefore, the model can have biased predictions.\n\n\nTraining data\n-------------\n\n\nThis Finnish RoBERTa model was pretrained on the combination of five datasets:\n\n\n* mc4\\_fi\\_cleaned, the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).\n* wikipedia We used the Finnish subset of the wikipedia (August 2021) dataset\n* Yle Finnish News Archive\n* Finnish News Agency Archive (STT)\n* The Suomi24 Sentences Corpus\n\n\nRaw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of\nthe model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked\nwith '~~' and the end of one by '~~'\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by ''.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nContrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).",
"### Pretraining\n\n\nThe model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 270k steps (a bit over 1 epoch, 512 batch size) with a sequence length of 128 and continuing for 180k steps (batch size 64) with a sequence length of 512. The optimizer used was Adafactor (to save memory). Learning rate was 2e-4, \\(\\beta\\_{1} = 0.9\\), \\(\\beta\\_{2} = 0.98\\) and \\(\\epsilon = 1e-6\\), learning rate warmup for 2500 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nEvaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.\nWhen fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our previous Finnish-NLP/roberta-large-finnish-v2 and Finnish-NLP/roberta-large-finnish models:\n\n\n\nTo conclude, this model didn't significantly improve compared to our previous models which were trained from scratch instead of using the WECHSEL method as in this model. This model is also slightly (~ 1%) losing to the FinBERT (Finnish BERT) model.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud.\n\n\nTeam Members\n------------\n\n\n* Aapo Tanskanen, Hugging Face profile, LinkedIn profile\n* Rasmus Toivanen Hugging Face profile, LinkedIn profile\n\n\nFeel free to contact us for more details"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9901
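
For example, the fine-tuned model can be tried out with the standard question-answering pipeline (the question and context below are only an illustration):

```python
from transformers import pipeline

qa = pipeline('question-answering', model='Firat/albert-base-v2-finetuned-squad')
result = qa(question="What dataset was the model fine-tuned on?",
            context="This model is a fine-tuned version of albert-base-v2 on the SQuAD dataset.")
print(result['answer'], result['score'])
```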
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8584 | 1.0 | 5540 | 0.9056 |
| 0.6473 | 2.0 | 11080 | 0.8975 |
| 0.4801 | 3.0 | 16620 | 0.9901 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "albert-base-v2-finetuned-squad", "results": []}]} | Firat/albert-base-v2-finetuned-squad | null | [
"transformers",
"pytorch",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #albert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
| albert-base-v2-finetuned-squad
==============================
This model is a fine-tuned version of albert-base-v2 on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9901
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1
* Datasets 1.17.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #albert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1460
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2856 | 1.0 | 2767 | 1.1919 |
| 1.012 | 2.0 | 5534 | 1.1332 |
| 0.8512 | 3.0 | 8301 | 1.1460 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.18.0
- Tokenizers 0.10.3
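For reference, the hyperparameters listed above map onto `TrainingArguments` roughly as shown below; the output directory name is an assumption, and dataset preprocessing and the `Trainer` call are omitted.

```python
# Sketch only: the hyperparameters listed above expressed as TrainingArguments.
# The output_dir name is an assumption; the remaining values mirror the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
print(args.learning_rate, args.num_train_epochs)
```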
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]} | Firat/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1460
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1
* Datasets 1.18.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8926 | 1.0 | 5536 | 0.8694 |
| 0.6821 | 2.0 | 11072 | 0.8428 |
| 0.5335 | 3.0 | 16608 | 0.8953 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
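A minimal sketch of extractive-QA inference from the raw start/end logits, using the repository id from this card's metadata; the question/context pair is invented.

```python
# Sketch of span extraction from the raw model outputs. The model id comes from
# this card's metadata; the example question/context is invented.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "Firat/roberta-base-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Which base model was fine-tuned?"
context = "roberta-base was fine-tuned on the SQuAD dataset for three epochs."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode that span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer_ids = inputs["input_ids"][0][start : end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```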
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "roberta-base-finetuned-squad", "results": []}]} | Firat/roberta-base-finetuned-squad | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us
| roberta-base-finetuned-squad
============================
This model is a fine-tuned version of roberta-base on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8953
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1
* Datasets 1.17.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #roberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-guarani-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2392
- Wer: 1.0743
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 18.2131 | 49.94 | 400 | 3.2901 | 1.0 |
| 2.0496 | 99.94 | 800 | 3.2392 | 1.0743 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
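A minimal CTC inference sketch, assuming the processor (feature extractor + tokenizer) was saved with this checkpoint and that the input is a mono recording; the file path is a placeholder.

```python
# Sketch only: greedy CTC decoding with this checkpoint. Assumes the processor was
# saved alongside the model and that "example_guarani.wav" (a placeholder path)
# is a mono recording.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "FitoDS/wav2vec2-large-xls-r-300m-guarani-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sample_rate = torchaudio.load("example_guarani.wav")          # placeholder
speech = speech.mean(dim=0)                                           # down-mix to mono
speech = torchaudio.functional.resample(speech, sample_rate, 16_000)  # model expects 16 kHz

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```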
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-guarani-colab", "results": []}]} | FitoDS/wav2vec2-large-xls-r-300m-guarani-colab | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-guarani-colab
=======================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 3.2392
* Wer: 1.0743
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 100
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 133.5167
- Wer: 18.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
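The WER reported above is computed from reference/hypothesis transcription pairs; a minimal sketch with the `jiwer` package, using invented strings rather than the AB split:

```python
# Sketch of the word error rate metric, computed with the jiwer package.
# The reference/hypothesis strings are invented for illustration only.
import jiwer

references = ["example reference transcription", "another reference sentence"]
hypotheses = ["example reference transkription", "another sentence"]

print(jiwer.wer(references, hypotheses))
```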
| {"language": ["ab"], "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | FitoDS/xls-r-ab-test | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ab"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us
|
#
This model is a fine-tuned version of hf-test/xls-r-dummy on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 133.5167
- Wer: 18.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| [
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the COMMON_VOICE - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 133.5167\n- Wer: 18.9286",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 2.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us \n",
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the COMMON_VOICE - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 133.5167\n- Wer: 18.9286",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 2.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0"
] |
text-generation | transformers |
# Sheldon Cooper from The Big Bang Theory Show DialoGPT Model | {"tags": ["conversational"]} | Flampt/DialoGPT-medium-Sheldon | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Sheldon Cooper from The Big Bang Theory Show DialoGPT Model | [
"# Sheldon Cooper from The Big Bang Theory Show DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Sheldon Cooper from The Big Bang Theory Show DialoGPT Model"
] |
text-generation | transformers | #
| {"tags": ["conversational"]} | For/sheldonbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| #
| [
"#"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"#"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fa-QA-v1
A Persian question-answering model based on BERT.
This model is a fine-tuned version of [ParsBERT](https://arxiv.org/abs/2005.12515) on the PersianQA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2563 | 1.0 | 1126 | 1.7222 |
| 1.3372 | 2.0 | 2252 | 1.7297 |
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
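A minimal usage sketch with the repository id from this card's metadata; the Persian question/context pair is invented for illustration.

```python
# Minimal usage sketch. The model id comes from this card's metadata; the Persian
# question/context pair is invented ("Where is the capital of Iran?" /
# "Tehran is the capital of Iran.").
from transformers import pipeline

qa = pipeline("question-answering", model="ForutanRad/bert-fa-QA-v1")

result = qa(
    question="پایتخت ایران کجاست؟",
    context="تهران پایتخت ایران است.",
)
print(result["answer"])
```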
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model_index": [{"name": "bert-fa-QA-v1", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}}]}]} | ForutanRad/bert-fa-QA-v1 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"arxiv:2005.12515",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2005.12515"
] | [] | TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #arxiv-2005.12515 #license-apache-2.0 #endpoints_compatible #region-us
| bert-fa-QA-v1
=============
A Persian question-answering model based on BERT.
This model is a fine-tuned version of ParsBERT on the PersianQA dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7297
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.9.0
* Pytorch 1.9.0+cu102
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0+cu102\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #arxiv-2005.12515 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0+cu102\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# Chat Bot Test | {"tags": ["conversational"]} | FosterPatch/GoT-test | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Chat Bot Test | [
"# Chat Bot Test"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Chat Bot Test"
] |
null | null | # Program Synthesis Data
Generated program synthesis datasets used to train [dreamcoder](https://github.com/ellisk42/ec).
Currently just supports text & list data.
```python
_FEATURES = datasets.Features(
{
"description": datasets.Value("string"),
"input": datasets.Value("string"),
"output": datasets.Value("string"),
"types": datasets.Value("string")
}
)
```
![](https://huggingface.co/Fraser/program-synthesis/resolve/main/img.png)
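A hedged loading sketch: the hub id `Fraser/program-synthesis` is inferred from the thumbnail URL above, and the split name is an assumption.

```python
# Hedged loading sketch. The dataset id is inferred from the thumbnail URL in this
# card, and the "train" split name is an assumption.
from datasets import load_dataset

ds = load_dataset("Fraser/program-synthesis", split="train")
print(ds.features)                           # description / input / output / types
print(ds[0]["description"], ds[0]["types"])
```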
| {"language": ["en"], "license": "mit", "tags": ["program-synthesis"], "datasets": ["program-synthesis"], "thumbnail": "https://huggingface.co/Fraser/program-synthesis/resolve/main/img.png"} | Fraser/to_delete | null | [
"program-synthesis",
"en",
"dataset:program-synthesis",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#program-synthesis #en #dataset-program-synthesis #license-mit #region-us
| # Program Synthesis Data
Generated program synthesis datasets used to train dreamcoder.
Currently just supports text & list data.
![](URL
| [
"# Program Synthesis Data\n\nGenerated program synthesis datasets used to train dreamcoder.\n\nCurrently just supports text & list data.\n\n\n\n![](URL"
] | [
"TAGS\n#program-synthesis #en #dataset-program-synthesis #license-mit #region-us \n",
"# Program Synthesis Data\n\nGenerated program synthesis datasets used to train dreamcoder.\n\nCurrently just supports text & list data.\n\n\n\n![](URL"
] |
null | null | # Transformer-VAE (WIP)
A PyTorch Transformer-VAE model.
Uses an MMD loss to prevent posterior collapse.
Will setup in the next month or so.
## ToDo
- [ ] Copy in old repo code.
- [ ] Make a bunch of sample training runs.
- [ ] Make an interpolation widget? | {} | Fraser/transformer-vae | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| # Transformer-VAE (WIP)
A PyTorch Transformer-VAE model.
Uses an MMD loss to prevent posterior collapse.
Will be set up in the next month or so.
## ToDo
- [ ] Copy in old repo code.
- [ ] Make a bunch of sample training runs.
- [ ] Make an interpolation widget? | [
"# Transformer-VAE (WIP)\n\nA PyTorch Transformer-VAE model.\n\nUses an MMD loss to prevent posterior collapse.\n\nWill setup in the next month or so.",
"## ToDo\n- [ ] Copy in old repo code.\n- [ ] Make a bunch of sample training runs.\n- [ ] Make an interpolation widget?"
] | [
"TAGS\n#region-us \n",
"# Transformer-VAE (WIP)\n\nA PyTorch Transformer-VAE model.\n\nUses an MMD loss to prevent posterior collapse.\n\nWill setup in the next month or so.",
"## ToDo\n- [ ] Copy in old repo code.\n- [ ] Make a bunch of sample training runs.\n- [ ] Make an interpolation widget?"
] |