pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1–900k) | metadata (stringlengths 2–438k) | id (stringlengths 5–122) | last_modified (null) | tags (sequencelengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25) | arxiv (sequencelengths 0–201) | languages (sequencelengths 0–1.83k) | tags_str (stringlengths 17–9.34k) | text_str (stringlengths 0–389k) | text_lists (sequencelengths 0–722) | processed_texts (sequencelengths 1–723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
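For readers who want to explore the rows below programmatically, a minimal sketch using the `datasets` library follows. The dataset's Hub path is not stated in this dump, so the repository id used here is a hypothetical placeholder.

```python
# Hedged sketch: "username/model-cards-dump" is a hypothetical placeholder,
# since this dump does not name the dataset's actual Hub repository.
from datasets import load_dataset

ds = load_dataset("username/model-cards-dump", split="train")

# The schema header above maps to these columns; inspect a row to confirm.
print(ds.column_names)                      # e.g. ['pipeline_tag', 'library_name', 'text', ...]
print(ds[0]["id"], ds[0]["pipeline_tag"])   # first row's repo id and task tag
```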
null | null | from transformers import GPTNeoForCausalLM, GPT2Tokenizer
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
prompt = "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " \
... "previously unexplored valley, in the Andes Mountains. Even more surprising to the " \
... "researchers was the fact that the unicorns spoke perfect English."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=100,)
gen_text = tokenizer.batch_decode(gen_tokens)[0] | {} | Begimay/Task | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| from transformers import GPTNeoForCausalLM, GPT2Tokenizer
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
prompt = "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " \
... "previously unexplored valley, in the Andes Mountains. Even more surprising to the " \
... "researchers was the fact that the unicorns spoke perfect English."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=100,)
gen_text = tokenizer.batch_decode(gen_tokens)[0] | [] | [
"TAGS\n#region-us \n"
] |
text-generation | transformers |
tags:
- conversational
inference: false
conversational: true
# First-time chatbot built by following a guide; the epoch count is low due to limited resources. | {} | BenWitter/DialoGPT-small-Tyrion | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
tags:
- conversational
inference: false
conversational: true
# First-time chatbot built by following a guide; the epoch count is low due to limited resources. | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
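A hedged sketch of how the hyperparameters above might be expressed as `transformers` `TrainingArguments`; the actual training script is not included in this card, so the mapping is an assumption based on the Trainer API (Adam betas/epsilon are the defaults, and "Native AMP" corresponds to `fp16=True`).

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-hindi-colab",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                       # Native AMP mixed precision
)
```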
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hindi-colab", "results": []}]} | Bharathdamu/wav2vec2-large-xls-r-300m-hindi-colab | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-large-xls-r-300m-hindi-colab
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| [
"# wav2vec2-large-xls-r-300m-hindi-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-large-xls-r-300m-hindi-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
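The card itself does not include inference code; a hedged transcription sketch is shown below. It assumes the repository ships the usual Wav2Vec2 processor/tokenizer files and that the input audio is 16 kHz mono.

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hedged sketch; assumes the fine-tuned repo contains processor/tokenizer files.
model_id = "Bharathdamu/wav2vec2-large-xls-r-300m-hindi"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Replace this silent placeholder with real 16 kHz mono audio samples.
speech = np.zeros(16_000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```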
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hindi", "results": []}]} | Bharathdamu/wav2vec2-large-xls-r-300m-hindi | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-large-xls-r-300m-hindi
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| [
"# wav2vec2-large-xls-r-300m-hindi\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-large-xls-r-300m-hindi\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sst2
This model was trained from scratch on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3000
- Accuracy: 0.9450
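A hedged inference sketch using the `pipeline` API; it assumes the repository includes tokenizer files, and the label names depend on the uploaded config (they may read `LABEL_0`/`LABEL_1` rather than negative/positive).

```python
from transformers import pipeline

# Hedged sketch; label names come from the model's config.
classifier = pipeline("text-classification", model="Bhumika/roberta-base-finetuned-sst2")
print(classifier("A touching, gripping film from start to finish."))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}]
```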
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1106 | 1.0 | 4210 | 0.3326 | 0.9255 |
| 0.1497 | 2.0 | 8420 | 0.2858 | 0.9369 |
| 0.1028 | 3.0 | 12630 | 0.3128 | 0.9335 |
| 0.0872 | 4.0 | 16840 | 0.3000 | 0.9450 |
| 0.0571 | 5.0 | 21050 | 0.3378 | 0.9427 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "roberta-base-finetuned-sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "sst2"}, "metrics": [{"type": "accuracy", "value": 0.944954128440367, "name": "Accuracy"}]}]}]} | Bhumika/roberta-base-finetuned-sst2 | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-glue #model-index #autotrain_compatible #endpoints_compatible #region-us
| roberta-base-finetuned-sst2
===========================
This model was trained from scratch on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3000
* Accuracy: 0.9450
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-glue #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
text2text-generation | transformers |
# Spell checker using T5 base transformer
A simple spell checker built using T5-Base transformer. To use this model
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Bhuvana/t5-base-spellchecker")
model = AutoModelForSeq2SeqLM.from_pretrained("Bhuvana/t5-base-spellchecker")
def correct(inputs):
    # Encode the misspelled text and sample a corrected sequence from T5.
    input_ids = tokenizer.encode(inputs, return_tensors="pt")
    sample_output = model.generate(
        input_ids,
        do_sample=True,
        max_length=50,
        top_p=0.99,
        num_return_sequences=1,
    )
    res = tokenizer.decode(sample_output[0], skip_special_tokens=True)
    return res
text = "christmas is celbrated on decembr 25 evry ear"
print(correct(text))
```
This should print the corrected statement
```
christmas is celebrated on december 25 every year
```
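Because the example above samples (`do_sample=True`, `top_p=0.99`), its output can vary between runs. A hedged deterministic alternative is sketched below; beam search is an assumption for illustration and is not part of the original card.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Bhuvana/t5-base-spellchecker")
model = AutoModelForSeq2SeqLM.from_pretrained("Bhuvana/t5-base-spellchecker")

def correct_deterministic(text: str) -> str:
    # Beam search instead of sampling, so repeated calls return the same output.
    input_ids = tokenizer.encode(text, return_tensors="pt")
    output = model.generate(input_ids, num_beams=4, max_length=50, early_stopping=True)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(correct_deterministic("christmas is celbrated on decembr 25 evry ear"))
```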
You can also type the text under the Hosted inference API and get predictions online.
| {"widget": [{"text": "christmas is celbrated on decembr 25 evry ear"}]} | Bhuvana/t5-base-spellchecker | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Spell checker using T5 base transformer
A simple spell checker built using T5-Base transformer. To use this model
This should print the corrected statement
You can also type the text under the Hosted inference API and get predictions online.
| [
"# Spell checker using T5 base transformer\nA simple spell checker built using T5-Base transformer. To use this model \n\n\n\nThis should print the corrected statement\n\n\nYou can also type the text under the Hosted inference API and get predictions online."
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Spell checker using T5 base transformer\nA simple spell checker built using T5-Base transformer. To use this model \n\n\n\nThis should print the corrected statement\n\n\nYou can also type the text under the Hosted inference API and get predictions online."
] |
text-generation | transformers | #hi | {"tags": ["conversational"]} | Biasface/DDDC | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| #hi | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | #hi | {"tags": ["conversational"]} | Biasface/DDDC2 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| #hi | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
fill-mask | transformers | ``````
!pip install transformers
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("BigSalmon/BertaMyWorda")
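# Hedged usage sketch (not part of the original card): fill a masked token
# with the objects loaded above; RoBERTa-style tokenizers use "<mask>".
from transformers import pipeline
unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(unmasker("The capital of France is <mask>.")[0]["token_str"])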
`````` | {} | BigSalmon/BertaMyWorda | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
!pip install transformers
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("BigSalmon/BertaMyWorda")
| [] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | https://huggingface.co/spaces/BigSalmon/MASK2 | {} | BigSalmon/FormalBerta3 | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| URL | [] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | https://huggingface.co/spaces/BigSalmon/MASK2 | {} | BigSalmon/FormalRobertaa | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #has_space #region-us
| URL | [] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
fill-mask | transformers | https://huggingface.co/spaces/BigSalmon/MASK2 | {} | BigSalmon/FormalRobertaaa | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| URL | [] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers | Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
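A hedged loading-and-generation sketch for this checkpoint; it mirrors the loader shown in the sibling Lincoln3 card and is not part of this card's original text.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch; the prompt format follows the "How To Make Prompt" block above.
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln")

prompt = (
    "informal english: i am very ready to do that just that.\n"
    "Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.\n"
    "***\n"
    "informal english: corn fields are all across illinois, visible once you leave chicago.\n"
    "Translated into the Style of Abraham Lincoln:"
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, do_sample=True, top_p=0.9, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```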
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
``` | {} | BigSalmon/GPTNeo350MInformalToFormalLincoln | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
| Trained on this model: URL
| [] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-generation | transformers | Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
``` | {} | BigSalmon/GPTNeo350MInformalToFormalLincoln2 | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
| Trained on this model: URL
| [] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-generation | transformers | Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln3")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln3")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
``` | {} | BigSalmon/GPTNeo350MInformalToFormalLincoln3 | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
| Trained on this model: URL
| [] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-generation | transformers | Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln3")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln3")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
``` | {} | BigSalmon/GPTNeo350MInformalToFormalLincoln4 | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
| Trained on this model: URL
| [] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-generation | transformers | Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln3")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln3")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
``` | {} | BigSalmon/GPTNeo350MInformalToFormalLincoln5 | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
| Trained on this model: URL
| [] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-generation | transformers | Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln6")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln6")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
``` | {} | BigSalmon/GPTNeo350MInformalToFormalLincoln6 | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
| Trained on this model: URL
| [] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InfillFormalLincoln")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InfillFormalLincoln")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2Space (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
````
```
infill: increasing the number of sidewalks in suburban areas will [MASK].
Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ).
infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago.
infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly.
Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly.
infill:
``` | {} | BigSalmon/InfillFormalLincoln | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
'
| [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln14")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln14")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english:
```` | {} | BigSalmon/InformalToFormalLincoln14 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
' | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln15")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln15")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
The guys were ( enlisted to spearhead the cause / tasked with marshaling the movement forward / charged with driving the initiative onward / vested with the assignment of forwarding the mission)
informal english: friday should no longer be a workday, but a day added to the weekend, suffusing people with the ability to spend time with their families.
Translated into the Style of Abraham Lincoln: the weekend should come to include friday, ( broadening the window of time for one to be in the company of their family / ( multiplying / swelling / turbocharging / maximizing ) the interval for one to ( reconnect with / feel the warmth of ) their loved ones ).
informal english:
````
| {} | BigSalmon/InformalToFormalLincoln15 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
'
| [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln16")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln16")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```` | {} | BigSalmon/InformalToFormalLincoln16 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
' | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln17")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln17")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```` | {} | BigSalmon/InformalToFormalLincoln17 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
' | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln18")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln18")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2Space (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```` | {} | BigSalmon/InformalToFormalLincoln18 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
' | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln19")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln19")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2Space (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
````
```
###
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
###
- with 2,000,000 individual articles on everything
- wikipedia is the #8 site on the world wide web
- created by anyone with access to a computer
- growing at fast rate
- proof that collaborative community-based projects are the future
Text: encompassing a staggering 2,000,000 articles on every subject conceivable, wikipedia is the 8th most visited website in the world. borne of the collective efforts of anyone with an internet connection, its contents are increasing exponentially. most compellingly, however, this effort is an affirmation that community-based initiatives are the future.
###
-
``` | {} | BigSalmon/InformalToFormalLincoln19 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
'
| [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
Wordy to Concise:
Fill Missing Phrase:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln20")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln20")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
````
```
infill: increasing the number of sidewalks in suburban areas will [MASK].
Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ).
infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago.
infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly.
Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly.
infill:
```
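A sketch of driving the infill format programmatically. For this GPT-2-style model, [MASK] is ordinary prompt text rather than a special token; the helper below, its name, and the sample sentence are made up for illustration, and it simply assembles the few-shot prompt before generation:
```
def build_infill_prompt(sentence_with_masks):
    # one worked example from above, then the new sentence to complete
    return (
        "infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago.\n"
        "Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois ), "
        "( manifesting themselves ) visibly as one ventures beyond chicago.\n"
        f"infill: {sentence_with_masks}\n"
        "Translated into the Style of Abraham Lincoln:"
    )

prompt = build_infill_prompt("remote work [MASK] the boundary between office and home.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```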
```
***
wordy: chancing upon a linux user is a rare occurrence in the present day.
Translate into Concise Text: present-day linux users are rare.
***
wordy: an interest in classical music is becoming more and more less popular.
Translate into Concise Text: classical music appreciation is dwindling.
Translate into Concise Text: waning interest in classic music persists.
Translate into Concise Text: interest in classic music is fading.
***
wordy: the ice cream was only one dollar, but it was not a good value for the size.
Translate into Concise Text: the one dollar ice cream was overpriced for its size.
Translate into Concise Text: overpriced, the one dollar ice cream was small.
***
wordy:
``` | {} | BigSalmon/InformalToFormalLincoln20 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
Wordy to Concise:
Fill Missing Phrase:
'
| [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
Wordy to Concise:
Fill Missing Phrase:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln21")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln21")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
````
```
infill: increasing the number of sidewalks in suburban areas will [MASK].
Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ).
infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago.
infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly.
Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly.
infill:
```
```
***
wordy: chancing upon a linux user is a rare occurrence in the present day.
Translate into Concise Text: present-day linux users are rare.
***
wordy: an interest in classical music is becoming more and more less popular.
Translate into Concise Text: classical music appreciation is dwindling.
Translate into Concise Text: waning interest in classic music persists.
Translate into Concise Text: interest in classic music is fading.
***
wordy: the ice cream was only one dollar, but it was not a good value for the size.
Translate into Concise Text: the one dollar ice cream was overpriced for its size.
Translate into Concise Text: overpriced, the one dollar ice cream was small.
***
wordy:
``` | {} | BigSalmon/InformalToFormalLincoln21 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| Informal to Formal:
Wordy to Concise:
Fill Missing Phrase:
'
| [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincolnDistilledGPT2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincolnDistilledGPT2")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english:
```` | {} | BigSalmon/InformalToFormalLincolnDistilledGPT2 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
' | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln10")
```
```
How To Make Prompt:
Original: freedom of the press is a check against political corruption.
Edited: fundamental to the spirit of democracy, freedom of the press is a check against political corruption.
Edited 2: ever at odds with tyranny, freedom of the press is a check against political corruption.
Edited 3: never to be neglected, freedom of the press is a check against political corruption.
Original: solar is a beacon of achievement.
Edited: central to decoupling from the perils of unsustainable energy, solar is a beacon of achievement.
Edited 2: key to a future beyond fossil fuels, solar is a beacon of achievement.
Original: milan is nevertheless ambivalent towards his costly terms.
Edited: keen on contracting him, milan is nevertheless ambivalent towards his costly terms.
Edited 2: intent on securing his services, milan is nevertheless ambivalent towards his costly terms.
Original:
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
```` | {} | BigSalmon/MrLincoln10 | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
' | [] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln11")
```
```
How To Make Prompt:
Original: freedom of the press is a check against political corruption.
Edited: fundamental to the spirit of democracy, freedom of the press is a check against political corruption.
Edited 2: ever at odds with tyranny, freedom of the press is a check against political corruption.
Edited 3: never to be neglected, freedom of the press is a check against political corruption.
Original: solar is a beacon of achievement.
Edited: central to decoupling from the perils of unsustainable energy, solar is a beacon of achievement.
Edited 2: key to a future beyond fossil fuels, solar is a beacon of achievement.
Original: milan is nevertheless ambivalent towards his costly terms.
Edited: keen on contracting him, milan is nevertheless ambivalent towards his costly terms.
Edited 2: intent on securing his services, milan is nevertheless ambivalent towards his costly terms.
Original:
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
```` | {} | BigSalmon/MrLincoln11 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
' | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln12")
```
```
https://huggingface.co/spaces/BigSalmon/InformalToFormal
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
```` | {} | BigSalmon/MrLincoln12 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| Informal to Formal:
' | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/MrLincoln125MNeo")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln125MNeo")
```
```
https://huggingface.co/spaces/BigSalmon/InformalToFormal
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
```` | {} | BigSalmon/MrLincoln125MNeo | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us
| Informal to Formal:
' | [] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln13")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
```` | {} | BigSalmon/MrLincoln13 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
' | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln5")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english:
```` | {} | BigSalmon/MrLincoln5 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
' | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln6")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
```` | {} | BigSalmon/MrLincoln6 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
' | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln7")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
```` | {} | BigSalmon/MrLincoln8 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Informal to Formal:
' | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
fill-mask | transformers | Example Prompt:
```
informal english: things are better when they are open source, because they are constantly being updated to enhance experience.
Translated into the Style of Abraham Lincoln: in the open-source paradigm, code is ( ceaselessly / perpetually ) being ( reengineered / revamped / polished ), thereby ( advancing / enhancing / optimizing / <mask> ) the user experience.
```
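A usage sketch for the prompt format above, assuming the checkpoint works with the standard fill-mask pipeline and RoBERTa's `<mask>` token (the input reuses the example prompt from this card):
```
from transformers import pipeline

unmasker = pipeline("fill-mask", model="BigSalmon/MrLincolnBerta")
prompt = (
    "informal english: things are better when they are open source, because they are constantly being updated to enhance experience.\n"
    "Translated into the Style of Abraham Lincoln: in the open-source paradigm, code is ( ceaselessly / perpetually ) being "
    "( reengineered / revamped / polished ), thereby ( advancing / enhancing / optimizing / <mask> ) the user experience."
)
for candidate in unmasker(prompt, top_k=5):
    print(candidate["token_str"], round(candidate["score"], 3))
```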
Demo: https://huggingface.co/spaces/BigSalmon/MASK2 | {} | BigSalmon/MrLincolnBerta | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #has_space #region-us
| Example Prompt:
Demo: URL | [] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-generation | transformers | This can be used to paraphrase. I recommend using the code I have attached below. You can generate it without using LogProbs, but you are likely to be best served by manually examining the most likely outputs.
If this interests you, check out https://huggingface.co/BigSalmon/MrLincoln12 or my other MrLincoln repos.
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/Parentheses")
```
Example Prompt:
```
***
The Milwaukee Bucks are for sure in title contention.
The Milwaukee Bucks are ( virtually assured / all but certain to / on the cusp of / well positioned for ) title contention.
***
Discord is an up-and-coming platform, attracting people from all walks of life.
Discord is ( an / a ) ( up-and-coming platform / platform in the ascendant / medium on the rise ), ( drawing in / wooing / winning over ) ( people / individuals / consumers / audiences ) from all ( walks of life / corners of the universe / horizons )...
***
HuggingFace is an amazing company.
HuggingFace is an (
```
```
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"  # assumption: fall back to CPU when no GPU is present
model = model.to(device)

prompt = "Insert Your Prompt Here. It is Best To Have a Few Examples Before Like The Example Prompt Shows."
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput.to(device)

# forward pass over the prompt; keep the logits for the final position only
logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False)
logits = logits[0, -1]
probabilities = torch.nn.functional.softmax(logits, dim=-1)

# list the 500 most likely next tokens so they can be inspected manually
best_logits, best_indices = logits.topk(500)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
for i in range(500):
    m = str([best_words[i]])
    m = m.replace("[' ", "").replace("']", "")
    print(m)
``` | {} | BigSalmon/ParaphraseParentheses | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| This can be used to paraphrase. I recommend using the code I have attached below. You can generate it without using LogProbs, but you are likely to be best served by manually examining the most likely outputs.
If this interests you, check out URL or my other MrLincoln repos.
Example Prompt:
| [] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | This can be used to paraphrase. I recommend using the code I have attached below. You can generate it without using LogProbs, but you are likely to be best served by manually examining the most likely outputs.
If this interests you, check out https://huggingface.co/BigSalmon/MrLincoln12 or my other MrLincoln repos.
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/ParaphraseParentheses2.0")
```
Example Prompt:
```
the nba is [mask] [mask] viewership.
the nba is ( facing / witnessing / confronted with / suffering from / grappling with ) ( lost / tanking ) viewership...
ai is certain to [mask] the third industrial revolution.
ai is certain to ( breed / catalyze / inaugurate / catalyze / usher in / call forth / turn loose / lend its name to ) the third industrial revolution.
the modern-day knicks are a disgrace to [mask].
the modern-day knicks are a disgrace to the franchise's ( rich legacy / tradition of excellence / uniquely distinguished record ).
HuggingFace is [mask].
HuggingFace is ( an amazing company /
```
```
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"  # assumption: fall back to CPU when no GPU is present
model = model.to(device)

prompt = "Insert Your Prompt Here. It is Best To Have a Few Examples Before Like The Example Prompt Shows."
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput.to(device)

# forward pass over the prompt; keep the logits for the final position only
logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False)
logits = logits[0, -1]
probabilities = torch.nn.functional.softmax(logits, dim=-1)

# list the 500 most likely next tokens so they can be inspected manually
best_logits, best_indices = logits.topk(500)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
for i in range(500):
    m = str([best_words[i]])
    m = m.replace("[' ", "").replace("']", "")
    print(m)
``` | {} | BigSalmon/ParaphraseParentheses2.0 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| This can be used to paraphrase. I recommend using the code I have attached below. You can generate it without using LogProbs, but you are likely to be best served by manually examining the most likely outputs.
If this interests you, check out URL or my other MrLincoln repos.
Example Prompt:
| [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Converting Points to Paragraphs
Example Prompts:
```
###
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
###
- with 2,000,000 individual articles on everything
- wikipedia is the #8 site on the world wide web
- created by anyone with access to a computer
- growing at fast rate
- proof that collaborative community-based projects are the future
Text: encompassing a staggering 2,000,000 articles on every subject conceivable, wikipedia is the 8th most visited website in the world. borne of the collective efforts of anyone with an internet connection, its contents are increasing exponentially. most compellingly, however, this effort is an affirmation that community-based initiatives are the future.
###
-
``` | {} | BigSalmon/Points | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| Converting Points to Paragraphs
Example Prompts:
| [] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-generation | transformers | Converting Points or Headlines to Paragraphs
Example Prompts:
```
###
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
###
- with 2,000,000 individual articles on everything
- wikipedia is the #8 site on the world wide web
- created by anyone with access to a computer
- growing at fast rate
- proof that collaborative community-based projects are the future
Text: encompassing a staggering 2,000,000 articles on every subject conceivable, wikipedia is the 8th most visited website in the world. borne of the collective efforts of anyone with an internet connection, its contents are increasing exponentially. most compellingly, however, this effort is an affirmation that community-based initiatives are the future.
###
-
```
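A sketch of assembling the points-to-paragraph prompt from a plain list of bullet points. The repo id is taken from this card; the sample points and the generation settings are illustrative:
```
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/Points2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/Points2")

# one worked example from above, kept verbatim as the few-shot block
FEW_SHOT = (
    "###\n"
    "- declining viewership facing the nba.\n"
    "- does not have to be this way.\n"
    "- in fact, many solutions exist.\n"
    "- the four point line would surely draw in eyes.\n"
    "Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. "
    "in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped "
    "four-point line would surely juice viewership."
)

points = [
    "electric cars are getting cheaper every year",
    "charging networks are expanding quickly",
    "many cities now offer purchase incentives",
]
prompt = FEW_SHOT + "\n###\n" + "\n".join(f"- {p}" for p in points) + "\nText:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=90, do_sample=True, top_p=0.9,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```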
```
Essay Intro (Sega Centers Classics): unyielding in its insistence on consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. this is a task that not even the most devoted fan could have foreseen.
***
Essay Intro (Blizzard Shows Video Games Are An Art): universally adored, video games have come to be revered not only as interactive diversions, but as artworks. a firm believer in this doctrine, blizzard actively works to further the craft of storytelling in their respective titles.
***
Essay Intro (What Happened To Linux): chancing upon a linux user is a rare occurrence in the present day. once a mainstay, the brand has come to only be seen in the hands of the most ardent of its followers.
``` | {} | BigSalmon/Points2 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| Converting Points or Headlines to Paragraphs
Example Prompts:
| [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-generation | transformers | - All credit goes to https://huggingface.co/philippelaban/keep_it_simple.
- This is a copy of their repository for future training purposes.
- It is supposed to simplify text.
- Their model card gives instructions on how to use it. | {} | BigSalmon/SimplifyText | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| - All credit goes to URL
- This is a copy of their repository for future training purposes.
- It is supposed to simplify text.
- Their model card gives instructions on how to use it. | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Megumin model | {"tags": ["conversational"]} | BigTooth/DialoGPT-Megumin | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Megumin model | [
"# Megumin model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Megumin model"
] |
text-generation | transformers |
# Tohru DialoGPT model | {"tags": ["conversational"]} | BigTooth/DialoGPT-small-tohru | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Tohru DialoGPT model | [
"# Tohru DialoGPT model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Tohru DialoGPT model"
] |
text-generation | transformers |
# Megumin-v0.2 model | {"tags": ["conversational"]} | BigTooth/Megumin-v0.2 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Megumin-v0.2 model | [
"# Megumin-v0.2 model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Megumin-v0.2 model"
] |
text-generation | transformers |
# Rick Sanchez DialoGPT Model | {"tags": ["conversational"]} | BigeS/DialoGPT-small-Rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Sanchez DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jplu-wikiann
This model is a fine-tuned version of [jplu/tf-camembert-base](https://huggingface.co/jplu/tf-camembert-base) on the wikiann dataset.
It achieves the following results on the evaluation set:
- precision: 0.8980
- recall: 0.9097
- f1: 0.9038
- accuracy: 0.9464
## Model description
More information needed
## Intended uses & limitations
More information needed
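A minimal inference sketch, assuming the fine-tuned checkpoint is published under this repo id with TensorFlow weights (the base model is a TF CamemBERT); the example sentence is illustrative:

```
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="BillelBenoudjit/jplu-wikiann",
    framework="tf",                    # assumption: TF weights, matching the tf-camembert base
    aggregation_strategy="simple",     # merge word pieces into whole entities
)
print(ner("Emmanuel Macron s'est rendu à Marseille avec des représentants d'Airbus."))
```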
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- num_train_epochs: 5
- train_batch_size: 16
- eval_batch_size: 32
- learning_rate: 2e-05
- weight_decay_rate: 0.01
- num_warmup_steps: 0
- fp16: True
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| {"language": ["fr"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "jplu-wikiann", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "default"}, "metrics": [{"type": "precision", "value": 0.897994120055078, "name": "precision"}, {"type": "recall", "value": 0.9097421203438395, "name": "recall"}, {"type": "f1", "value": 0.9038299466242158, "name": "f1"}, {"type": "accuracy", "value": 0.9464171271196716, "name": "accuracy"}]}]}]} | BillelBenoudjit/jplu-wikiann | null | [
"fr",
"dataset:wikiann",
"model-index",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fr"
] | TAGS
#fr #dataset-wikiann #model-index #region-us
|
# jplu-wikiann
This model is a fine-tuned version of jplu/tf-camembert-base on the wikiann dataset.
It achieves the following results on the evaluation set:
- precision: 0.8980
- recall: 0.9097
- f1: 0.9038
- accuracy: 0.9464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- num_train_epochs: 5
- train_batch_size: 16
- eval_batch_size: 32
- learning_rate: 2e-05
- weight_decay_rate: 0.01
- num_warmup_steps: 0
- fp16: True
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| [
"# jplu-wikiann\n\nThis model is a fine-tuned version of jplu/tf-camembert-base on the wikiann dataset.\nIt achieves the following results on the evaluation set:\n- precision: 0.8980\n- recall: 0.9097\n- f1: 0.9038\n- accuracy: 0.9464",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- num_train_epochs: 5\n- train_batch_size: 16\n- eval_batch_size: 32\n- learning_rate: 2e-05\n- weight_decay_rate: 0.01\n- num_warmup_steps: 0\n- fp16: True",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.0\n- Tokenizers 0.10.3"
] | [
"TAGS\n#fr #dataset-wikiann #model-index #region-us \n",
"# jplu-wikiann\n\nThis model is a fine-tuned version of jplu/tf-camembert-base on the wikiann dataset.\nIt achieves the following results on the evaluation set:\n- precision: 0.8980\n- recall: 0.9097\n- f1: 0.9038\n- accuracy: 0.9464",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- num_train_epochs: 5\n- train_batch_size: 16\n- eval_batch_size: 32\n- learning_rate: 2e-05\n- weight_decay_rate: 0.01\n- num_warmup_steps: 0\n- fp16: True",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.0\n- Tokenizers 0.10.3"
] |
text-generation | transformers |
# Neku from Twewy | {"tags": ["conversational"]} | Bimal/my_bot_model | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Neku from Twewy | [
"# Neku from Twewy"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Neku from Twewy"
] |
translation | transformers | ### en_ti_translate
* source languages: en
* target languages: ti
* model: hugging face transformer seq2seq
* base model : opus-mt-en-ti
* pre-processing: normalization + SentencePiece
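### usage example
A minimal translation sketch, assuming the checkpoint keeps the MarianMT architecture of its opus-mt-en-ti base (the English sentence is illustrative):
```
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Biniam/en_ti_translate")
model = MarianMTModel.from_pretrained("Biniam/en_ti_translate")

batch = tokenizer(["how are you today?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```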
### documentation
https://tigrinyanlp.github.io/
| {"tags": ["translation"]} | Biniam/en_ti_translate | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #autotrain_compatible #endpoints_compatible #region-us
| ### en_ti_translate
* source languages: en
* target languages: ti
* model: hugging face transformer seq2seq
* base model : opus-mt-en-ti
* pre-processing: normalization + SentencePiece
### documentation
URL
| [
"### en_ti_translate\n* source languages: en\n* target languages: ti\n* model: hugging face transformer seq2seq\n* base model : opus-mt-en-ti\n* pre-processing: normalization + SentencePiece",
"### documentation\nURL"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #autotrain_compatible #endpoints_compatible #region-us \n",
"### en_ti_translate\n* source languages: en\n* target languages: ti\n* model: hugging face transformer seq2seq\n* base model : opus-mt-en-ti\n* pre-processing: normalization + SentencePiece",
"### documentation\nURL"
] |
text-generation | transformers |
# My Awesome Model | {"tags": ["conversational"]} | BinksSachary/DialoGPT-small-shaxx | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model | [
"# My Awesome Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
text-generation | transformers |
# My Awesome Model | {"tags": ["conversational"]} | BinksSachary/ShaxxBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model | [
"# My Awesome Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
text-generation | transformers |
# My Awesome Model
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last ouput tokens from bot
print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
| {"tags": ["conversational"]} | BinksSachary/ShaxxBot2 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = URL(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = URL([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last ouput tokens from bot
print("JoshuaBot: {}".format(URL(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
| [
"# My Awesome Model\n\nfrom transformers import AutoTokenizer, AutoModelWithLMHead\n\ntokenizer = AutoTokenizer.from_pretrained(\"r3dhummingbird/DialoGPT-medium-joshua\")\n\nmodel = AutoModelWithLMHead.from_pretrained(\"r3dhummingbird/DialoGPT-medium-joshua\")",
"# Let's chat for 4 lines\nfor step in range(4):\n # encode the new user input, add the eos_token and return a tensor in Pytorch\n new_user_input_ids = URL(input(\">> User:\") + tokenizer.eos_token, return_tensors='pt')\n # print(new_user_input_ids)\n\n # append the new user input tokens to the chat history\n bot_input_ids = URL([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids\n\n # generated a response while limiting the total chat history to 1000 tokens, \n chat_history_ids = model.generate(\n bot_input_ids, max_length=200,\n pad_token_id=tokenizer.eos_token_id, \n no_repeat_ngram_size=3, \n do_sample=True, \n top_k=100, \n top_p=0.7,\n temperature=0.8\n )\n\n # pretty print last ouput tokens from bot\n print(\"JoshuaBot: {}\".format(URL(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model\n\nfrom transformers import AutoTokenizer, AutoModelWithLMHead\n\ntokenizer = AutoTokenizer.from_pretrained(\"r3dhummingbird/DialoGPT-medium-joshua\")\n\nmodel = AutoModelWithLMHead.from_pretrained(\"r3dhummingbird/DialoGPT-medium-joshua\")",
"# Let's chat for 4 lines\nfor step in range(4):\n # encode the new user input, add the eos_token and return a tensor in Pytorch\n new_user_input_ids = URL(input(\">> User:\") + tokenizer.eos_token, return_tensors='pt')\n # print(new_user_input_ids)\n\n # append the new user input tokens to the chat history\n bot_input_ids = URL([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids\n\n # generated a response while limiting the total chat history to 1000 tokens, \n chat_history_ids = model.generate(\n bot_input_ids, max_length=200,\n pad_token_id=tokenizer.eos_token_id, \n no_repeat_ngram_size=3, \n do_sample=True, \n top_k=100, \n top_p=0.7,\n temperature=0.8\n )\n\n # pretty print last ouput tokens from bot\n print(\"JoshuaBot: {}\".format(URL(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hackMIT-finetuned-sst2
This model is a fine-tuned version of [Blaine-Mason/hackMIT-finetuned-sst2](https://huggingface.co/Blaine-Mason/hackMIT-finetuned-sst2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1086
- Accuracy: 0.8028
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.033238621168611e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0674 | 1.0 | 4210 | 1.1086 | 0.8028 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
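
For quick experimentation, the checkpoint can be loaded with the 🤗 Transformers pipeline. The snippet below is a minimal usage sketch and is not part of the original training setup; the example sentences are arbitrary, and the label names returned depend on the label mapping stored in the exported config (SST-2 fine-tunes often expose generic `LABEL_0`/`LABEL_1` ids).

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Blaine-Mason/hackMIT-finetuned-sst2",
)

# Arbitrary example sentences for a quick sanity check.
print(classifier("An absolute delight from start to finish."))
print(classifier("This movie was a complete waste of time."))
```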
| {"tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "hackMIT-finetuned-sst2", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "glue", "type": "glue", "args": "sst2"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.8027522935779816}}]}]} | Blaine-Mason/hackMIT-finetuned-sst2 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #autotrain_compatible #endpoints_compatible #region-us
| hackMIT-finetuned-sst2
======================
This model is a fine-tuned version of Blaine-Mason/hackMIT-finetuned-sst2 on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1086
* Accuracy: 0.8028
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2.033238621168611e-06
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 30
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2.033238621168611e-06\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 30\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2.033238621168611e-06\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 30\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# A new medium model based on the character Makise Kurisu from Steins;Gate.
# Still has some issues that were present in the previous model, for example, mixing lines from other characters.
# If you have any questions, feel free to ask me on discord: BlightZz#1169 | {"tags": ["conversational"]} | BlightZz/DialoGPT-medium-Kurisu | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# A new medium model based on the character Makise Kurisu from Steins;Gate.
# Still has some issues that were present in the previous model, for example, mixing lines from other characters.
# If you have any questions, feel free to ask me on discord: BlightZz#1169 | [
"# A new medium model based on the character Makise Kurisu from Steins;Gate.",
"# Still has some issues that were present in the previous model, for example, mixing lines from other characters.",
"# If you have any questions, feel free to ask me on discord: BlightZz#1169"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# A new medium model based on the character Makise Kurisu from Steins;Gate.",
"# Still has some issues that were present in the previous model, for example, mixing lines from other characters.",
"# If you have any questions, feel free to ask me on discord: BlightZz#1169"
] |
text-generation | transformers |
# A small model based on the character Makise Kurisu from Steins;Gate. This was made as a test.
# A new medium model was made using her lines; I also added some fixes. It can be found here:
# https://huggingface.co/BlightZz/DialoGPT-medium-Kurisu | {"tags": ["conversational"]} | BlightZz/MakiseKurisu | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# A small model based on the character Makise Kurisu from Steins;Gate. This was made as a test.
# A new medium model was made using her lines, I also added some fixes. It can be found here:
# URL | [
"# A small model based on the character Makise Kurisu from Steins;Gate. This was made as a test.",
"# A new medium model was made using her lines, I also added some fixes. It can be found here:",
"# URL"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# A small model based on the character Makise Kurisu from Steins;Gate. This was made as a test.",
"# A new medium model was made using her lines, I also added some fixes. It can be found here:",
"# URL"
] |
text-classification | transformers |
Dataset Link - https://www.kaggle.com/rmisra/news-headlines-dataset-for-sarcasm-detection | {"language": ["English"], "tags": ["Text", "Sequence-Classification", "Sarcasm", "DistilBert"], "datasets": ["Kaggle Dataset"], "metrics": ["precision", "recall", "f1"]} | BlindMan820/Sarcastic-News-Headlines | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"Text",
"Sequence-Classification",
"Sarcasm",
"DistilBert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"English"
] | TAGS
#transformers #pytorch #distilbert #text-classification #Text #Sequence-Classification #Sarcasm #DistilBert #autotrain_compatible #endpoints_compatible #region-us
|
Dataset Link - URL | [] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #Text #Sequence-Classification #Sarcasm #DistilBert #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Morgana DialoGPT Model | {"tags": ["conversational"]} | BlueGamerBeast/DialoGPT-small-Morgana | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Morgana DialoGPT Model | [
"# Moragna DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Moragna DialoGPT Model"
] |
null | transformers | # Korean bert base model for DST
- This is a ConversationBERT model built from dsksd/bert-ko-small-minimal (base module) plus 5 datasets
- Uses the dsksd/bert-ko-small-minimal tokenizer
- 5 datasets
- tweeter_dialogue : xlsx
- speech : trn
- office_dialogue : json
- KETI_dialogue : txt
- WOS_dataset : json
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BonjinKim/dst_kor_bert")
model = AutoModel.from_pretrained("BonjinKim/dst_kor_bert")
``` | {} | BonjinKim/dst_kor_bert | null | [
"transformers",
"pytorch",
"jax",
"bert",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #bert #pretraining #endpoints_compatible #region-us
| # Korean bert base model for DST
- This is ConversationBert for dsksd/bert-ko-small-minimal(base-module) + 5 datasets
- Use dsksd/bert-ko-small-minimal tokenizer
- 5 datasets
- tweeter_dialogue : xlsx
- speech : trn
- office_dialogue : json
- KETI_dialogue : txt
- WOS_dataset : json
| [
"# Korean bert base model for DST\n\n- This is ConversationBert for dsksd/bert-ko-small-minimal(base-module) + 5 datasets\n- Use dsksd/bert-ko-small-minimal tokenizer\n- 5 datasets\n - tweeter_dialogue : xlsx\n - speech : trn\n - office_dialogue : json\n - KETI_dialogue : txt\n - WOS_dataset : json"
] | [
"TAGS\n#transformers #pytorch #jax #bert #pretraining #endpoints_compatible #region-us \n",
"# Korean bert base model for DST\n\n- This is ConversationBert for dsksd/bert-ko-small-minimal(base-module) + 5 datasets\n- Use dsksd/bert-ko-small-minimal tokenizer\n- 5 datasets\n - tweeter_dialogue : xlsx\n - speech : trn\n - office_dialogue : json\n - KETI_dialogue : txt\n - WOS_dataset : json"
] |
text-generation | transformers |
# DialoGPT Model for Penny | {"tags": ["conversational"]} | BotterHax/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# DialoGPT Model for Penny | [
"# DialoGPT Model for Penny"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# DialoGPT Model for Penny"
] |
text-classification | transformers |
# British Library Books Genre Detector
**Note** this model card is a work in progress.
## Model description
This fine-tuned [`distilbert-base-cased`](https://huggingface.co/distilbert-base-cased) model is trained to predict whether a book from the [British Library's](https://www.bl.uk/) [Digitised printed books (18th-19th century)](https://www.bl.uk/collection-guides/digitised-printed-books) book collection is `fiction` or `non-fiction` based on the title of the book.
## Intended uses & limitations
This model was trained on data created from the [Digitised printed books (18th-19th century)](https://www.bl.uk/collection-guides/digitised-printed-books) book collection. The datasets in this collection are derived from 49,455 digitised books (65,227 volumes), largely from the 19th century. The collection is dominated by English-language books but also includes much smaller numbers of books in other languages. Whilst a subset of this data has metadata relating to genre, the majority of the dataset does not currently contain this information.
This model was originally developed for use as part of the [Living with Machines](https://livingwithmachines.ac.uk/) project in order to be able to 'segment' this large dataset of books into different categories based on a 'crude' classification of genre i.e. whether the title was `fiction` or `non-fiction`.
Particular areas where the model might be limited are:
### Title format
The model's training data (discussed more below) primarily consists of 19th Century book titles that have been catalogued according to British Library cataloguing practices. Since the approaches taken to cataloguing vary across institutions, running the model on titles from a different catalogue might introduce domain drift and lead to degraded model performance.
To give an example of the types of titles included in the training data, here are some randomly selected examples:
- 'The Canadian farmer. A missionary incident [Signed: W. J. H. Y, i.e. William J. H. Yates.]
- 'A new musical Interlude, called the Election [By M. P. Andrews.]',
- 'An Elegy written among the ruins of an Abbey. By the author of the Nun [E. Jerningham]',
- "The Baron's Daughter. A ballad by the author of Poetical Recreations [i.e. William C. Hazlitt] . F.P",
- 'A Little Book of Verse, etc',
- 'The Autumn Leaf Poems',
- 'The Battle of Waterloo, a poem',
- 'Maximilian, and other poems, etc',
- 'Fabellæ mostellariæ: or Devonshire and Wiltshire stories in verse; including specimens of the Devonshire dialect',
- 'The Grave of a Hamlet and other poems, chiefly of the Hebrides ... Selected, with an introduction, by his son J. Hogben']
### Date
The model was trained on data that spans the collection period of the [Digitised printed books (18th-19th century)](https://www.bl.uk/collection-guides/digitised-printed-books) book collection. This dataset covers a broad period (1500-1900) but is skewed towards later years. The subset of the training data (i.e. data with genre annotations) used to train this model has the following distribution of dates:
| | Date |
|-------|------------|
| mean | 1864.83 |
| std | 43.0199 |
| min | 1540 |
| 25% | 1847 |
| 50% | 1877 |
| 75% | 1893 |
### Language
Whilst the model is multilingual in so far as it has training data in non-English book titles, these appear much less frequently. An overview of the original training data's language counts is as follows:
| Language | Count |
|---------------------|-------|
| English | 22987 |
| Russian | 461 |
| French | 424 |
| Spanish | 366 |
| German | 347 |
| Dutch | 310 |
| Italian | 212 |
| Swedish | 186 |
| Danish | 164 |
| Hungarian | 132 |
| Polish | 112 |
| Latin | 83 |
| Greek,Modern(1453-) | 42 |
| Czech | 25 |
| Portuguese | 24 |
| Finnish | 14 |
| Serbian | 10 |
| Bulgarian | 7 |
| Icelandic | 4 |
| Irish | 4 |
| Hebrew | 2 |
| NorwegianNynorsk | 2 |
| Lithuanian | 2 |
| Slovenian | 2 |
| Cornish | 1 |
| Romanian | 1 |
| Slovak | 1 |
| Scots | 1 |
| Sanskrit | 1 |
#### How to use
There are a few different ways to use the model. To run the model locally, the easiest option is to use the 🤗 Transformers [`pipelines`](https://huggingface.co/transformers/main_classes/pipelines.html):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("davanstrien/bl-books-genre")
model = AutoModelForSequenceClassification.from_pretrained("davanstrien/bl-books-genre")
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
classifier("Oliver Twist")
```
This will return a dictionary with our predicted label and score
```
[{'label': 'Fiction', 'score': 0.9980145692825317}]
```
If you intend to use this model beyond initial experimentation, it is highly recommended to create some data to validate the model's predictions. As the model was trained on a specific corpus of book titles, it is also likely to be beneficial to fine-tune the model if you want to run it across a collection of book titles that differs from those in the training corpus.
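
As a concrete, purely illustrative starting point for such a validation pass, the sketch below reuses the pipeline call from the example above and prints predictions for a couple of titles. The titles and the tiny sample size are placeholders for a properly hand-labelled set drawn from your own catalogue.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="davanstrien/bl-books-genre")

# Placeholder titles; replace with a hand-labelled sample from your own catalogue.
titles = [
    "The Battle of Waterloo, a poem",
    "A practical treatise on the diseases of the ear",
]

for title, prediction in zip(titles, classifier(titles)):
    print(f"{title!r} -> {prediction['label']} ({prediction['score']:.3f})")
```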
## Training data
The training data was created using the [Zooniverse platform](zooniverse.org/) and the annotations were done by cataloguers from the [British Library](https://www.bl.uk/). [Snorkel](https://github.com/snorkel-team/snorkel) was used to expand on this original training data through various labelling functions. As a result, some of the labels are *not* generated by a human. More information on the process of creating the annotations can be found [here](https://github.com/Living-with-machines/genre-classification)
## Training procedure
The model was trained using the [`blurr`](https://github.com/ohmeow/blurr) library. A notebook showing the training process can be found in [Predicting Genre with Machine Learning](https://github.com/Living-with-machines/genre-classification).
## Eval results
The results of the model on a held-out training set are:
```
precision recall f1-score support
Fiction 0.88 0.97 0.92 296
Non-Fiction 0.98 0.93 0.95 554
accuracy 0.94 850
macro avg 0.93 0.95 0.94 850
weighted avg 0.95 0.94 0.94 850
```
As discussed briefly in the bias and limitation sections of the model these results should be treated with caution. ** | {"language": ["multilingual", "en", "ru", "fr", "es", "de", "nl", "it", "sv", "da", "hu", "pl", "la", "el", "cs", "pt", "fi", "sr", "bg", "is", "ga", "he", "nn", "lt", "sl", "kw", "ro", "sk", "sco", "sa"], "license": "mit", "tags": ["genre", "books", "library", "historic", "glam ", "lam"], "datasets": ["TheBritishLibrary/blbooksgenre"], "metrics": ["f1"], "widget": [{"text": "Poems on various subjects. Whereto is prefixed a short essay on the structure of English verse"}, {"text": "Two Centuries of Soho: its institutions, firms, and amusements. By the Clergy of St. Anne's, Soho, J. H. Cardwell ... H. B. Freeman ... G. C. Wilton ... assisted by other contributors, etc"}, {"text": "The Adventures of Oliver Twist. [With plates.]"}]} | TheBritishLibrary/bl-books-genre | null | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"genre",
"books",
"library",
"historic",
"glam ",
"lam",
"multilingual",
"en",
"ru",
"fr",
"es",
"de",
"nl",
"it",
"sv",
"da",
"hu",
"pl",
"la",
"el",
"cs",
"pt",
"fi",
"sr",
"bg",
"is",
"ga",
"he",
"nn",
"lt",
"sl",
"kw",
"ro",
"sk",
"sco",
"sa",
"dataset:TheBritishLibrary/blbooksgenre",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual",
"en",
"ru",
"fr",
"es",
"de",
"nl",
"it",
"sv",
"da",
"hu",
"pl",
"la",
"el",
"cs",
"pt",
"fi",
"sr",
"bg",
"is",
"ga",
"he",
"nn",
"lt",
"sl",
"kw",
"ro",
"sk",
"sco",
"sa"
] | TAGS
#transformers #pytorch #safetensors #distilbert #text-classification #genre #books #library #historic #glam #lam #multilingual #en #ru #fr #es #de #nl #it #sv #da #hu #pl #la #el #cs #pt #fi #sr #bg #is #ga #he #nn #lt #sl #kw #ro #sk #sco #sa #dataset-TheBritishLibrary/blbooksgenre #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| British Library Books Genre Detector
====================================
Note this model card is a work in progress.
Model description
-----------------
This fine-tuned 'distilbert-base-cased' model is trained to predict whether a book from the British Library's Digitised printed books (18th-19th century) book collection is 'fiction' or 'non-fiction' based on the title of the book.
Intended uses & limitations
---------------------------
This model was trained on data created from the Digitised printed books (18th-19th century) book collection. The datasets in this collection are comprised and derived from 49,455 digitised books (65,227 volumes) largely from the 19th Century. This dataset is dominated by English language books but also includes books in a number of other languages in much smaller numbers. Whilst a subset of this data has metadata relating to Genre, the majority of this dataset does not currently contain this information.
This model was originally developed for use as part of the Living with Machines project in order to be able to 'segment' this large dataset of books into different categories based on a 'crude' classification of genre i.e. whether the title was 'fiction' or 'non-fiction'.
Particular areas where the model might be limited are:
### Title format
The model's training data (discussed more below) primarily consists of 19th Century book titles that have been catalogued according to British Library cataloguing practices. Since the approaches taken to cataloguing will vary across institutions running the model on titles from a different catalogue might introduce domain drift and lead to degraded model performance.
To give an example of the types of titles includes in the training data here are 20 random examples:
* 'The Canadian farmer. A missionary incident [Signed: W. J. H. Y, i.e. William J. H. Yates.]
* 'A new musical Interlude, called the Election [By M. P. Andrews.]',
* 'An Elegy written among the ruins of an Abbey. By the author of the Nun [E. Jerningham]',
* "The Baron's Daughter. A ballad by the author of Poetical Recreations [i.e. William C. Hazlitt] . F.P",
* 'A Little Book of Verse, etc',
* 'The Autumn Leaf Poems',
* 'The Battle of Waterloo, a poem',
* 'Maximilian, and other poems, etc',
* 'Fabellæ mostellariæ: or Devonshire and Wiltshire stories in verse; including specimens of the Devonshire dialect',
* 'The Grave of a Hamlet and other poems, chiefly of the Hebrides ... Selected, with an introduction, by his son J. Hogben']
### Date
The model was trained on data that spans the collection period of the Digitised printed books (18th-19th century) book collection. This dataset covers a broad period (from 1500-1900). However, this dataset is skewed towards later years. The subset of training data i.e. data with genre annotations used to train this model has the following distribution for dates:
### Language
Whilst the model is multilingual in so far as it has training data in non-English book titles, these appear much less frequently. An overview of the original training data's language counts are as follows:
#### How to use
There are a few different ways to use the model. To run the model locally the easiest option is to use the Transformers 'pipelines':
This will return a dictionary with our predicted label and score
If you intend to use this model beyond initial experimentation, it is highly recommended to create some data to validate the model's predictions. As the model was trained on a specific corpus of books titles, it is also likely to be beneficial to fine-tune the model if you want to run it across a collection of book titles that differ from those in the training corpus.
Training data
-------------
The training data was created using the Zooniverse platform and the annotations were done by cataloguers from the British Library. Snorkel was used to expand on this original training data through various labelling functions. As a result, some of the labels are *not* generated by a human. More information on the process of creating the annotations can be found here
Training procedure
------------------
The model was trained using the 'blurr' library. A notebook showing the training process can be found in Predicting Genre with Machine Learning.
Eval results
------------
The results of the model on a held-out training set are:
As discussed briefly in the bias and limitation sections of the model these results should be treated with caution.
| [
"### Title format\n\n\nThe model's training data (discussed more below) primarily consists of 19th Century book titles that have been catalogued according to British Library cataloguing practices. Since the approaches taken to cataloguing will vary across institutions running the model on titles from a different catalogue might introduce domain drift and lead to degraded model performance.\n\n\nTo give an example of the types of titles includes in the training data here are 20 random examples:\n\n\n* 'The Canadian farmer. A missionary incident [Signed: W. J. H. Y, i.e. William J. H. Yates.]\n* 'A new musical Interlude, called the Election [By M. P. Andrews.]',\n* 'An Elegy written among the ruins of an Abbey. By the author of the Nun [E. Jerningham]',\n* \"The Baron's Daughter. A ballad by the author of Poetical Recreations [i.e. William C. Hazlitt] . F.P\",\n* 'A Little Book of Verse, etc',\n* 'The Autumn Leaf Poems',\n* 'The Battle of Waterloo, a poem',\n* 'Maximilian, and other poems, etc',\n* 'Fabellæ mostellariæ: or Devonshire and Wiltshire stories in verse; including specimens of the Devonshire dialect',\n* 'The Grave of a Hamlet and other poems, chiefly of the Hebrides ... Selected, with an introduction, by his son J. Hogben']",
"### Date\n\n\nThe model was trained on data that spans the collection period of the Digitised printed books (18th-19th century) book collection. This dataset covers a broad period (from 1500-1900). However, this dataset is skewed towards later years. The subset of training data i.e. data with genre annotations used to train this model has the following distribution for dates:",
"### Language\n\n\nWhilst the model is multilingual in so far as it has training data in non-English book titles, these appear much less frequently. An overview of the original training data's language counts are as follows:",
"#### How to use\n\n\nThere are a few different ways to use the model. To run the model locally the easiest option is to use the Transformers 'pipelines':\n\n\nThis will return a dictionary with our predicted label and score\n\n\nIf you intend to use this model beyond initial experimentation, it is highly recommended to create some data to validate the model's predictions. As the model was trained on a specific corpus of books titles, it is also likely to be beneficial to fine-tune the model if you want to run it across a collection of book titles that differ from those in the training corpus.\n\n\nTraining data\n-------------\n\n\nThe training data was created using the Zooniverse platform and the annotations were done by cataloguers from the British Library. Snorkel was used to expand on this original training data through various labelling functions. As a result, some of the labels are *not* generated by a human. More information on the process of creating the annotations can be found here\n\n\nTraining procedure\n------------------\n\n\nThe model was trained using the 'blurr' library. A notebook showing the training process can be found in Predicting Genre with Machine Learning.\n\n\nEval results\n------------\n\n\nThe results of the model on a held-out training set are:\n\n\nAs discussed briefly in the bias and limitation sections of the model these results should be treated with caution."
] | [
"TAGS\n#transformers #pytorch #safetensors #distilbert #text-classification #genre #books #library #historic #glam #lam #multilingual #en #ru #fr #es #de #nl #it #sv #da #hu #pl #la #el #cs #pt #fi #sr #bg #is #ga #he #nn #lt #sl #kw #ro #sk #sco #sa #dataset-TheBritishLibrary/blbooksgenre #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Title format\n\n\nThe model's training data (discussed more below) primarily consists of 19th Century book titles that have been catalogued according to British Library cataloguing practices. Since the approaches taken to cataloguing will vary across institutions running the model on titles from a different catalogue might introduce domain drift and lead to degraded model performance.\n\n\nTo give an example of the types of titles includes in the training data here are 20 random examples:\n\n\n* 'The Canadian farmer. A missionary incident [Signed: W. J. H. Y, i.e. William J. H. Yates.]\n* 'A new musical Interlude, called the Election [By M. P. Andrews.]',\n* 'An Elegy written among the ruins of an Abbey. By the author of the Nun [E. Jerningham]',\n* \"The Baron's Daughter. A ballad by the author of Poetical Recreations [i.e. William C. Hazlitt] . F.P\",\n* 'A Little Book of Verse, etc',\n* 'The Autumn Leaf Poems',\n* 'The Battle of Waterloo, a poem',\n* 'Maximilian, and other poems, etc',\n* 'Fabellæ mostellariæ: or Devonshire and Wiltshire stories in verse; including specimens of the Devonshire dialect',\n* 'The Grave of a Hamlet and other poems, chiefly of the Hebrides ... Selected, with an introduction, by his son J. Hogben']",
"### Date\n\n\nThe model was trained on data that spans the collection period of the Digitised printed books (18th-19th century) book collection. This dataset covers a broad period (from 1500-1900). However, this dataset is skewed towards later years. The subset of training data i.e. data with genre annotations used to train this model has the following distribution for dates:",
"### Language\n\n\nWhilst the model is multilingual in so far as it has training data in non-English book titles, these appear much less frequently. An overview of the original training data's language counts are as follows:",
"#### How to use\n\n\nThere are a few different ways to use the model. To run the model locally the easiest option is to use the Transformers 'pipelines':\n\n\nThis will return a dictionary with our predicted label and score\n\n\nIf you intend to use this model beyond initial experimentation, it is highly recommended to create some data to validate the model's predictions. As the model was trained on a specific corpus of books titles, it is also likely to be beneficial to fine-tune the model if you want to run it across a collection of book titles that differ from those in the training corpus.\n\n\nTraining data\n-------------\n\n\nThe training data was created using the Zooniverse platform and the annotations were done by cataloguers from the British Library. Snorkel was used to expand on this original training data through various labelling functions. As a result, some of the labels are *not* generated by a human. More information on the process of creating the annotations can be found here\n\n\nTraining procedure\n------------------\n\n\nThe model was trained using the 'blurr' library. A notebook showing the training process can be found in Predicting Genre with Machine Learning.\n\n\nEval results\n------------\n\n\nThe results of the model on a held-out training set are:\n\n\nAs discussed briefly in the bias and limitation sections of the model these results should be treated with caution."
] |
text-generation | transformers | #Harry Potter DialoGPT Model | {"tags": "conversational"} | Broadus20/DialoGPT-small-joshua | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| #Harry Potter DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
#DialoGPT-kungfupanda | {"tags": ["conversational"]} | BrunoNogueira/DialoGPT-kungfupanda | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#DialoGPT-kungfupanda | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Morty DialoGPT Model | {"tags": ["conversational"]} | Brykee/DialoGPT-medium-Morty | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Morty DialoGPT Model | [
"# Morty DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Morty DialoGPT Model"
] |
text-generation | transformers | # Harry Potter speech | {"tags": ["conversational"]} | Bubb-les/DisloGPT-medium-HarryPotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Harry Potter speech | [
"# Harry Potter speech"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter speech"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TRUMP
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
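
The card does not include a usage example, so the following is a hedged sketch of sampling from the fine-tuned checkpoint with the text-generation pipeline; the prompt and sampling parameters are arbitrary illustrative choices, not values taken from the training run.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="BumBelDumBel/TRUMP")

output = generator(
    "My fellow Americans,",  # arbitrary prompt
    max_length=60,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=1,
)
print(output[0]["generated_text"])
```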
| {"license": "mit", "tags": ["generated_from_trainer"], "model_index": [{"name": "TRUMP", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]} | BumBelDumBel/TRUMP | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# TRUMP
This model is a fine-tuned version of gpt2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
| [
"# TRUMP\n\nThis model is a fine-tuned version of gpt2 on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# TRUMP\n\nThis model is a fine-tuned version of gpt2 on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK-AI-TEST
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
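
The hyperparameters listed above map directly onto 🤗 `TrainingArguments`. The sketch below shows one plausible way to express them in code; it is an assumption-laden illustration, not the original training script — in particular the output directory is made up, and the dataset, tokenization, and data collator are left as placeholders because they are not documented in this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

training_args = TrainingArguments(
    output_dir="zork-ai-test",        # hypothetical output directory
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=3,
    # The Adam betas/epsilon listed above are the Trainer defaults, so they need no explicit setting.
)

# The fine-tuning corpus is not documented, so the datasets below are placeholders:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```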
| {"license": "mit", "tags": ["generated_from_trainer"], "model_index": [{"name": "ZORK-AI-TEST", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]} | BumBelDumBel/ZORK-AI-TEST | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ZORK-AI-TEST
This model is a fine-tuned version of gpt2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
| [
"# ZORK-AI-TEST\n\nThis model is a fine-tuned version of gpt2 on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ZORK-AI-TEST\n\nThis model is a fine-tuned version of gpt2 on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK_AI_SCIFI
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
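
For generation, the fine-tuned GPT-2-medium checkpoint can also be driven through the lower-level `generate()` API; the snippet below is an illustrative sketch, and the prompt, length, and sampling settings are assumptions rather than values documented in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BumBelDumBel/ZORK_AI_SCIFI")
model = AutoModelForCausalLM.from_pretrained("BumBelDumBel/ZORK_AI_SCIFI")

prompt = "You are standing in a dimly lit corridor of the starship."  # arbitrary prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=80,
        do_sample=True,
        temperature=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```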
| {"tags": ["generated_from_trainer"], "model_index": [{"name": "ZORK_AI_SCIFI", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]} | BumBelDumBel/ZORK_AI_SCIFI | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ZORK_AI_SCIFI
This model is a fine-tuned version of gpt2-medium on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
| [
"# ZORK_AI_SCIFI\n\nThis model is a fine-tuned version of gpt2-medium on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ZORK_AI_SCIFI\n\nThis model is a fine-tuned version of gpt2-medium on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Tokenizers 0.10.3"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9329
- Recall: 0.9517
- F1: 0.9422
- Accuracy: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0904 | 1.0 | 1756 | 0.0686 | 0.9227 | 0.9355 | 0.9291 | 0.9820 |
| 0.0385 | 2.0 | 3512 | 0.0586 | 0.9381 | 0.9490 | 0.9435 | 0.9862 |
| 0.0215 | 3.0 | 5268 | 0.0612 | 0.9329 | 0.9517 | 0.9422 | 0.9863 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
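
A minimal inference sketch (not part of the original card) using the token-classification pipeline is shown below; the example sentence is arbitrary, and `aggregation_strategy="simple"` merges word-piece predictions into whole entities.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Buntan/bert-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```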
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9328604420983174, "name": "Precision"}, {"type": "recall", "value": 0.9516997643890945, "name": "Recall"}, {"type": "f1", "value": 0.9421859380206598, "name": "F1"}, {"type": "accuracy", "value": 0.986342497203744, "name": "Accuracy"}]}]}]} | Buntan/bert-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| bert-finetuned-ner
==================
This model is a fine-tuned version of bert-base-cased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0612
* Precision: 0.9329
* Recall: 0.9517
* F1: 0.9422
* Accuracy: 0.9863
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
token-classification | transformers | # CAMeLBERT-CA NER Model
## Model description
**CAMeLBERT-CA NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component:
```python
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-ca-ner')
>>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع')
>>> ner.predict_sentence(sentence)
>>> ['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
```
You can also use the NER model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-ner')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
[{'word': 'أبوظبي',
'score': 0.9895730018615723,
'entity': 'B-LOC',
'index': 2,
'start': 6,
'end': 12},
{'word': 'الإمارات',
'score': 0.8156259655952454,
'entity': 'B-LOC',
'index': 8,
'start': 33,
'end': 41},
{'word': 'العربية',
'score': 0.890906810760498,
'entity': 'I-LOC',
'index': 9,
'start': 42,
'end': 49},
{'word': 'المتحدة',
'score': 0.8169114589691162,
'entity': 'I-LOC',
'index': 10,
'start': 50,
'end': 57}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
    abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0625\u0645\u0627\u0631\u0629 \u0623\u0628\u0648\u0638\u0628\u064a \u0647\u064a \u0625\u062d\u062f\u0649 \u0625\u0645\u0627\u0631\u0627\u062a \u062f\u0648\u0644\u0629 \u0627\u0644\u0625\u0645\u0627\u0631\u0627\u062a \u0627\u0644\u0639\u0631\u0628\u064a\u0629 \u0627\u0644\u0645\u062a\u062d\u062f\u0629 \u0627\u0644\u0633\u0628\u0639"}]} | CAMeL-Lab/bert-base-arabic-camelbert-ca-ner | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-CA NER Model
## Model description
CAMeLBERT-CA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.
For the fine-tuning, we used the ANERcorp dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-CA NER model directly as part of our CAMeL Tools NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the CAMeL Tools NER component:
You can also use the NER model directly with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-CA NER Model",
"## Model description\nCAMeLBERT-CA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.\nFor the fine-tuning, we used the ANERcorp dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"\n* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-CA NER model directly as part of our CAMeL Tools NER component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools NER component:\n\nYou can also use the NER model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-CA NER Model",
"## Model description\nCAMeLBERT-CA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.\nFor the fine-tuning, we used the ANERcorp dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"\n* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-CA NER model directly as part of our CAMeL Tools NER component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools NER component:\n\nYou can also use the NER model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
text-classification | transformers | # CAMeLBERT-CA Poetry Classification Model
## Model description
**CAMeLBERT-CA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9845284819602966},
{'label': 'الكامل', 'score': 0.752918004989624}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0627\u0644\u062e\u064a\u0644 \u0648\u0627\u0644\u0644\u064a\u0644 \u0648\u0627\u0644\u0628\u064a\u062f\u0627\u0621 \u062a\u0639\u0631\u0641\u0646\u064a [SEP] \u0648\u0627\u0644\u0633\u064a\u0641 \u0648\u0627\u0644\u0631\u0645\u062d \u0648\u0627\u0644\u0642\u0631\u0637\u0627\u0633 \u0648\u0627\u0644\u0642\u0644\u0645"}]} | CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1905.05700",
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #ar #arxiv-1905.05700 #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-CA Poetry Classification Model
## Model description
CAMeLBERT-CA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.
For the fine-tuning, we used the APCD dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-CA Poetry Classification model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-CA Poetry Classification Model",
"## Model description\nCAMeLBERT-CA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.\nFor the fine-tuning, we used the APCD dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-CA Poetry Classification model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-1905.05700 #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-CA Poetry Classification Model",
"## Model description\nCAMeLBERT-CA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.\nFor the fine-tuning, we used the APCD dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-CA Poetry Classification model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
token-classification | transformers | # CAMeLBERT-CA POS-EGY Model
## Model description
**CAMeLBERT-CA POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy')
>>> text = 'عامل ايه ؟'
>>> pos(text)
[{'entity': 'adj', 'score': 0.9990943, 'index': 1, 'word': 'عامل', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.99863535, 'index': 2, 'word': 'ايه', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.99990875, 'index': 3, 'word': '؟', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0639\u0627\u0645\u0644 \u0627\u064a\u0647 \u061f"}]} | CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| # CAMeLBERT-CA POS-EGY Model
## Model description
CAMeLBERT-CA POS-EGY Model is an Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-CA POS-EGY model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-CA POS-EGY Model",
"## Model description\nCAMeLBERT-CA POS-EGY Model is a Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.\nFor the fine-tuning, we used the ARZTB dataset .\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-CA POS-EGY model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# CAMeLBERT-CA POS-EGY Model",
"## Model description\nCAMeLBERT-CA POS-EGY Model is a Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.\nFor the fine-tuning, we used the ARZTB dataset .\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-CA POS-EGY model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
token-classification | transformers | # CAMeLBERT-CA POS-GLF Model
## Model description
**CAMeLBERT-CA POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA POS-GLF model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf')
>>> text = 'شلونك ؟ شخبارك ؟'
>>> pos(text)
[{'entity': 'noun', 'score': 0.99572617, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'noun', 'score': 0.9411187, 'index': 2, 'word': '##ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999661, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.99286526, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.9983397, 'index': 5, 'word': '##خبار', 'start': 9, 'end': 13}, {'entity': 'noun', 'score': 0.9609381, 'index': 6, 'word': '##ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999668, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
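Because the pipeline tags WordPiece pieces individually (note the `##` continuation tokens in the output above), you may want word-level tags instead. The helper below is a generic post-processing sketch, not part of CAMeL Tools; it keeps the tag predicted for the first piece of each word.
```python
from transformers import pipeline

pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf')

def merge_subword_tags(token_predictions):
    """Merge WordPiece pieces back into words, keeping the first piece's tag."""
    words = []
    for pred in token_predictions:
        piece, tag = pred['word'], pred['entity']
        if piece.startswith('##') and words:
            words[-1][0] += piece[2:]      # glue a continuation piece onto the current word
        else:
            words.append([piece, tag])     # a new word keeps the tag of its first piece
    return [(word, tag) for word, tag in words]

print(merge_subword_tags(pos('شلونك ؟ شخبارك ؟')))
# [('شلونك', 'noun'), ('؟', 'punc'), ('شخبارك', 'noun'), ('؟', 'punc')]
```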
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0634\u0644\u0648\u0646\u0643 \u061f \u0634\u062e\u0628\u0627\u0631\u0643 \u061f"}]} | CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-CA POS-GLF Model
## Model description
CAMeLBERT-CA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.
For the fine-tuning, we used the Gumar dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."*
Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-CA POS-GLF model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-CA POS-GLF Model",
"## Model description\nCAMeLBERT-CA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.\nFor the fine-tuning, we used the Gumar dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"*\nOur fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-CA POS-GLF model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-CA POS-GLF Model",
"## Model description\nCAMeLBERT-CA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.\nFor the fine-tuning, we used the Gumar dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"*\nOur fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-CA POS-GLF model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
token-classification | transformers | # CAMeLBERT-CA POS-MSA Model
## Model description
**CAMeLBERT-CA POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA POS-MSA model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa')
>>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
>>> pos(text)
[{'entity': 'noun', 'score': 0.9999758, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.9997559, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.99996257, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.9958452, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.9999635, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.99991685, 'index': 6, 'word': '##رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.99997497, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.9999795, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.99924207, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.99994195, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.9997414, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0625\u0645\u0627\u0631\u0629 \u0623\u0628\u0648\u0638\u0628\u064a \u0647\u064a \u0625\u062d\u062f\u0649 \u0625\u0645\u0627\u0631\u0627\u062a \u062f\u0648\u0644\u0629 \u0627\u0644\u0625\u0645\u0627\u0631\u0627\u062a \u0627\u0644\u0639\u0631\u0628\u064a\u0629 \u0627\u0644\u0645\u062a\u062d\u062f\u0629 \u0627\u0644\u0633\u0628\u0639"}]} | CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-CA POS-MSA Model
## Model description
CAMeLBERT-CA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.
For the fine-tuning, we used the PATB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-CA POS-MSA model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-CA POS-MSA Model",
"## Model description\nCAMeLBERT-CA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.\nFor the fine-tuning, we used the PATB dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-CA POS-MSA model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-CA POS-MSA Model",
"## Model description\nCAMeLBERT-CA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.\nFor the fine-tuning, we used the PATB dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-CA POS-MSA model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
text-classification | transformers | # CAMeLBERT-CA SA Model
## Model description
**CAMeLBERT-CA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."
* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
>>> ['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0623\u0646\u0627 \u0628\u062e\u064a\u0631"}]} | CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| # CAMeLBERT-CA SA Model
## Model description
CAMeLBERT-CA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.
For the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-CA SA model directly as part of our CAMeL Tools SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the CAMeL Tools SA component:
You can also use the SA model directly with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-CA SA Model",
"## Model description\nCAMeLBERT-CA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.\nFor the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"\n* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-CA SA model directly as part of our CAMeL Tools SA component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools SA component:\n\nYou can also use the SA model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# CAMeLBERT-CA SA Model",
"## Model description\nCAMeLBERT-CA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.\nFor the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"\n* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-CA SA model directly as part of our CAMeL Tools SA component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools SA component:\n\nYou can also use the SA model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
fill-mask | transformers |
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-CA** (`bert-base-arabic-camelbert-ca`), a model pre-trained on the CA (classical Arabic) dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
|✔|`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-ca')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.11048116534948349,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو الإسلام. [SEP]',
'score': 0.03481195122003555,
'token': 4677,
'token_str': 'الإسلام'},
{'sequence': '[CLS] الهدف من الحياة هو الموت. [SEP]',
'score': 0.03402028977870941,
'token': 4295,
'token_str': 'الموت'},
{'sequence': '[CLS] الهدف من الحياة هو العلم. [SEP]',
'score': 0.027655426412820816,
'token': 2789,
'token_str': 'العلم'},
{'sequence': '[CLS] الهدف من الحياة هو هذا. [SEP]',
'score': 0.023059621453285217,
'token': 2085,
'token_str': 'هذا'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
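If you do download the models manually, one option is the `huggingface_hub` client. This is a suggestion rather than part of the original instructions; it assumes `huggingface_hub` is installed.
```python
# Manual-download sketch (an assumption, not from the original card):
# fetch a local copy of the model repository with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="CAMeL-Lab/bert-base-arabic-camelbert-ca")
print(local_dir)  # local path holding the config, vocabulary, and weights
```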
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-ca')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-ca')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-ca')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-ca')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- CA (classical Arabic)
- [OpenITI (Version 2020.1.2)](https://zenodo.org/record/3891466#.YEX4-F0zbzc)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
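As a rough illustration of the cleaning steps above (and not the exact scripts used to build CAMeLBERT), the sketch below assumes `camel_tools` is installed and uses its `dediac_ar` utility for diacritic removal; the Arabic-character filter and kashida (tatweel) removal are plain regular expressions, and sentence segmentation is left out.
```python
import re

from camel_tools.utils.dediac import dediac_ar   # removes Arabic diacritics

ARABIC_CHAR = re.compile(r'[\u0621-\u064A]')      # rough Arabic-character check
KASHIDA = '\u0640'                                # tatweel / kashida character

def clean_line(line):
    """Approximate the per-line cleaning described above."""
    line = ' '.join(line.split())                 # normalize white spaces
    if not ARABIC_CHAR.search(line):              # drop lines without Arabic characters
        return None
    line = dediac_ar(line)                        # remove diacritics
    return line.replace(KASHIDA, '')              # remove kashida

raw_lines = ['الْهَدَفُ مِنَ الْحَيَاةِ هُوَ الْعِلْمُ .', '12345', 'مرحــــبا يا عالم .']
cleaned = [clean_line(line) for line in raw_lines]
print([line for line in cleaned if line is not None])
```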
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
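For quick reference, the settings listed above can be collected into a single summary. This is only a restatement of the values in this card, not a configuration file shipped with the model:
```python
# Pre-training settings as described above (illustrative summary only).
pretraining_settings = {
    "hardware": "cloud TPU v3-8",
    "total_steps": 1_000_000,
    "batch_size": {"first_90k_steps": 1024, "remaining_steps": 256},
    "max_seq_length": {"90_percent_of_steps": 128, "remaining_10_percent": 512},
    "whole_word_masking": True,
    "duplicate_factor": 10,
    "max_predictions_per_seq": {"seq_len_128": 20, "seq_len_512": 80},
    "random_seed": 12345,
    "masked_lm_prob": 0.15,
    "short_seq_prob": 0.1,
    "optimizer": "Adam",
    "learning_rate": 1e-4,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "weight_decay": 0.01,
    "warmup_steps": 10_000,
    "lr_schedule": "linear decay after warmup",
}
```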
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
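The released fine-tuning code lives in the linked repository (and used transformers `v3.1.0` with PyTorch `v1.5.1`); the sketch below is only a minimal present-day approximation of the same idea, where `AutoModelForSequenceClassification` supplies the linear classification head, the two training examples are placeholders, and scikit-learn is assumed for the F1 computation.
```python
# Minimal fine-tuning sketch (assumptions: current transformers API, placeholder
# data, scikit-learn installed; not the released CAMeLBERT fine-tuning code).
import torch
from sklearn.metrics import f1_score
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "CAMeL-Lab/bert-base-arabic-camelbert-ca"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

texts = ["أنا بخير", "أنا لست بخير"]   # placeholder training examples
labels = torch.tensor([1, 0])          # 1 = positive, 0 = negative

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                     # tiny illustrative training loop
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(f1_score(labels.numpy(), preds.numpy(), average="macro"))
```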
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
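As a concrete check of how the variant-wise average is computed, averaging the Mix column over the six MSA-variant tasks in the task table above reproduces the 82.1% reported for MSA (other cells may differ by a tenth of a point because the table shows rounded values):
```python
# Mix-column scores for the six MSA-variant tasks in the results table above.
msa_tasks_mix = {
    "NER / ANERcorp": 80.8,
    "POS / PATB (MSA)": 98.1,
    "SA / ASTD": 76.3,
    "SA / ArSAS": 92.7,
    "SA / SemEval": 69.0,
    "DID / MADAR-Twitter-5": 75.7,
}
variant_wise_average = sum(msa_tasks_mix.values()) / len(msa_tasks_mix)
print(round(variant_wise_average, 1))  # 82.1
```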
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
| {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0627\u0644\u0647\u062f\u0641 \u0645\u0646 \u0627\u0644\u062d\u064a\u0627\u0629 \u0647\u0648 [MASK] ."}]} | CAMeL-Lab/bert-base-arabic-camelbert-ca | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
==================================================================
Model description
-----------------
CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."*
This model card describes CAMeLBERT-CA ('bert-base-arabic-camelbert-ca'), a model pre-trained on the CA (classical Arabic) dataset.
Intended uses
-------------
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here.
#### How to use
You can use this model directly with a pipeline for masked language modeling:
*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Training data
-------------
* CA (classical Arabic)
+ OpenITI (Version 2020.1.2)
Training procedure
------------------
We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
* After extracting the raw text from each corpus, we apply the following pre-processing.
* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.
* We also remove lines without any Arabic characters.
* We then remove diacritics and kashida using CAMeL Tools.
* Finally, we split each line into sentences with a heuristics-based sentence segmenter.
* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.
* We do not lowercase letters nor strip accents.
### Pre-training
* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.
* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
* We use whole word masking and a duplicate factor of 10.
* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
* The optimizer used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
* We fine-tune and evaluate the models using 12 datasets.
* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.
* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
* We use \(F\_{1}\) score as a metric for all tasks.
* Code used for fine-tuning is available here.
### Results
### Results (Average)
[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.
Acknowledgements
----------------
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
| [
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* CA (classical Arabic)\n\t+ OpenITI (Version 2020.1.2)\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* CA (classical Arabic)\n\t+ OpenITI (Version 2020.1.2)\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] |
token-classification | transformers | # CAMeLBERT-DA NER Model
## Model description
**CAMeLBERT-DA NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."
* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component:
```python
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-da-ner')
>>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع')
>>> ner.predict_sentence(sentence)
>>> ['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
```
You can also use the NER model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-da-ner')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
[{'word': 'أبوظبي',
'score': 0.9895730018615723,
'entity': 'B-LOC',
'index': 2,
'start': 6,
'end': 12},
{'word': 'الإمارات',
'score': 0.8156259655952454,
'entity': 'B-LOC',
'index': 8,
'start': 33,
'end': 41},
{'word': 'العربية',
'score': 0.890906810760498,
'entity': 'I-LOC',
'index': 9,
'start': 42,
'end': 49},
{'word': 'المتحدة',
'score': 0.8169114589691162,
'entity': 'I-LOC',
'index': 10,
'start': 50,
'end': 57}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a da of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0625\u0645\u0627\u0631\u0629 \u0623\u0628\u0648\u0638\u0628\u064a \u0647\u064a \u0625\u062d\u062f\u0649 \u0625\u0645\u0627\u0631\u0627\u062a \u062f\u0648\u0644\u0629 \u0627\u0644\u0625\u0645\u0627\u0631\u0627\u062a \u0627\u0644\u0639\u0631\u0628\u064a\u0629 \u0627\u0644\u0645\u062a\u062d\u062f\u0629 \u0627\u0644\u0633\u0628\u0639"}]} | CAMeL-Lab/bert-base-arabic-camelbert-da-ner | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-DA NER Model
## Model description
CAMeLBERT-DA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.
For the fine-tuning, we used the ANERcorp dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-DA NER model directly as part of our CAMeL Tools NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the CAMeL Tools NER component:
You can also use the NER model directly with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-DA NER Model",
"## Model description\nCAMeLBERT-DA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.\nFor the fine-tuning, we used the ANERcorp dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"\n* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-DA NER model directly as part of our CAMeL Tools NER component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools NER component:\n\nYou can also use the NER model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-DA NER Model",
"## Model description\nCAMeLBERT-DA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.\nFor the fine-tuning, we used the ANERcorp dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"\n* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-DA NER model directly as part of our CAMeL Tools NER component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools NER component:\n\nYou can also use the NER model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
text-classification | transformers | # CAMeLBERT-DA Poetry Classification Model
## Model description
**CAMeLBERT-DA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9874765276908875},
{'label': 'السلسلة', 'score': 0.6877778172492981}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0627\u0644\u062e\u064a\u0644 \u0648\u0627\u0644\u0644\u064a\u0644 \u0648\u0627\u0644\u0628\u064a\u062f\u0627\u0621 \u062a\u0639\u0631\u0641\u0646\u064a [SEP] \u0648\u0627\u0644\u0633\u064a\u0641 \u0648\u0627\u0644\u0631\u0645\u062d \u0648\u0627\u0644\u0642\u0631\u0637\u0627\u0633 \u0648\u0627\u0644\u0642\u0644\u0645"}]} | CAMeL-Lab/bert-base-arabic-camelbert-da-poetry | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1905.05700",
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #ar #arxiv-1905.05700 #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-DA Poetry Classification Model
## Model description
CAMeLBERT-DA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.
For the fine-tuning, we used the APCD dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-DA Poetry Classification model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-DA Poetry Classification Model",
"## Model description\nCAMeLBERT-DA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.\nFor the fine-tuning, we used the APCD dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-DA Poetry Classification model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-1905.05700 #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-DA Poetry Classification Model",
"## Model description\nCAMeLBERT-DA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.\nFor the fine-tuning, we used the APCD dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-DA Poetry Classification model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
token-classification | transformers | # CAMeLBERT-DA POS-EGY Model
## Model description
**CAMeLBERT-DA POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-DA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy')
>>> text = 'عامل ايه ؟'
>>> pos(text)
[{'entity': 'adj', 'score': 0.99843216, 'index': 1, 'word': 'عامل', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.9990083, 'index': 2, 'word': 'ايه', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.82973784, 'index': 3, 'word': '؟', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0639\u0627\u0645\u0644 \u0627\u064a\u0647 \u061f"}]} | CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-DA POS-EGY Model
## Model description
CAMeLBERT-DA POS-EGY Model is a Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.
For the fine-tuning, we used the ARZTB dataset .
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-DA POS-EGY model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-DA POS-EGY Model",
"## Model description\nCAMeLBERT-DA POS-EGY Model is a Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.\nFor the fine-tuning, we used the ARZTB dataset .\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-DA POS-EGY model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-DA POS-EGY Model",
"## Model description\nCAMeLBERT-DA POS-EGY Model is a Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.\nFor the fine-tuning, we used the ARZTB dataset .\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-DA POS-EGY model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
token-classification | transformers | # CAMeLBERT-DA POS-GLF Model
## Model description
**CAMeLBERT-DA POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-DA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA POS-GLF model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf')
>>> text = 'شلونك ؟ شخبارك ؟'
>>> pos(text)
[{'entity': 'noun', 'score': 0.84596395, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'prep', 'score': 0.7230489, 'index': 2, 'word': '##ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.99996364, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.9990874, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.99985224, 'index': 5, 'word': '##خبار', 'start': 9, 'end': 13}, {'entity': 'noun', 'score': 0.9988868, 'index': 6, 'word': '##ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999683, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0634\u0644\u0648\u0646\u0643 \u061f \u0634\u062e\u0628\u0627\u0631\u0643 \u061f"}]} | CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-DA POS-GLF Model
## Model description
CAMeLBERT-DA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.
For the fine-tuning, we used the Gumar dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."*
Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-DA POS-GLF model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-DA POS-GLF Model",
"## Model description\nCAMeLBERT-DA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.\nFor the fine-tuning, we used the Gumar dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"*\nOur fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-DA POS-GLF model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-DA POS-GLF Model",
"## Model description\nCAMeLBERT-DA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.\nFor the fine-tuning, we used the Gumar dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"*\nOur fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-DA POS-GLF model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
token-classification | transformers | # CAMeLBERT-DA POS-MSA Model
## Model description
**CAMeLBERT-DA POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-DA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA POS-MSA model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa')
>>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
>>> pos(text)
[{'entity': 'noun', 'score': 0.9999913, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.9992475, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.999919, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.99993193, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.99999106, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.99998987, 'index': 6, 'word': '##رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.9999933, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.9999899, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.99990565, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.99997944, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.99938935, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0625\u0645\u0627\u0631\u0629 \u0623\u0628\u0648\u0638\u0628\u064a \u0647\u064a \u0625\u062d\u062f\u0649 \u0625\u0645\u0627\u0631\u0627\u062a \u062f\u0648\u0644\u0629 \u0627\u0644\u0625\u0645\u0627\u0631\u0627\u062a \u0627\u0644\u0639\u0631\u0628\u064a\u0629 \u0627\u0644\u0645\u062a\u062d\u062f\u0629 \u0627\u0644\u0633\u0628\u0639"}]} | CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-DA POS-MSA Model
## Model description
CAMeLBERT-DA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.
For the fine-tuning, we used the PATB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-DA POS-MSA model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-DA POS-MSA Model",
"## Model description\nCAMeLBERT-DA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.\nFor the fine-tuning, we used the PATB dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-DA POS-MSA model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-DA POS-MSA Model",
"## Model description\nCAMeLBERT-DA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.\nFor the fine-tuning, we used the PATB dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-DA POS-MSA model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
text-classification | transformers | # CAMeLBERT-DA SA Model
## Model description
**CAMeLBERT-DA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
>>> ['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0623\u0646\u0627 \u0628\u062e\u064a\u0631"}]} | CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| # CAMeLBERT-DA SA Model
## Model description
CAMeLBERT-DA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.
For the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-DA SA model directly as part of our CAMeL Tools SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the CAMeL Tools SA component:
You can also use the SA model directly with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-DA SA Model",
"## Model description\nCAMeLBERT-DA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.\nFor the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"\n* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-DA SA model directly as part of our CAMeL Tools SA component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools SA component:\n\nYou can also use the SA model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# CAMeLBERT-DA SA Model",
"## Model description\nCAMeLBERT-DA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.\nFor the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"\n* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-DA SA model directly as part of our CAMeL Tools SA component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools SA component:\n\nYou can also use the SA model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
fill-mask | transformers |
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-DA** (`bert-base-arabic-camelbert-da`), a model pre-trained on the DA (dialectal Arabic) dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
|✔|`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-da')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو.. [SEP]',
'score': 0.062508225440979,
'token': 18,
'token_str': '.'},
{'sequence': '[CLS] الهدف من الحياة هو الموت. [SEP]',
'score': 0.033172328025102615,
'token': 4295,
'token_str': 'الموت'},
{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.029575437307357788,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو الرحيل. [SEP]',
'score': 0.02724040113389492,
'token': 11449,
'token_str': 'الرحيل'},
{'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
'score': 0.01564178802073002,
'token': 3088,
'token_str': 'الحب'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-da')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-da')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-da')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-da')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- DA (dialectal Arabic)
- A collection of dialectal Arabic data described in [our paper](https://arxiv.org/abs/2103.06678).
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents. A rough sketch of these preprocessing steps is shown after this list.
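The following is a rough, illustrative sketch of the cleaning and tokenizer-training steps above, not the actual pipeline: the corpus file name is a placeholder, the regular expressions only approximate what the BERT utilities and CAMeL Tools do, and the `tokenizers` API may differ slightly across versions.
```python
import re
from tokenizers import BertWordPieceTokenizer

# Approximate character classes (the real pipeline uses the BERT cleaning
# utilities and CAMeL Tools rather than these regular expressions).
ARABIC_LETTERS = re.compile(r'[\u0621-\u064A]')            # any Arabic letter
DIACRITICS_KASHIDA = re.compile(r'[\u064B-\u0652\u0640]')  # harakat + tatweel

def clean_line(line):
    """Normalize white space, drop non-Arabic lines, remove diacritics and kashida."""
    line = ' '.join(line.split())
    if not ARABIC_LETTERS.search(line):
        return None
    return DIACRITICS_KASHIDA.sub('', line)

with open('corpus_raw.txt', encoding='utf-8') as f_in, \
     open('corpus_clean.txt', 'w', encoding='utf-8') as f_out:
    for raw_line in f_in:
        cleaned = clean_line(raw_line)
        if cleaned:
            f_out.write(cleaned + '\n')

# WordPiece tokenizer with a 30,000-token vocabulary,
# without lowercasing or accent stripping.
tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)
tokenizer.train(files=['corpus_clean.txt'], vocab_size=30000)
tokenizer.save_model('.')
```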
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. A small sketch of this learning-rate schedule is shown after this list.
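The sketch below is only an illustration of the warmup-then-linear-decay schedule described in the last bullet (the actual pre-training used Google's original BERT implementation on TPU); the step counts and peak learning rate are taken from the bullets above.
```python
PEAK_LR = 1e-4
WARMUP_STEPS = 10_000
TOTAL_STEPS = 1_000_000

def learning_rate(step):
    """Linear warmup for the first 10,000 steps, then linear decay to zero at 1M steps."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return PEAK_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))

for step in (0, 5_000, 10_000, 500_000, 1_000_000):
    print(f"step {step:>9,}: lr = {learning_rate(step):.2e}")
```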
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state (a minimal sketch of this setup is shown after this list).
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
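As an illustration of the fine-tuning setup described above (a fully connected linear layer over the last hidden state), here is a minimal, hypothetical PyTorch sketch for a token-level task. It is not the released fine-tuning code; the number of labels is a placeholder and the training loop is omitted.
```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class CamelBertTagger(nn.Module):
    """CAMeLBERT encoder with a linear classification head over the last hidden state."""
    def __init__(self, model_name='CAMeL-Lab/bert-base-arabic-camelbert-da', num_labels=5):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        last_hidden_state = outputs[0]              # (batch, seq_len, hidden_size)
        return self.classifier(last_hidden_state)   # per-token logits

tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-da')
model = CamelBertTagger()
batch = tokenizer(['مرحبا يا عالم.'], return_tensors='pt')
logits = model(batch['input_ids'], batch['attention_mask'])
print(logits.shape)  # e.g. torch.Size([1, 7, 5])
```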
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
| {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0627\u0644\u0647\u062f\u0641 \u0645\u0646 \u0627\u0644\u062d\u064a\u0627\u0629 \u0647\u0648 [MASK] ."}]} | CAMeL-Lab/bert-base-arabic-camelbert-da | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
==================================================================
Model description
-----------------
CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."*
This model card describes CAMeLBERT-DA ('bert-base-arabic-camelbert-da'), a model pre-trained on the DA (dialectal Arabic) dataset.
Intended uses
-------------
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuninig code here.
#### How to use
You can use this model directly with a pipeline for masked language modeling:
*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Training data
-------------
* DA (dialectal Arabic)
+ A collection of dialectal Arabic data described in our paper.
Training procedure
------------------
We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
* After extracting the raw text from each corpus, we apply the following pre-processing.
* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.
* We also remove lines without any Arabic characters.
* We then remove diacritics and kashida using CAMeL Tools.
* Finally, we split each line into sentences with a heuristics-based sentence segmenter.
* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.
* We do not lowercase letters nor strip accents.
### Pre-training
* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.
* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
* We use whole word masking and a duplicate factor of 10.
* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
* The optimizer used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
* We fine-tune and evaluate the models using 12 dataset.
* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.
* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
* We use \(F\_{1}\) score as a metric for all tasks.
* Code used for fine-tuning is available here.
### Results
### Results (Average)
[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.
Acknowledgements
----------------
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
| [
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* DA (dialectal Arabic)\n\t+ A collection of dialectal Arabic data described in our paper.\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* DA (dialectal Arabic)\n\t+ A collection of dialectal Arabic data described in our paper.\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] |
text-classification | transformers | # CAMeLBERT-Mix DID Madar Corpus26 Model
## Model description
**CAMeLBERT-Mix DID Madar Corpus26 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [MADAR Corpus 26](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 26 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix DID Madar Corpus26 model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar26')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'CAI', 'score': 0.8751305937767029},
{'label': 'DOH', 'score': 0.9867215156555176}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
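If you need the scores for all 26 dialect labels rather than only the top prediction, the pipeline can return the full distribution. The snippet below is a minimal, illustrative sketch (not part of the original card); it assumes the `return_all_scores` pipeline argument of transformers 3.x/early 4.x, which newer releases replace with `top_k=None`.
```python
>>> from transformers import pipeline
>>> # return_all_scores=True yields one list of label/score dicts per input sentence
>>> # (newer transformers versions use top_k=None instead)
>>> did = pipeline('text-classification',
...                model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar26',
...                return_all_scores=True)
>>> scores = did(['عامل ايه ؟'])[0]
>>> top3 = sorted(scores, key=lambda d: d['score'], reverse=True)[:3]
>>> [(d['label'], round(d['score'], 3)) for d in top3]
```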
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0639\u0627\u0645\u0644 \u0627\u064a\u0647 \u061f"}]} | CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26 | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-Mix DID Madar Corpus26 Model
## Model description
CAMeLBERT-Mix DID Madar Corpus26 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the MADAR Corpus 26 dataset, which includes 26 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-Mix DID Madar Corpus26 model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-Mix DID Madar Corpus26 Model",
"## Model description\nCAMeLBERT-Mix DID Madar Corpus26 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.\nFor the fine-tuning, we used the MADAR Corpus 26 dataset, which includes 26 labels.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix DID Madar Corpus26 model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-Mix DID Madar Corpus26 Model",
"## Model description\nCAMeLBERT-Mix DID Madar Corpus26 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.\nFor the fine-tuning, we used the MADAR Corpus 26 dataset, which includes 26 labels.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix DID Madar Corpus26 model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
text-classification | transformers | # CAMeLBERT-Mix DID MADAR Corpus6 Model
## Model description
**CAMeLBERT-Mix DID MADAR Corpus6 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [MADAR Corpus 6](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 6 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix DID MADAR Corpus6 model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar6')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'CAI', 'score': 0.9996405839920044},
{'label': 'DOH', 'score': 0.9997853636741638}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0639\u0627\u0645\u0644 \u0627\u064a\u0647 \u061f"}]} | CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6 | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-Mix DID MADAR Corpus6 Model
## Model description
CAMeLBERT-Mix DID MADAR Corpus6 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the MADAR Corpus 6 dataset, which includes 6 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-Mix DID MADAR Corpus6 model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.
| [
"# CAMeLBERT-Mix DID MADAR Corpus6 Model",
"## Model description\nCAMeLBERT-Mix DID MADAR Corpus6 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.\nFor the fine-tuning, we used the MADAR Corpus 6 dataset, which includes 6 labels.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix DID MADAR Corpus6 model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models"
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-Mix DID MADAR Corpus6 Model",
"## Model description\nCAMeLBERT-Mix DID MADAR Corpus6 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.\nFor the fine-tuning, we used the MADAR Corpus 6 dataset, which includes 6 labels.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix DID MADAR Corpus6 model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models"
] |
text-classification | transformers | # CAMeLBERT-Mix DID NADI Model
## Model description
**CAMeLBERT-Mix DID NADI Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [NADI Country-level](https://sites.google.com/view/nadi-shared-task) dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix DID NADI model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.920274019241333},
{'label': 'Saudi_Arabia', 'score': 0.26750022172927856}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0639\u0627\u0645\u0644 \u0627\u064a\u0647 \u061f"}]} | CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-Mix DID NADI Model
## Model description
CAMeLBERT-Mix DID NADI Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the NADI Country-level dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-Mix DID NADI model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-Mix DID NADI Model",
"## Model description\nCAMeLBERT-Mix DID NADI Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.\nFor the fine-tuning, we used the NADI Coountry-level dataset, which includes 21 labels.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix DID NADI model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-Mix DID NADI Model",
"## Model description\nCAMeLBERT-Mix DID NADI Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.\nFor the fine-tuning, we used the NADI Coountry-level dataset, which includes 21 labels.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix DID NADI model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
token-classification | transformers | # CAMeLBERT-Mix NER Model
## Model description
**CAMeLBERT-Mix NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678).
"* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component:
```python
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-mix-ner')
>>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع')
>>> ner.predict_sentence(sentence)
>>> ['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
```
You can also use the NER model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-ner')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
[{'word': 'أبوظبي',
'score': 0.9895730018615723,
'entity': 'B-LOC',
'index': 2,
'start': 6,
'end': 12},
{'word': 'الإمارات',
'score': 0.8156259655952454,
'entity': 'B-LOC',
'index': 8,
'start': 33,
'end': 41},
{'word': 'العربية',
'score': 0.890906810760498,
'entity': 'I-LOC',
'index': 9,
'start': 42,
'end': 49},
{'word': 'المتحدة',
'score': 0.8169114589691162,
'entity': 'I-LOC',
'index': 10,
'start': 50,
'end': 57}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
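The transformers pipeline output above is per WordPiece token, so a multi-word entity such as الإمارات العربية المتحدة comes back as separate B-LOC/I-LOC pieces. As an illustrative sketch (not part of the original card), you can ask the pipeline to merge these pieces into full entity spans; `grouped_entities=True` is the older argument name, and newer transformers versions expose the same behaviour as `aggregation_strategy="simple"`.
```python
>>> from transformers import pipeline
>>> # grouped_entities=True merges consecutive B-/I- pieces into full entity spans
>>> ner = pipeline('ner',
...                model='CAMeL-Lab/bert-base-arabic-camelbert-mix-ner',
...                grouped_entities=True)
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
>>> # expected shape of the result (scores omitted):
>>> # [{'entity_group': 'LOC', 'word': 'أبوظبي', ...},
>>> #  {'entity_group': 'LOC', 'word': 'الإمارات العربية المتحدة', ...}]
```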
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0625\u0645\u0627\u0631\u0629 \u0623\u0628\u0648\u0638\u0628\u064a \u0647\u064a \u0625\u062d\u062f\u0649 \u0625\u0645\u0627\u0631\u0627\u062a \u062f\u0648\u0644\u0629 \u0627\u0644\u0625\u0645\u0627\u0631\u0627\u062a \u0627\u0644\u0639\u0631\u0628\u064a\u0629 \u0627\u0644\u0645\u062a\u062d\u062f\u0629 \u0627\u0644\u0633\u0628\u0639"}]} | CAMeL-Lab/bert-base-arabic-camelbert-mix-ner | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| # CAMeLBERT-Mix NER Model
## Model description
CAMeLBERT-Mix NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Mix model.
For the fine-tuning, we used the ANERcorp dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.
"* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-Mix NER model directly as part of our CAMeL Tools NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the CAMeL Tools NER component:
You can also use the NER model directly with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-Mix NER Model",
"## Model description\nCAMeLBERT-Mix NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Mix model.\nFor the fine-tuning, we used the ANERcorp dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\n\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix NER model directly as part of our CAMeL Tools NER component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools NER component:\n\nYou can also use the NER model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# CAMeLBERT-Mix NER Model",
"## Model description\nCAMeLBERT-Mix NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Mix model.\nFor the fine-tuning, we used the ANERcorp dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\n\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix NER model directly as part of our CAMeL Tools NER component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools NER component:\n\nYou can also use the NER model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
text-classification | transformers | # CAMeLBERT-Mix Poetry Classification Model
## Model description
**CAMeLBERT-Mix Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9937475919723511},
{'label': 'الكامل', 'score': 0.971284031867981}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0627\u0644\u062e\u064a\u0644 \u0648\u0627\u0644\u0644\u064a\u0644 \u0648\u0627\u0644\u0628\u064a\u062f\u0627\u0621 \u062a\u0639\u0631\u0641\u0646\u064a [SEP] \u0648\u0627\u0644\u0633\u064a\u0641 \u0648\u0627\u0644\u0631\u0645\u062d \u0648\u0627\u0644\u0642\u0631\u0637\u0627\u0633 \u0648\u0627\u0644\u0642\u0644\u0645"}]} | CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1905.05700",
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #ar #arxiv-1905.05700 #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-Mix Poetry Classification Model
## Model description
CAMeLBERT-Mix Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Mix model.
For the fine-tuning, we used the APCD dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-Mix Poetry Classification model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-Mix Poetry Classification Model",
"## Model description\nCAMeLBERT-Mix Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Mix model.\nFor the fine-tuning, we used the APCD dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix Poetry Classification model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-1905.05700 #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-Mix Poetry Classification Model",
"## Model description\nCAMeLBERT-Mix Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Mix model.\nFor the fine-tuning, we used the APCD dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix Poetry Classification model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
token-classification | transformers | # CAMeLBERT-Mix POS-EGY Model
## Model description
**CAMeLBERT-Mix POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy')
>>> text = 'عامل ايه ؟'
>>> pos(text)
[{'entity': 'adj', 'score': 0.9972628, 'index': 1, 'word': 'عامل', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.9525163, 'index': 2, 'word': 'ايه', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.99869114, 'index': 3, 'word': '؟', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0639\u0627\u0645\u0644 \u0627\u064a\u0647 \u061f"}]} | CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-Mix POS-EGY Model
## Model description
CAMeLBERT-Mix POS-EGY Model is an Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-Mix POS-EGY model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-Mix POS-EGY Model",
"## Model description\nCAMeLBERT-Mix POS-EGY Model is a Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.\nFor the fine-tuning, we used the ARZTB dataset .\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix POS-EGY model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-Mix POS-EGY Model",
"## Model description\nCAMeLBERT-Mix POS-EGY Model is a Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.\nFor the fine-tuning, we used the ARZTB dataset .\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix POS-EGY model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
token-classification | transformers | # CAMeLBERT-Mix POS-GLF Model
## Model description
**CAMeLBERT-Mix POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix POS-GLF model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf')
>>> text = 'شلونك ؟ شخبارك ؟'
>>> pos(text)
[{'entity': 'pron_interrog', 'score': 0.82657206, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'prep', 'score': 0.9771731, 'index': 2, 'word': '##ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999568, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.9977217, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.99993783, 'index': 5, 'word': '##خبار', 'start': 9, 'end': 13}, {'entity': 'prep', 'score': 0.5309442, 'index': 6, 'word': '##ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999575, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0634\u0644\u0648\u0646\u0643 \u061f \u0634\u062e\u0628\u0627\u0631\u0643 \u061f"}]} | CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-Mix POS-GLF Model
## Model description
CAMeLBERT-Mix POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the Gumar dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-Mix POS-GLF model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-Mix POS-GLF Model",
"## Model description\nCAMeLBERT-Mix POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.\nFor the fine-tuning, we used the Gumar dataset .\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix POS-GLF model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-Mix POS-GLF Model",
"## Model description\nCAMeLBERT-Mix POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.\nFor the fine-tuning, we used the Gumar dataset .\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix POS-GLF model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
token-classification | transformers | # CAMeLBERT-Mix POS-MSA Model
## Model description
**CAMeLBERT-Mix POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix POS-MSA model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa')
>>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
>>> pos(text)
[{'entity': 'noun', 'score': 0.9999592, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.9997877, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.9998405, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.9697179, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.99967164, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.99980617, 'index': 6, 'word': '##رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.99997973, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.99995637, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.9983974, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.9999469, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.9993273, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
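Because the tagger labels WordPiece tokens, a word such as إمارات can come back split into two pieces ('إما' + '##رات'), as in the output above. The helper below is a small illustrative sketch, not part of the original card: it re-attaches '##' continuation pieces to the previous word and keeps the tag predicted for the first piece.
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa')
>>> def merge_wordpieces(predictions):
...     """Merge '##' continuation pieces back into whole words, keeping the first piece's tag."""
...     words = []
...     for p in predictions:
...         if p['word'].startswith('##') and words:
...             words[-1] = (words[-1][0] + p['word'][2:], words[-1][1])
...         else:
...             words.append((p['word'], p['entity']))
...     return words
...
>>> merge_wordpieces(pos('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'))
```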
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0625\u0645\u0627\u0631\u0629 \u0623\u0628\u0648\u0638\u0628\u064a \u0647\u064a \u0625\u062d\u062f\u0649 \u0625\u0645\u0627\u0631\u0627\u062a \u062f\u0648\u0644\u0629 \u0627\u0644\u0625\u0645\u0627\u0631\u0627\u062a \u0627\u0644\u0639\u0631\u0628\u064a\u0629 \u0627\u0644\u0645\u062a\u062d\u062f\u0629 \u0627\u0644\u0633\u0628\u0639"}]} | CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-Mix POS-MSA Model
## Model description
CAMeLBERT-Mix POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the PATB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-Mix POS-MSA model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-Mix POS-MSA Model",
"## Model description\nCAMeLBERT-Mix POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.\nFor the fine-tuning, we used the PATB dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix POS-MSA model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-Mix POS-MSA Model",
"## Model description\nCAMeLBERT-Mix POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.\nFor the fine-tuning, we used the PATB dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-Mix POS-MSA model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
text-classification | transformers | # CAMeLBERT Mix SA Model
## Model description
**CAMeLBERT Mix SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT Mix SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
>>> ['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('sentiment-analysis', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
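If you prefer to call the model and tokenizer directly instead of going through the pipeline (for example, to see the probabilities of both labels at once), the following sketch shows the equivalent computation. It is an illustrative example rather than part of the original card; the label names are read from the fine-tuned model's config.
```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> name = 'CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment'
>>> tokenizer = AutoTokenizer.from_pretrained(name)
>>> model = AutoModelForSequenceClassification.from_pretrained(name)
>>> inputs = tokenizer(['أنا بخير', 'أنا لست بخير'], return_tensors='pt', padding=True)
>>> with torch.no_grad():
...     probs = torch.softmax(model(**inputs).logits, dim=-1)
>>> # map each column index to its label name via the model config
>>> [{model.config.id2label[i]: round(p, 3) for i, p in enumerate(row.tolist())} for row in probs]
```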
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0623\u0646\u0627 \u0628\u062e\u064a\u0631"}]} | CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT Mix SA Model
## Model description
CAMeLBERT Mix SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Mix model.
For the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT Mix SA model directly as part of our CAMeL Tools SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the CAMeL Tools SA component:
You can also use the SA model directly with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT Mix SA Model",
"## Model description\nCAMeLBERT Mix SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Mix model.\nFor the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT Mix SA model directly as part of our CAMeL Tools SA component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools SA component:\n\nYou can also use the SA model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT Mix SA Model",
"## Model description\nCAMeLBERT Mix SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Mix model.\nFor the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT Mix SA model directly as part of our CAMeL Tools SA component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools SA component:\n\nYou can also use the SA model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
fill-mask | transformers |
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-Mix** (`bert-base-arabic-camelbert-mix`), a model pre-trained on a mixture of these variants: MSA, DA, and CA.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
|✔|`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-mix')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
'score': 0.10861027985811234,
'token': 6232,
'token_str': 'النجاح'},
{'sequence': '[CLS] الهدف من الحياة هو.. [SEP]',
'score': 0.07626965641975403,
'token': 18,
'token_str': '.'},
{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.05131986364722252,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو الموت. [SEP]',
'score': 0.03734956309199333,
'token': 4295,
'token_str': 'الموت'},
{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
'score': 0.027189988642930984,
'token': 2854,
'token_str': 'العمل'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
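If you need one vector per sentence rather than per-token features, a common option is to mean-pool the last hidden state over the attention mask. The pooling strategy below is our assumption for illustration only; the model itself does not prescribe one.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix')

text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    output = model(**encoded_input)

last_hidden = output[0]  # last hidden state, shape (batch, seq_len, hidden_size)
mask = encoded_input['attention_mask'].unsqueeze(-1).float()

# Average token embeddings, ignoring padding positions.
sentence_embedding = (last_hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```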
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
- DA (dialectal Arabic)
- A collection of dialectal Arabic data described in [our paper](https://arxiv.org/abs/2103.06678).
- CA (classical Arabic)
- [OpenITI (Version 2020.1.2)](https://zenodo.org/record/3891466#.YEX4-F0zbzc)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) (a rough sketch of these cleaning steps is given after this list).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
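A rough, illustrative version of the cleaning steps above, written with plain regular expressions rather than the exact BERT and CAMeL Tools utilities, might look like this; the character ranges are approximations.
```python
import re
from typing import Optional

# Approximate ranges: Arabic diacritics plus superscript alef, kashida (tatweel), Arabic letters.
DIACRITICS = re.compile(r'[\u064B-\u0652\u0670]')
KASHIDA = '\u0640'
ARABIC_LETTER = re.compile(r'[\u0621-\u064A]')

def clean_line(line: str) -> Optional[str]:
    """Return a cleaned line, or None if the line contains no Arabic characters."""
    line = ' '.join(line.split())          # normalize white spaces
    if not ARABIC_LETTER.search(line):     # drop lines without any Arabic characters
        return None
    line = DIACRITICS.sub('', line)        # remove diacritics
    return line.replace(KASHIDA, '')       # remove kashida

cleaned = [clean_line(l) for l in ['الهدف من الحياة هو النجاح .', '1234 abc']]
print(cleaned)  # ['الهدف من الحياة هو النجاح .', None]
```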
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
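As a rough PyTorch illustration of the optimizer settings above (the original pre-training used Google's TensorFlow implementation, and BERT additionally excludes biases and LayerNorm weights from weight decay, which is omitted here for brevity):
```python
import torch
from transformers import AutoModelForMaskedLM, get_linear_schedule_with_warmup

model = AutoModelForMaskedLM.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix')

# Adam with weight decay, 10,000 warmup steps, then linear decay over 1M total steps.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=1_000_000)
```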
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state (see the sketch after this list).
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
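The linear-head fine-tuning described above corresponds, roughly, to loading the pre-trained encoder with a sequence-classification head, as in the sketch below; the label count is only an example for a 3-way sentence-level task, and token-level tasks (NER, POS) would use a token-classification head instead.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix')

# Adds a randomly initialized linear classification layer on top of the encoder.
model = AutoModelForSequenceClassification.from_pretrained(
    'CAMeL-Lab/bert-base-arabic-camelbert-mix', num_labels=3)

# From here, train with a standard loop or the Trainer API on the task dataset.
```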
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
| {"language": ["ar"], "license": "apache-2.0", "tags": ["Arabic", "Dialect", "Egyptian", "Gulf", "Levantine", "Classical Arabic", "MSA", "Modern Standard Arabic"], "widget": [{"text": "\u0627\u0644\u0647\u062f\u0641 \u0645\u0646 \u0627\u0644\u062d\u064a\u0627\u0629 \u0647\u0648 [MASK] ."}]} | CAMeL-Lab/bert-base-arabic-camelbert-mix | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"Arabic",
"Dialect",
"Egyptian",
"Gulf",
"Levantine",
"Classical Arabic",
"MSA",
"Modern Standard Arabic",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #Arabic #Dialect #Egyptian #Gulf #Levantine #Classical Arabic #MSA #Modern Standard Arabic #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
==================================================================
Model description
-----------------
CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."*
This model card describes CAMeLBERT-Mix ('bert-base-arabic-camelbert-mix'), a model pre-trained on a mixture of these variants: MSA, DA, and CA.
Intended uses
-------------
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here.
#### How to use
You can use this model directly with a pipeline for masked language modeling:
*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Training data
-------------
* MSA (Modern Standard Arabic)
+ The Arabic Gigaword Fifth Edition
+ Abu El-Khair Corpus
+ OSIAN corpus
+ Arabic Wikipedia
+ The unshuffled version of the Arabic OSCAR corpus
* DA (dialectal Arabic)
+ A collection of dialectal Arabic data described in our paper.
* CA (classical Arabic)
+ OpenITI (Version 2020.1.2)
Training procedure
------------------
We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
* After extracting the raw text from each corpus, we apply the following pre-processing.
* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.
* We also remove lines without any Arabic characters.
* We then remove diacritics and kashida using CAMeL Tools.
* Finally, we split each line into sentences with a heuristics-based sentence segmenter.
* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.
* We do not lowercase letters nor strip accents.
### Pre-training
* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.
* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
* We use whole word masking and a duplicate factor of 10.
* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
* The optimizer used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
* We fine-tune and evaluate the models using 12 datasets.
* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.
* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
* We use \(F\_{1}\) score as a metric for all tasks.
* Code used for fine-tuning is available here.
### Results
### Results (Average)
[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.
Acknowledgements
----------------
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
| [
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n* DA (dialectal Arabic)\n\t+ A collection of dialectal Arabic data described in our paper.\n* CA (classical Arabic)\n\t+ OpenITI (Version 2020.1.2)\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #Arabic #Dialect #Egyptian #Gulf #Levantine #Classical Arabic #MSA #Modern Standard Arabic #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n* DA (dialectal Arabic)\n\t+ A collection of dialectal Arabic data described in our paper.\n* CA (classical Arabic)\n\t+ OpenITI (Version 2020.1.2)\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] |
text-classification | transformers | # CAMeLBERT-MSA DID MADAR Twitter-5 Model
## Model description
**CAMeLBERT-MSA DID MADAR Twitter-5 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [MADAR Twitter-5](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA DID MADAR Twitter-5 model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.5741344094276428},
{'label': 'Kuwait', 'score': 0.5225679278373718}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0639\u0627\u0645\u0644 \u0627\u064a\u0647 \u061f"}]} | CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5 | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-MSA DID MADAR Twitter-5 Model
## Model description
CAMeLBERT-MSA DID MADAR Twitter-5 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-MSA model.
For the fine-tuning, we used the MADAR Twitter-5 dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-MSA DID MADAR Twitter-5 model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-MSA DID MADAR Twitter-5 Model",
"## Model description\nCAMeLBERT-MSA DID MADAR Twitter-5 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-MSA model.\nFor the fine-tuning, we used the MADAR Twitter-5 dataset, which includes 21 labels.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-MSA DID MADAR Twitter-5 model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-MSA DID MADAR Twitter-5 Model",
"## Model description\nCAMeLBERT-MSA DID MADAR Twitter-5 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-MSA model.\nFor the fine-tuning, we used the MADAR Twitter-5 dataset, which includes 21 labels.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-MSA DID MADAR Twitter-5 model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
text-classification | transformers | # CAMeLBERT-MSA DID NADI Model
## Model description
**CAMeLBERT-MSA DID NADI Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [NADI Country-level](https://sites.google.com/view/nadi-shared-task) dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA DID NADI model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.9242768287658691},
{'label': 'Saudi_Arabia', 'score': 0.3400847613811493}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
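Since the country-level task has 21 labels, you may want the full score distribution rather than only the top label. Depending on your `transformers` version, something like the following should work; `return_all_scores` is the older pipeline argument, and newer versions use `top_k=None` instead.
```python
from transformers import pipeline

did = pipeline('text-classification',
               model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi',
               return_all_scores=True)

# One entry per input sentence, each holding 21 {'label', 'score'} dicts.
scores = did(['عامل ايه ؟'])
```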
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0639\u0627\u0645\u0644 \u0627\u064a\u0647 \u061f"}]} | CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-MSA DID NADI Model
## Model description
CAMeLBERT-MSA DID NADI Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.
For the fine-tuning, we used the NADI Country-level dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-MSA DID NADI model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-MSA DID NADI Model",
"## Model description\nCAMeLBERT-MSA DID NADI Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.\nFor the fine-tuning, we used the NADI Coountry-level dataset, which includes 21 labels.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-MSA DID NADI model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-MSA DID NADI Model",
"## Model description\nCAMeLBERT-MSA DID NADI Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.\nFor the fine-tuning, we used the NADI Coountry-level dataset, which includes 21 labels.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-MSA DID NADI model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
fill-mask | transformers |
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-MSA-eighth** (`bert-base-arabic-camelbert-msa-eighth`), a model pre-trained on an eighth of the full MSA dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
|✔|`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.057812128216028214,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
'score': 0.05573025345802307,
'token': 6232,
'token_str': 'النجاح'},
{'sequence': '[CLS] الهدف من الحياة هو الكمال. [SEP]',
'score': 0.035942986607551575,
'token': 17188,
'token_str': 'الكمال'},
{'sequence': '[CLS] الهدف من الحياة هو التعلم. [SEP]',
'score': 0.03375256434082985,
'token': 12554,
'token_str': 'التعلم'},
{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
'score': 0.030303971841931343,
'token': 2854,
'token_str': 'العمل'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers) (a small training sketch follows this list).
- We do not lowercase letters nor strip accents.
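A small sketch of how such a WordPiece tokenizer could be trained with HuggingFace's `tokenizers` library is shown below; the file paths are placeholders and the exact training options used for CAMeLBERT may differ.
```python
import os
from tokenizers import BertWordPieceTokenizer

# Keep the original casing and accents, as described above.
tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)

# Train on the pre-processed corpus files (paths are placeholders).
tokenizer.train(files=['corpus/part-000.txt', 'corpus/part-001.txt'], vocab_size=30000)

os.makedirs('camelbert-tokenizer', exist_ok=True)
tokenizer.save_model('camelbert-tokenizer')  # writes vocab.txt to this directory
```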
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
- We use \\(F_{1}\\) score as a metric for all tasks (see the sketch after this list).
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
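As an illustration of the metric, a macro-averaged F1 over predicted labels could be computed as follows; scikit-learn and the toy labels are our assumptions, and the averaging scheme may differ per task in the original evaluation.
```python
from sklearn.metrics import f1_score

# Toy gold/predicted country labels for a dialect identification run.
gold = ['Egypt', 'Egypt', 'Saudi_Arabia', 'Kuwait']
pred = ['Egypt', 'Kuwait', 'Saudi_Arabia', 'Kuwait']

print(f1_score(gold, pred, average='macro'))  # macro-averaged F1 across labels
```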
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
| {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0627\u0644\u0647\u062f\u0641 \u0645\u0646 \u0627\u0644\u062d\u064a\u0627\u0629 \u0647\u0648 [MASK] ."}]} | CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
==================================================================
Model description
-----------------
CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."*
This model card describes CAMeLBERT-MSA-eighth ('bert-base-arabic-camelbert-msa-eighth'), a model pre-trained on an eighth of the full MSA dataset.
Intended uses
-------------
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here.
#### How to use
You can use this model directly with a pipeline for masked language modeling:
*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Training data
-------------
* MSA (Modern Standard Arabic)
+ The Arabic Gigaword Fifth Edition
+ Abu El-Khair Corpus
+ OSIAN corpus
+ Arabic Wikipedia
+ The unshuffled version of the Arabic OSCAR corpus
Training procedure
------------------
We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
* After extracting the raw text from each corpus, we apply the following pre-processing.
* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.
* We also remove lines without any Arabic characters.
* We then remove diacritics and kashida using CAMeL Tools.
* Finally, we split each line into sentences with a heuristics-based sentence segmenter.
* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.
* We do not lowercase letters nor strip accents.
### Pre-training
* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.
* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
* We use whole word masking and a duplicate factor of 10.
* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
* The optimizer used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
* We fine-tune and evaluate the models using 12 datasets.
* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.
* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
* We use \(F\_{1}\) score as a metric for all tasks.
* Code used for fine-tuning is available here.
### Results
### Results (Average)
[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.
Acknowledgements
----------------
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
| [
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] |
fill-mask | transformers |
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-MSA-half** (`bert-base-arabic-camelbert-msa-half`), a model pre-trained on a half of the full MSA dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
|✔|`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-half')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.09132730215787888,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو.. [SEP]',
'score': 0.08282623440027237,
'token': 18,
'token_str': '.'},
{'sequence': '[CLS] الهدف من الحياة هو البقاء. [SEP]',
'score': 0.04031957685947418,
'token': 9331,
'token_str': 'البقاء'},
{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
'score': 0.032019514590501785,
'token': 6232,
'token_str': 'النجاح'},
{'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
'score': 0.028731243684887886,
'token': 3088,
'token_str': 'الحب'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
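The card above also lists next sentence prediction as an intended use, but only masked language modeling and feature extraction are illustrated. The snippet below is a minimal sketch of NSP scoring, assuming the released checkpoint includes the standard BERT NSP head and a recent `transformers` (4.x) API; the Arabic sentence pair is purely illustrative:
```python
import torch
from transformers import AutoTokenizer, BertForNextSentencePrediction

tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half')
model = BertForNextSentencePrediction.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half')

sentence_a = "مرحبا يا عالم."   # illustrative sentence pair, not from the card
sentence_b = "كيف حالك اليوم؟"
encoded = tokenizer(sentence_a, sentence_b, return_tensors='pt')

with torch.no_grad():
    logits = model(**encoded).logits  # shape (1, 2)

# In the standard BERT NSP head, index 0 = "sentence B follows A", index 1 = "random pair".
prob_is_next = torch.softmax(logits, dim=-1)[0, 0].item()
print(prob_is_next)
```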
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers) (see the sketch after this list).
- We do not lowercase letters nor strip accents.
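The cleaning and tokenizer-training steps above can be sketched as follows. This is not the authors' exact script: the input line and the `corpus.txt` path are placeholders, kashida (tatweel) removal is done with a plain string replace, and only the WordPiece settings explicitly stated above (cased, accents kept, 30,000-token vocabulary) are assumed:
```python
from camel_tools.utils.dediac import dediac_ar
from tokenizers import BertWordPieceTokenizer

line = "الهدفُ مِنَ الحيــاةِ هو العمل."        # placeholder input line
clean = dediac_ar(line).replace("\u0640", "")  # strip diacritics, then kashida (tatweel)
print(clean)

# Cased, accent-preserving WordPiece tokenizer with a 30,000-token vocabulary.
tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)
tokenizer.train(files=["corpus.txt"], vocab_size=30000)  # "corpus.txt" is a placeholder path
tokenizer.save_model(".")
```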
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after (see the sketch after this list).
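As an illustration only (the actual pre-training used Google's original TensorFlow BERT code, not the snippet below), the optimizer and learning-rate schedule described above map onto PyTorch/`transformers` utilities roughly as follows; the `BertConfig` is a stand-in with BERT-base defaults and `AdamW` is used as the usual equivalent of BERT's weight-decayed Adam:
```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

config = BertConfig(vocab_size=30000)   # BERT-base architecture defaults, 30k WordPiece vocab
model = BertForPreTraining(config)      # MLM + NSP heads, as in the original BERT objective

# lr 1e-4, betas (0.9, 0.999), weight decay 0.01.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps, then linear decay over the one million total steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)
```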
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state (see the sketch after this list).
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
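A minimal sketch of the fine-tuning setup described above, using a token-level task as the example; the label count, label ids, and the omitted training loop are placeholders rather than the authors' actual fine-tuning code (which is linked above):
```python
from sklearn.metrics import f1_score
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_name = 'CAMeL-Lab/bert-base-arabic-camelbert-msa-half'
tokenizer = AutoTokenizer.from_pretrained(model_name)

# A linear classification layer is placed on top of the last hidden state;
# num_labels=9 is an assumed tag-set size (e.g. an ANERcorp-style BIO scheme).
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=9)

# ... standard training loop / Trainer over the task dataset goes here ...

# F1 over gold vs. predicted label ids (placeholder values).
y_true = [1, 2, 0, 0, 3]
y_pred = [1, 2, 0, 0, 1]
print(f1_score(y_true, y_pred, average='macro'))
```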
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
| {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0627\u0644\u0647\u062f\u0641 \u0645\u0646 \u0627\u0644\u062d\u064a\u0627\u0629 \u0647\u0648 [MASK] ."}]} | CAMeL-Lab/bert-base-arabic-camelbert-msa-half | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
==================================================================
Model description
-----------------
CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."*
This model card describes CAMeLBERT-MSA-half ('bert-base-arabic-camelbert-msa-half'), a model pre-trained on a half of the full MSA dataset.
Intended uses
-------------
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here.
#### How to use
You can use this model directly with a pipeline for masked language modeling:
*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
Training data
-------------
* MSA (Modern Standard Arabic)
+ The Arabic Gigaword Fifth Edition
+ Abu El-Khair Corpus
+ OSIAN corpus
+ Arabic Wikipedia
+ The unshuffled version of the Arabic OSCAR corpus
Training procedure
------------------
We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
* After extracting the raw text from each corpus, we apply the following pre-processing.
* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.
* We also remove lines without any Arabic characters.
* We then remove diacritics and kashida using CAMeL Tools.
* Finally, we split each line into sentences with a heuristics-based sentence segmenter.
* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.
* We do not lowercase letters nor strip accents.
### Pre-training
* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.
* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
* We use whole word masking and a duplicate factor of 10.
* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
* The optimizer used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
* We fine-tune and evaluate the models using 12 datasets.
* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.
* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
* We use \(F\_{1}\) score as a metric for all tasks.
* Code used for fine-tuning is available here.
### Results
### Results (Average)
[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.
Acknowledgements
----------------
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
| [
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'. Otherwise, you could download the models manually.\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\n* MSA (Modern Standard Arabic)\n\t+ The Arabic Gigaword Fifth Edition\n\t+ Abu El-Khair Corpus\n\t+ OSIAN corpus\n\t+ Arabic Wikipedia\n\t+ The unshuffled version of the Arabic OSCAR corpus\n\n\nTraining procedure\n------------------\n\n\nWe use the original implementation released by Google for pre-training.\nWe follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.",
"### Preprocessing\n\n\n* After extracting the raw text from each corpus, we apply the following pre-processing.\n* We first remove invalid characters and normalize white spaces using the utilities provided by the original BERT implementation.\n* We also remove lines without any Arabic characters.\n* We then remove diacritics and kashida using CAMeL Tools.\n* Finally, we split each line into sentences with a heuristics-based sentence segmenter.\n* We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using HuggingFace's tokenizers.\n* We do not lowercase letters nor strip accents.",
"### Pre-training\n\n\n* The model was trained on a single cloud TPU ('v3-8') for one million steps in total.\n* The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.\n* The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.\n* We use whole word masking and a duplicate factor of 10.\n* We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.\n* We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.\n* The optimizer used is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\n* We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.\n* We fine-tune and evaluate the models using 12 dataset.\n* We used Hugging Face's transformers to fine-tune our CAMeLBERT models.\n* We used transformers 'v3.1.0' along with PyTorch 'v1.5.1'.\n* The fine-tuning was done by adding a fully connected linear layer to the last hidden state.\n* We use \\(F\\_{1}\\) score as a metric for all tasks.\n* Code used for fine-tuning is available here.",
"### Results",
"### Results (Average)\n\n\n\n[1]: Variant-wise-average refers to average over a group of tasks in the same language variant.\n\n\nAcknowledgements\n----------------\n\n\nThis research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC)."
] |
token-classification | transformers | # CAMeLBERT MSA NER Model
## Model description
**CAMeLBERT MSA NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT MSA NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component:
```python
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-msa-ner')
>>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع')
>>> ner.predict_sentence(sentence)
>>> ['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
```
You can also use the NER model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-ner')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
[{'word': 'أبوظبي',
'score': 0.9895730018615723,
'entity': 'B-LOC',
'index': 2,
'start': 6,
'end': 12},
{'word': 'الإمارات',
'score': 0.8156259655952454,
'entity': 'B-LOC',
'index': 8,
'start': 33,
'end': 41},
{'word': 'العربية',
'score': 0.890906810760498,
'entity': 'I-LOC',
'index': 9,
'start': 42,
'end': 49},
{'word': 'المتحدة',
'score': 0.8169114589691162,
'entity': 'I-LOC',
'index': 10,
'start': 50,
'end': 57}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
| {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0625\u0645\u0627\u0631\u0629 \u0623\u0628\u0648\u0638\u0628\u064a \u0647\u064a \u0625\u062d\u062f\u0649 \u0625\u0645\u0627\u0631\u0627\u062a \u062f\u0648\u0644\u0629 \u0627\u0644\u0625\u0645\u0627\u0631\u0627\u062a \u0627\u0644\u0639\u0631\u0628\u064a\u0629 \u0627\u0644\u0645\u062a\u062d\u062f\u0629 \u0627\u0644\u0633\u0628\u0639"}]} | CAMeL-Lab/bert-base-arabic-camelbert-msa-ner | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| # CAMeLBERT MSA NER Model
## Model description
CAMeLBERT MSA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.
For the fine-tuning, we used the ANERcorp dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.
"* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT MSA NER model directly as part of our CAMeL Tools NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the CAMeL Tools NER component:
You can also use the NER model directly with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT MSA NER Model",
"## Model description\nCAMeLBERT MSA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.\nFor the fine-tuning, we used the ANERcorp dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\n\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT MSA NER model directly as part of our CAMeL Tools NER component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools NER component:\n\n\nYou can also use the NER model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# CAMeLBERT MSA NER Model",
"## Model description\nCAMeLBERT MSA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.\nFor the fine-tuning, we used the ANERcorp dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\n\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT MSA NER model directly as part of our CAMeL Tools NER component (*recommended*) or as part of the transformers pipeline.",
"#### How to use\nTo use the model with the CAMeL Tools NER component:\n\n\nYou can also use the NER model directly with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
text-classification | transformers | # CAMeLBERT-MSA Poetry Classification Model
## Model description
**CAMeLBERT-MSA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9914996027946472},
{'label': 'الكامل', 'score': 0.917242169380188}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0627\u0644\u062e\u064a\u0644 \u0648\u0627\u0644\u0644\u064a\u0644 \u0648\u0627\u0644\u0628\u064a\u062f\u0627\u0621 \u062a\u0639\u0631\u0641\u0646\u064a [SEP] \u0648\u0627\u0644\u0633\u064a\u0641 \u0648\u0627\u0644\u0631\u0645\u062d \u0648\u0627\u0644\u0642\u0631\u0637\u0627\u0633 \u0648\u0627\u0644\u0642\u0644\u0645"}]} | CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1905.05700",
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #text-classification #ar #arxiv-1905.05700 #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-MSA Poetry Classification Model
## Model description
CAMeLBERT-MSA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.
For the fine-tuning, we used the APCD dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-MSA Poetry Classification model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-MSA Poetry Classification Model",
"## Model description\nCAMeLBERT-MSA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.\nFor the fine-tuning, we used the APCD dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-MSA Poetry Classification model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #text-classification #ar #arxiv-1905.05700 #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-MSA Poetry Classification Model",
"## Model description\nCAMeLBERT-MSA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.\nFor the fine-tuning, we used the APCD dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-MSA Poetry Classification model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
token-classification | transformers | # CAMeLBERT-MSA POS-EGY Model
## Model description
**CAMeLBERT-MSA POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy')
>>> text = 'عامل ايه ؟'
>>> pos(text)
[{'entity': 'adj', 'score': 0.99979395, 'index': 1, 'word': 'عامل', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.998192, 'index': 2, 'word': 'ايه', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.99929804, 'index': 3, 'word': '؟', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0639\u0627\u0645\u0644 \u0627\u064a\u0647 \u061f"}]} | CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-MSA POS-EGY Model
## Model description
CAMeLBERT-MSA POS-EGY Model is an Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."* Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-MSA POS-EGY model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-MSA POS-EGY Model",
"## Model description\nCAMeLBERT-MSA POS-EGY Model is a Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.\nFor the fine-tuning, we used the ARZTB dataset .\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-MSA POS-EGY model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-MSA POS-EGY Model",
"## Model description\nCAMeLBERT-MSA POS-EGY Model is a Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.\nFor the fine-tuning, we used the ARZTB dataset .\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"* Our fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-MSA POS-EGY model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |
token-classification | transformers | # CAMeLBERT-MSA POS-GLF Model
## Model description
**CAMeLBERT-MSA POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA POS-GLF model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf')
>>> text = 'شلونك ؟ شخبارك ؟'
>>> pos(text)
[{'entity': 'adv_interrog', 'score': 0.5622676, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'prep', 'score': 0.99969727, 'index': 2, 'word': '##ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999299, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.9843815, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.9998467, 'index': 5, 'word': '##خبار', 'start': 9, 'end': 13}, {'entity': 'prep', 'score': 0.9993611, 'index': 6, 'word': '##ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.99993765, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | {"language": ["ar"], "license": "apache-2.0", "widget": [{"text": "\u0634\u0644\u0648\u0646\u0643 \u061f \u0634\u062e\u0628\u0627\u0631\u0643 \u061f"}]} | CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2103.06678"
] | [
"ar"
] | TAGS
#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # CAMeLBERT-MSA POS-GLF Model
## Model description
CAMeLBERT-MSA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.
For the fine-tuning, we used the Gumar dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."*
Our fine-tuning code can be found here.
## Intended uses
You can use the CAMeLBERT-MSA POS-GLF model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon.
#### How to use
To use the model with a transformers pipeline:
*Note*: to download our models, you would need 'transformers>=3.5.0'.
Otherwise, you could download the models manually.
| [
"# CAMeLBERT-MSA POS-GLF Model",
"## Model description\nCAMeLBERT-MSA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.\nFor the fine-tuning, we used the Gumar dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"*\nOur fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-MSA POS-GLF model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] | [
"TAGS\n#transformers #pytorch #tf #bert #token-classification #ar #arxiv-2103.06678 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CAMeLBERT-MSA POS-GLF Model",
"## Model description\nCAMeLBERT-MSA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.\nFor the fine-tuning, we used the Gumar dataset.\nOur fine-tuning procedure and the hyperparameters we used can be found in our paper *\"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models.\"*\nOur fine-tuning code can be found here.",
"## Intended uses\nYou can use the CAMeLBERT-MSA POS-GLF model as part of the transformers pipeline.\nThis model will also be available in CAMeL Tools soon.",
"#### How to use\nTo use the model with a transformers pipeline:\n\n*Note*: to download our models, you would need 'transformers>=3.5.0'.\nOtherwise, you could download the models manually."
] |