pipeline_tag
stringclasses
48 values
library_name
stringclasses
198 values
text
stringlengths
1
900k
metadata
stringlengths
2
438k
id
stringlengths
5
122
last_modified
null
tags
sequencelengths
1
1.84k
sha
null
created_at
stringlengths
25
25
arxiv
sequencelengths
0
201
languages
sequencelengths
0
1.83k
tags_str
stringlengths
17
9.34k
text_str
stringlengths
0
389k
text_lists
sequencelengths
0
722
processed_texts
sequencelengths
1
723
text2text-generation
transformers
**Using Hugging Face Transformers for the question generation task** ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-QuestionGeneration") tokenizer = AutoTokenizer.from_pretrained('google/pegasus-large') input_text = "..."  # your input text input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True, padding='longest', return_tensors='pt') input_ids = input_['input_ids'] input_mask = input_['attention_mask'] questions = model.generate(input_ids=input_ids, attention_mask=input_mask, num_beams=32, no_repeat_ngram_size=2, early_stopping=True, num_return_sequences=10) questions = tokenizer.batch_decode(questions, skip_special_tokens=True) ``` **Decoder configuration examples:** [**The input text can be found here**](https://www.bbc.com/news/science-environment-59775105) ``` questions = model.generate(input_ids=input_ids, attention_mask=input_mask, num_beams=32, no_repeat_ngram_size=2, early_stopping=True, num_return_sequences=10) tokenizer.batch_decode(questions, skip_special_tokens=True) ``` output: 1. *What is the impact of human induced climate change on tropical cyclones?* 2. *What is the impact of climate change on tropical cyclones?* 3. *What is the impact of human induced climate change on tropical cyclone formation?* 4. *How many tropical cyclones will occur in the mid-latitudes?* 5. *What is the impact of climate change on the formation of tropical cyclones?* 6. *Is it possible for a tropical cyclone to form in the middle latitudes?* 7. *How many tropical cyclones will be formed in the mid-latitudes?* 8. *How many tropical cyclones will there be in the mid-latitudes?* 9. *How many tropical cyclones will form in the mid-latitudes?* 10. *What is the impact of global warming on tropical cyclones?* 11. *How long does it take for a tropical cyclone to form?* 12. *What are the impacts of climate change on tropical cyclones?* 13. *What are the effects of climate change on tropical cyclones?* 14. *How many tropical cyclones will be able to form in the middle latitudes?* 15. *What is the impact of climate change on tropical cyclone formation?* 16. *What is the effect of climate change on tropical cyclones?* 17. *How long does it take for a tropical cyclone to form in the middle latitude?* 18. *How many tropical cyclones will occur in the middle latitudes?* 19. *How many tropical cyclones are likely to form in the midlatitudes?* 20. *How many tropical cyclones are likely to form in the middle latitudes?* 21. *How many tropical cyclones are expected to form in the midlatitudes?* 22. *How many tropical cyclones will be formed in the middle latitudes?* 23. *How many tropical cyclones will there be in the middle latitudes?* 24. *How long will it take for a tropical cyclone to form in the middle latitude?* 25. *What is the impact of global warming on tropical cyclone formation?* 26. *How many tropical cyclones will form in the middle latitudes?* 27. *How many tropical cyclones can we expect to form in the middle latitudes?* 28. *Is it possible for a tropical cyclone to form in the middle latitude?* 29. *What is the effect of climate change on tropical cyclone formation?* 30. *What are the effects of climate change on tropical cyclone formation?* You can also experiment with the following parameters of the `generate` method: `top_k`, `top_p`. [**The meaning of these text generation parameters is explained here**](https://huggingface.co/blog/how-to-generate)
{}
AlekseyKulnevich/Pegasus-QuestionGeneration
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
Using Hugging Face Transformers for the question generation task Decoder configuration examples: The input text can be found here output: 1. *What is the impact of human induced climate change on tropical cyclones?* 2. *What is the impact of climate change on tropical cyclones?* 3. *What is the impact of human induced climate change on tropical cyclone formation?* 4. *How many tropical cyclones will occur in the mid-latitudes?* 5. *What is the impact of climate change on the formation of tropical cyclones?* 6. *Is it possible for a tropical cyclone to form in the middle latitudes?* 7. *How many tropical cyclones will be formed in the mid-latitudes?* 8. *How many tropical cyclones will there be in the mid-latitudes?* 9. *How many tropical cyclones will form in the mid-latitudes?* 10. *What is the impact of global warming on tropical cyclones?* 11. *How long does it take for a tropical cyclone to form?* 12. *What are the impacts of climate change on tropical cyclones?* 13. *What are the effects of climate change on tropical cyclones?* 14. *How many tropical cyclones will be able to form in the middle latitudes?* 15. *What is the impact of climate change on tropical cyclone formation?* 16. *What is the effect of climate change on tropical cyclones?* 17. *How long does it take for a tropical cyclone to form in the middle latitude?* 18. *How many tropical cyclones will occur in the middle latitudes?* 19. *How many tropical cyclones are likely to form in the midlatitudes?* 20. *How many tropical cyclones are likely to form in the middle latitudes?* 21. *How many tropical cyclones are expected to form in the midlatitudes?* 22. *How many tropical cyclones will be formed in the middle latitudes?* 23. *How many tropical cyclones will there be in the middle latitudes?* 24. *How long will it take for a tropical cyclone to form in the middle latitude?* 25. *What is the impact of global warming on tropical cyclone formation?* 26. *How many tropical cyclones will form in the middle latitudes?* 27. *How many tropical cyclones can we expect to form in the middle latitudes?* 28. *Is it possible for a tropical cyclone to form in the middle latitude?* 29. *What is the effect of climate change on tropical cyclone formation?* 30. *What are the effects of climate change on tropical cyclone formation?* You can also experiment with the following parameters of the generate method: top_k and top_p. The meaning of these text generation parameters is explained here
[]
[ "TAGS\n#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n" ]
text2text-generation
transformers
**Using Hugging Face Transformers for the summarization task** ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-Summarization") tokenizer = AutoTokenizer.from_pretrained('google/pegasus-large') input_text = "..."  # your input text input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True, padding='longest', return_tensors='pt') input_ids = input_['input_ids'] input_mask = input_['attention_mask'] summary = model.generate(input_ids=input_ids, attention_mask=input_mask, num_beams=32, min_length=100, no_repeat_ngram_size=2, early_stopping=True, num_return_sequences=10) summary_text = tokenizer.batch_decode(summary, skip_special_tokens=True) ``` **Decoder configuration examples:** [**The input text can be found here**](https://www.bbc.com/news/science-environment-59775105) ``` summary = model.generate(input_ids=input_ids, attention_mask=input_mask, num_beams=32, min_length=100, no_repeat_ngram_size=2, early_stopping=True, num_return_sequences=1) tokenizer.batch_decode(summary, skip_special_tokens=True) ``` output: 1. *global warming will expand the range of tropical cyclones in the mid-latitudes of the world, according to a new study published by the Intergovernmental Panel on Climate change (IPCC) and the US National Oceanic and Atmospheric Administration (NOAA) The study shows that a warming climate will allow more of these types of storms to form over a wider range than they have been able to do over the past three million years. "As the climate warms, it's likely that these storms will become more frequent and more intense," said the authors of this study.* ``` summary = model.generate(input_ids=input_ids, attention_mask=input_mask, top_k=30, no_repeat_ngram_size=2, early_stopping=True, min_length=100, num_return_sequences=1) tokenizer.batch_decode(summary, skip_special_tokens=True) ``` output: 1. *tropical cyclones in the mid-latitudes of the world will likely form more of these types of storms, according to a new study published by the Intergovernmental Panel on Climate change (IPCC) on the impact of human induced climate change on these storms. The study shows that a warming climate will increase the likelihood of a subtropical cyclone forming over a wider range of latitudes, including the equator, than it has been for the past three million years, and that it will be more likely to form over the tropics.* You can also experiment with the following parameters of the `generate` method: `top_k`, `top_p`. [**The meaning of these text generation parameters is explained here**](https://huggingface.co/blog/how-to-generate)
{}
AlekseyKulnevich/Pegasus-Summarization
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
Using Hugging Face Transformers for the summarization task Decoder configuration examples: The input text can be found here output: 1. *global warming will expand the range of tropical cyclones in the mid-latitudes of the world, according to a new study published by the Intergovernmental Panel on Climate change (IPCC) and the US National Oceanic and Atmospheric Administration (NOAA) The study shows that a warming climate will allow more of these types of storms to form over a wider range than they have been able to do over the past three million years. "As the climate warms, it's likely that these storms will become more frequent and more intense," said the authors of this study.* output: 1. *tropical cyclones in the mid-latitudes of the world will likely form more of these types of storms, according to a new study published by the Intergovernmental Panel on Climate change (IPCC) on the impact of human induced climate change on these storms. The study shows that a warming climate will increase the likelihood of a subtropical cyclone forming over a wider range of latitudes, including the equator, than it has been for the past three million years, and that it will be more likely to form over the tropics.* You can also experiment with the following parameters of the generate method: top_k and top_p. The meaning of these text generation parameters is explained here
[]
[ "TAGS\n#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
This is a fine-tuned version of GPT-2, trained on the entire corpus of Plato's works. By generating text samples, you should be able to produce ancient Greek philosophy on the fly!
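The card above explains what the model does but includes no usage snippet. A minimal, illustrative sketch (an assumption, not part of the original card) could look like the following; it presumes the checkpoint exposes a language-modeling head so that it loads with the standard `text-generation` pipeline:

```python
# Illustrative sketch only: assumes Alerosae/SocratesGPT-2 loads with a
# language-modeling head under the standard text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="Alerosae/SocratesGPT-2")

samples = generator(
    "The Gods",               # one of the widget prompts listed in the card metadata
    max_new_tokens=60,        # length of the generated continuation
    do_sample=True,           # sample rather than decode greedily
    top_k=50,
    num_return_sequences=3,
)
for s in samples:
    print(s["generated_text"])
```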
{"language": "en", "tags": ["text-generation"], "pipeline_tag": "text-generation", "widget": [{"text": "The Gods"}, {"text": "What is"}]}
Alerosae/SocratesGPT-2
null
[ "transformers", "pytorch", "gpt2", "feature-extraction", "text-generation", "en", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #gpt2 #feature-extraction #text-generation #en #endpoints_compatible #text-generation-inference #region-us
This is a fine-tuned version of GPT-2, trained on the entire corpus of Plato's works. By generating text samples, you should be able to produce ancient Greek philosophy on the fly!
[]
[ "TAGS\n#transformers #pytorch #gpt2 #feature-extraction #text-generation #en #endpoints_compatible #text-generation-inference #region-us \n" ]
question-answering
transformers
# XLM-RoBERTa large model with whole word masking, fine-tuned on SQuAD Pretrained with a masked language modeling (MLM) objective and fine-tuned on English and Russian QA datasets. ## Used QA Datasets SQuAD + SberQuAD The [original SberQuAD paper](https://arxiv.org/pdf/1912.09723.pdf) is recommended reading! ## Evaluation results The results obtained on SberQuAD are the following: ``` f1 = 84.3 exact_match = 65.3 ```
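The card does not include an inference example; a minimal sketch (assumed, not from the original card) using the `question-answering` pipeline could look like the following. Because the model was fine-tuned on SQuAD and SberQuAD, both English and Russian inputs should work:

```python
# Illustrative sketch: extractive QA with the question-answering pipeline.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru",
)

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.",
)
print(result["answer"], round(result["score"], 3))
```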
{"language": ["en", "ru", "multilingual"], "license": "apache-2.0"}
AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru
null
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "en", "ru", "multilingual", "arxiv:1912.09723", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1912.09723" ]
[ "en", "ru", "multilingual" ]
TAGS #transformers #pytorch #xlm-roberta #question-answering #en #ru #multilingual #arxiv-1912.09723 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# XLM-RoBERTa large model with whole word masking, fine-tuned on SQuAD Pretrained with a masked language modeling (MLM) objective and fine-tuned on English and Russian QA datasets ## Used QA Datasets SQuAD + SberQuAD The original SberQuAD paper is recommended reading! ## Evaluation results The results obtained on SberQuAD are the following: ''' f1 = 84.3 exact_match = 65.3
[ "# XLM-RoBERTa large model whole word masking finetuned on SQuAD\nPretrained model using a masked language modeling (MLM) objective. \nFine tuned on English and Russian QA datasets", "## Used QA Datasets\nSQuAD + SberQuAD\n\nSberQuAD original paper is here! Recommend to read!", "## Evaluation results\nThe results obtained are the following (SberQUaD):\n'''\nf1 = 84.3\nexact_match = 65.3" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #question-answering #en #ru #multilingual #arxiv-1912.09723 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# XLM-RoBERTa large model whole word masking finetuned on SQuAD\nPretrained model using a masked language modeling (MLM) objective. \nFine tuned on English and Russian QA datasets", "## Used QA Datasets\nSQuAD + SberQuAD\n\nSberQuAD original paper is here! Recommend to read!", "## Evaluation results\nThe results obtained are the following (SberQUaD):\n'''\nf1 = 84.3\nexact_match = 65.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentence-compression-roberta This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3465 - Accuracy: 0.8473 - F1: 0.6835 - Precision: 0.6835 - Recall: 0.6835 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.5312 | 1.0 | 50 | 0.5251 | 0.7591 | 0.0040 | 0.75 | 0.0020 | | 0.4 | 2.0 | 100 | 0.4003 | 0.8200 | 0.5341 | 0.7113 | 0.4275 | | 0.3355 | 3.0 | 150 | 0.3465 | 0.8473 | 0.6835 | 0.6835 | 0.6835 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
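The card lists metrics but no usage example. A minimal sketch (an assumption, not part of the original card) that tags tokens with the `token-classification` pipeline is shown below; the label scheme is not documented in the card, so inspect the output before relying on it:

```python
# Illustrative sketch: word-level tags from the sentence-compression model.
# The meaning of the predicted labels is not documented in the card, so this
# only shows how to obtain them.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="AlexMaclean/sentence-compression-roberta",
    aggregation_strategy="simple",  # merge sub-word pieces into word spans
)

sentence = "The committee, after much deliberation, finally approved the new budget proposal."
for span in tagger(sentence):
    print(span["word"], span["entity_group"], round(span["score"], 3))
```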
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "sentence-compression-roberta", "results": []}]}
AlexMaclean/sentence-compression-roberta
null
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
sentence-compression-roberta ============================ This model is a fine-tuned version of roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.3465 * Accuracy: 0.8473 * F1: 0.6835 * Precision: 0.6835 * Recall: 0.6835 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu113 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentence-compression This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2973 - Accuracy: 0.8912 - F1: 0.8367 - Precision: 0.8495 - Recall: 0.8243 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.2686 | 1.0 | 10000 | 0.2667 | 0.8894 | 0.8283 | 0.8725 | 0.7884 | | 0.2205 | 2.0 | 20000 | 0.2704 | 0.8925 | 0.8372 | 0.8579 | 0.8175 | | 0.1476 | 3.0 | 30000 | 0.2973 | 0.8912 | 0.8367 | 0.8495 | 0.8243 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "sentence-compression", "results": []}]}
AlexMaclean/sentence-compression
null
[ "transformers", "pytorch", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
sentence-compression ==================== This model is a fine-tuned version of distilbert-base-cased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.2973 * Accuracy: 0.8912 * F1: 0.8367 * Precision: 0.8495 * Recall: 0.8243 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu113 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset. It achieves the following results on the evaluation set: - Loss: 0.2388 - Wer: 0.3681 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 4.3748 | 0.07 | 500 | 3.8784 | 1.0 | | 2.8068 | 0.14 | 1000 | 2.8289 | 0.9826 | | 1.6698 | 0.22 | 1500 | 0.8811 | 0.7127 | | 1.3488 | 0.29 | 2000 | 0.5166 | 0.5369 | | 1.2239 | 0.36 | 2500 | 0.4105 | 0.4741 | | 1.1537 | 0.43 | 3000 | 0.3585 | 0.4448 | | 1.1184 | 0.51 | 3500 | 0.3336 | 0.4292 | | 1.0968 | 0.58 | 4000 | 0.3195 | 0.4180 | | 1.0737 | 0.65 | 4500 | 0.3075 | 0.4141 | | 1.0677 | 0.72 | 5000 | 0.3015 | 0.4089 | | 1.0462 | 0.8 | 5500 | 0.2971 | 0.4077 | | 1.0392 | 0.87 | 6000 | 0.2870 | 0.3997 | | 1.0178 | 0.94 | 6500 | 0.2805 | 0.3963 | | 0.992 | 1.01 | 7000 | 0.2748 | 0.3935 | | 1.0197 | 1.09 | 7500 | 0.2691 | 0.3884 | | 1.0056 | 1.16 | 8000 | 0.2682 | 0.3889 | | 0.9826 | 1.23 | 8500 | 0.2647 | 0.3868 | | 0.9815 | 1.3 | 9000 | 0.2603 | 0.3832 | | 0.9717 | 1.37 | 9500 | 0.2561 | 0.3807 | | 0.9605 | 1.45 | 10000 | 0.2523 | 0.3783 | | 0.96 | 1.52 | 10500 | 0.2494 | 0.3788 | | 0.9442 | 1.59 | 11000 | 0.2478 | 0.3760 | | 0.9564 | 1.66 | 11500 | 0.2454 | 0.3733 | | 0.9436 | 1.74 | 12000 | 0.2439 | 0.3747 | | 0.938 | 1.81 | 12500 | 0.2411 | 0.3716 | | 0.9353 | 1.88 | 13000 | 0.2397 | 0.3698 | | 0.9271 | 1.95 | 13500 | 0.2388 | 0.3681 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
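For completeness, a minimal inference sketch (not part of the original card) with the `automatic-speech-recognition` pipeline; it assumes a 16 kHz French recording and that ffmpeg is available for audio decoding:

```python
# Illustrative sketch: transcribe a short French audio clip with the fine-tuned model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="AlexN/xls-r-300m-fr-0")

# "audio.wav" is a placeholder path to a 16 kHz French speech recording.
print(asr("audio.wav")["text"])
```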
{"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "xls-r-300m-fr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8.0 fr", "type": "mozilla-foundation/common_voice_8_0", "args": "fr"}, "metrics": [{"type": "wer", "value": 36.81, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 35.55, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 39.94, "name": "Test WER"}]}]}]}
AlexN/xls-r-300m-fr-0
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "fr", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #fr #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - FR dataset. It achieves the following results on the evaluation set: * Loss: 0.2388 * Wer: 0.3681 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1500 * num\_epochs: 2.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 2.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #fr #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 2.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2700 - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
{"language": ["fr"], "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "xls-r-300m-fr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8.0 fr", "type": "mozilla-foundation/common_voice_8_0", "args": "fr"}, "metrics": [{"type": "wer", "value": 21.58, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 36.03, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 38.86, "name": "Test WER"}]}]}]}
AlexN/xls-r-300m-fr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "fr", "dataset:mozilla-foundation/common_voice_8_0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #fr #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us
# This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2700 - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
[ "# \n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2700\n- num_epochs: 1.0\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.2.dev0\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #fr #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us \n", "# \n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2700\n- num_epochs: 1.0\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.2.dev0\n- Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PT dataset. It achieves the following results on the evaluation set: - Loss: 0.2290 - Wer: 0.2382 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.0952 | 0.64 | 500 | 3.0982 | 1.0 | | 1.7975 | 1.29 | 1000 | 0.7887 | 0.5651 | | 1.4138 | 1.93 | 1500 | 0.5238 | 0.4389 | | 1.344 | 2.57 | 2000 | 0.4775 | 0.4318 | | 1.2737 | 3.21 | 2500 | 0.4648 | 0.4075 | | 1.2554 | 3.86 | 3000 | 0.4069 | 0.3678 | | 1.1996 | 4.5 | 3500 | 0.3914 | 0.3668 | | 1.1427 | 5.14 | 4000 | 0.3694 | 0.3572 | | 1.1372 | 5.78 | 4500 | 0.3568 | 0.3501 | | 1.0831 | 6.43 | 5000 | 0.3331 | 0.3253 | | 1.1074 | 7.07 | 5500 | 0.3332 | 0.3352 | | 1.0536 | 7.71 | 6000 | 0.3131 | 0.3152 | | 1.0248 | 8.35 | 6500 | 0.3024 | 0.3023 | | 1.0075 | 9.0 | 7000 | 0.2948 | 0.3028 | | 0.979 | 9.64 | 7500 | 0.2796 | 0.2853 | | 0.9594 | 10.28 | 8000 | 0.2719 | 0.2789 | | 0.9172 | 10.93 | 8500 | 0.2620 | 0.2695 | | 0.9047 | 11.57 | 9000 | 0.2537 | 0.2596 | | 0.8777 | 12.21 | 9500 | 0.2438 | 0.2525 | | 0.8629 | 12.85 | 10000 | 0.2409 | 0.2493 | | 0.8575 | 13.5 | 10500 | 0.2366 | 0.2440 | | 0.8361 | 14.14 | 11000 | 0.2317 | 0.2385 | | 0.8126 | 14.78 | 11500 | 0.2290 | 0.2382 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
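As with the French checkpoints above, this card contains only training details. A minimal sketch (assumed, not from the original card) showing chunked long-form transcription, which is useful for CTC models like this one on recordings longer than a few seconds:

```python
# Illustrative sketch: long-form Portuguese transcription with chunking.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="AlexN/xls-r-300m-pt",
    chunk_length_s=30,   # split long audio into 30 s chunks
    stride_length_s=5,   # overlap chunks to avoid cutting words at boundaries
)

# "longo_audio.wav" is a placeholder path to a 16 kHz Portuguese recording.
print(asr("longo_audio.wav")["text"])
```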
{"language": ["pt"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "robust-speech-event", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "xls-r-300m-pt", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8.0 pt", "type": "mozilla-foundation/common_voice_8_0", "args": "pt"}, "metrics": [{"type": "wer", "value": 19.361, "name": "Test WER"}, {"type": "cer", "value": 5.533, "name": "Test CER"}, {"type": "wer", "value": 19.36, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 47.812, "name": "Validation WER"}, {"type": "cer", "value": 18.805, "name": "Validation CER"}, {"type": "wer", "value": 48.01, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "pt"}, "metrics": [{"type": "wer", "value": 49.21, "name": "Test WER"}]}]}]}
AlexN/xls-r-300m-pt
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "robust-speech-event", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hf-asr-leaderboard", "pt", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "pt" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #robust-speech-event #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hf-asr-leaderboard #pt #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - PT dataset. It achieves the following results on the evaluation set: * Loss: 0.2290 * Wer: 0.2382 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1500 * num\_epochs: 15.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #robust-speech-event #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hf-asr-leaderboard #pt #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cola This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.7552 - Matthews Correlation: 0.5495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
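The card reports the CoLA metric but no usage example. A minimal sketch (not part of the original card) for scoring linguistic acceptability; the label names come from the model config, so they are printed rather than assumed:

```python
# Illustrative sketch: classify sentences for linguistic acceptability (CoLA).
from transformers import pipeline

classifier = pipeline("text-classification", model="Alireza1044/albert-base-v2-cola")

print(classifier("The book was written by the author."))
print(classifier("The book was wrote by the author quickly yesterday tomorrow."))
```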
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model_index": [{"name": "cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "args": "cola"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.5494768667363472}}]}]}
Alireza1044/albert-base-v2-cola
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# cola This model is a fine-tuned version of albert-base-v2 on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.7552 - Matthews Correlation: 0.5495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# cola\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE COLA dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7552\n- Matthews Correlation: 0.5495", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# cola\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE COLA dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7552\n- Matthews Correlation: 0.5495", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mnli This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.5383 - Accuracy: 0.8501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
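Since MNLI is a sentence-pair task, calling the model directly makes the premise/hypothesis input explicit. A minimal sketch (an assumption, not from the original card); the id-to-label mapping is read from the model config rather than hard-coded:

```python
# Illustrative sketch: NLI scoring of a premise/hypothesis pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "Alireza1044/albert-base-v2-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```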
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "mnli", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE MNLI", "type": "glue", "args": "mnli"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.8500813669650122}}]}]}
Alireza1044/albert-base-v2-mnli
null
[ "transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# mnli This model is a fine-tuned version of albert-base-v2 on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.5383 - Accuracy: 0.8501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# mnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE MNLI dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.5383\n- Accuracy: 0.8501", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 64\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.1\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# mnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE MNLI dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.5383\n- Accuracy: 0.8501", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 64\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.1\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mrpc This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.4171 - Accuracy: 0.8627 - F1: 0.9011 - Combined Score: 0.8819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model_index": [{"name": "mrpc", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE MRPC", "type": "glue", "args": "mrpc"}, "metric": {"name": "F1", "type": "f1", "value": 0.901060070671378}}]}]}
Alireza1044/albert-base-v2-mrpc
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# mrpc This model is a fine-tuned version of albert-base-v2 on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.4171 - Accuracy: 0.8627 - F1: 0.9011 - Combined Score: 0.8819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# mrpc\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE MRPC dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4171\n- Accuracy: 0.8627\n- F1: 0.9011\n- Combined Score: 0.8819", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# mrpc\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE MRPC dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4171\n- Accuracy: 0.8627\n- F1: 0.9011\n- Combined Score: 0.8819", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qnli This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3608 - Accuracy: 0.9138 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "qnli", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE QNLI", "type": "glue", "args": "qnli"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9137836353651839}}]}]}
Alireza1044/albert-base-v2-qnli
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# qnli This model is a fine-tuned version of albert-base-v2 on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3608 - Accuracy: 0.9138 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# qnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE QNLI dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3608\n- Accuracy: 0.9138", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.1\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# qnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE QNLI dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3608\n- Accuracy: 0.9138", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.1\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qqp This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.3695 - Accuracy: 0.9050 - F1: 0.8723 - Combined Score: 0.8886 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
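A minimal sketch of scoring a question pair with this checkpoint (`Alireza1044/albert-base-v2-qqp`) through the Transformers `text-classification` pipeline; the two questions are hypothetical and the label names come from the checkpoint's config:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Alireza1044/albert-base-v2-qqp")

# QQP asks whether two questions are duplicates; pass them as a text/text_pair pair.
result = classifier([{
    "text": "How do I learn Python quickly?",
    "text_pair": "What is the fastest way to learn Python?",
}])
print(result)  # one label/score entry per pair; label names depend on the model config
```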
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model_index": [{"name": "qqp", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE QQP", "type": "glue", "args": "qqp"}, "metric": {"name": "F1", "type": "f1", "value": 0.8722569490623753}}]}]}
Alireza1044/albert-base-v2-qqp
null
[ "transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# qqp This model is a fine-tuned version of albert-base-v2 on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.3695 - Accuracy: 0.9050 - F1: 0.8723 - Combined Score: 0.8886 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# qqp\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE QQP dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3695\n- Accuracy: 0.9050\n- F1: 0.8723\n- Combined Score: 0.8886", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 64\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.1\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# qqp\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE QQP dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3695\n- Accuracy: 0.9050\n- F1: 0.8723\n- Combined Score: 0.8886", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 64\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.1\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rte This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.7994 - Accuracy: 0.6859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "rte", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE RTE", "type": "glue", "args": "rte"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.6859205776173285}}]}]}
Alireza1044/albert-base-v2-rte
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# rte This model is a fine-tuned version of albert-base-v2 on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.7994 - Accuracy: 0.6859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# rte\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE RTE dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7994\n- Accuracy: 0.6859", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# rte\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE RTE dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7994\n- Accuracy: 0.6859", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sst2 This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.3808 - Accuracy: 0.9232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
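A minimal sentiment-inference sketch for this checkpoint (`Alireza1044/albert-base-v2-sst2`) via the Transformers pipeline; the review snippets are hypothetical and the label names depend on the checkpoint's config:

```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="Alireza1044/albert-base-v2-sst2")

# Each call returns a list like [{"label": ..., "score": ...}].
print(sentiment("a touching and well-acted film"))
print(sentiment("the plot goes nowhere and the jokes fall flat"))
```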
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "sst2", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE SST2", "type": "glue", "args": "sst2"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9231651376146789}}]}]}
Alireza1044/albert-base-v2-sst2
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# sst2 This model is a fine-tuned version of albert-base-v2 on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.3808 - Accuracy: 0.9232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# sst2\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE SST2 dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3808\n- Accuracy: 0.9232", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# sst2\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE SST2 dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3808\n- Accuracy: 0.9232", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # stsb This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 0.3978 - Pearson: 0.9090 - Spearmanr: 0.9051 - Combined Score: 0.9071 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
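Since STS-B is a regression task, the fine-tuned head emits a single similarity score rather than class logits. A minimal sketch (the sentence pair is hypothetical; the score range follows the GLUE STS-B convention of roughly 0 to 5):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Alireza1044/albert-base-v2-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A man is playing a guitar.",
                   "A person plays an instrument.",
                   return_tensors="pt")
with torch.no_grad():
    # For a regression head the logits tensor holds one value: the similarity score.
    score = model(**inputs).logits.squeeze().item()
print(f"similarity ~ {score:.2f}")
```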
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["spearmanr"], "model_index": [{"name": "stsb", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE STSB", "type": "glue", "args": "stsb"}, "metric": {"name": "Spearmanr", "type": "spearmanr", "value": 0.9050744778895732}}]}]}
Alireza1044/albert-base-v2-stsb
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# stsb This model is a fine-tuned version of albert-base-v2 on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 0.3978 - Pearson: 0.9090 - Spearmanr: 0.9051 - Combined Score: 0.9071 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# stsb\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE STSB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3978\n- Pearson: 0.9090\n- Spearmanr: 0.9051\n- Combined Score: 0.9071", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 64\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# stsb\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE STSB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3978\n- Pearson: 0.9090\n- Spearmanr: 0.9051\n- Combined Score: 0.9071", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 64\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wnli This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6898 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "wnli", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE WNLI", "type": "glue", "args": "wnli"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.5633802816901409}}]}]}
Alireza1044/albert-base-v2-wnli
null
[ "transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# wnli This model is a fine-tuned version of albert-base-v2 on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6898 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# wnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE WNLI dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6898\n- Accuracy: 0.5634", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.1\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# wnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE WNLI dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6898\n- Accuracy: 0.5634", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Training results", "### Framework versions\n\n- Transformers 4.9.1\n- Pytorch 1.9.0+cu102\n- Datasets 1.10.2\n- Tokenizers 0.10.3" ]
text-classification
transformers
A simple model trained on dialogues of characters in the NBC series `The Office`. The model performs binary classification between the dialogues of `Michael Scott` and `Dwight Schrute`.

<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-c3ow" colspan="2">Label Definitions</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-c3ow">Label 0</td>
<td class="tg-c3ow">Michael</td>
</tr>
<tr>
<td class="tg-c3ow">Label 1</td>
<td class="tg-c3ow">Dwight</td>
</tr>
</tbody>
</table>
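A minimal sketch of running this classifier (`Alireza1044/bert_classification_lm`) with the Transformers pipeline; the quoted line is a hypothetical example, and unless the config defines label names the pipeline reports the generic `LABEL_0`/`LABEL_1`, which map to Michael and Dwight per the table above:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Alireza1044/bert_classification_lm")

result = clf("Bears. Beets. Battlestar Galactica.")  # hypothetical dialogue line
# Per the label table: LABEL_0 -> Michael Scott, LABEL_1 -> Dwight Schrute
print(result)
```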
{}
Alireza1044/bert_classification_lm
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
A simple model trained on dialogues of characters in the NBC series 'The Office'. The model performs binary classification between the dialogues of 'Michael Scott' and 'Dwight Schrute'.

<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-c3ow" colspan="2">Label Definitions</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-c3ow">Label 0</td>
<td class="tg-c3ow">Michael</td>
</tr>
<tr>
<td class="tg-c3ow">Label 1</td>
<td class="tg-c3ow">Dwight</td>
</tr>
</tbody>
</table>
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
#HarryBoy
{"tags": ["conversational"]}
AllwynJ/HarryBoy
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#HarryBoy
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mbart50-ft-si-en

This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0476

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.98  | 30   | 5.6367          |
| No log        | 1.98  | 60   | 4.1221          |
| No log        | 2.98  | 90   | 3.1880          |
| No log        | 3.98  | 120  | 3.1175          |
| No log        | 4.98  | 150  | 3.3575          |
| No log        | 5.98  | 180  | 3.7855          |
| No log        | 6.98  | 210  | 4.3530          |
| No log        | 7.98  | 240  | 4.7216          |
| No log        | 8.98  | 270  | 4.9202          |
| No log        | 9.98  | 300  | 5.0476          |

### Framework versions

- Transformers 4.9.2
- Pytorch 1.6.0
- Datasets 1.11.0
- Tokenizers 0.10.3
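A minimal generation sketch for this checkpoint (`Aloka/mbart50-ft-si-en`), assuming a Sinhala-to-English translation setup inferred from the model name; the input sentence is a hypothetical example, and depending on how the checkpoint was saved you may also need to set `tokenizer.src_lang` and `forced_bos_token_id`, as with stock mBART-50:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Aloka/mbart50-ft-si-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "ඔබට කොහොමද?"  # hypothetical Sinhala input ("How are you?")
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```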
{"tags": ["generated_from_trainer"], "model_index": [{"name": "mbart50-ft-si-en", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}}]}]}
Aloka/mbart50-ft-si-en
null
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #mbart #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
mbart50-ft-si-en
================

This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:

* Loss: 5.0476

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.0005
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
* mixed\_precision\_training: Native AMP

### Training results

### Framework versions

* Transformers 4.9.2
* Pytorch 1.6.0
* Datasets 1.11.0
* Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.6.0\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #mbart #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.6.0\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7272 - Matthews Correlation: 0.5343 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5219 | 1.0 | 535 | 0.5340 | 0.4215 | | 0.3467 | 2.0 | 1070 | 0.5131 | 0.5181 | | 0.2331 | 3.0 | 1605 | 0.6406 | 0.5040 | | 0.1695 | 4.0 | 2140 | 0.7272 | 0.5343 | | 0.1212 | 5.0 | 2675 | 0.8399 | 0.5230 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5343023846000738, "name": "Matthews Correlation"}]}]}]}
Alstractor/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.7272 * Matthews Correlation: 0.5343 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.12.3 * Pytorch 1.9.0+cu111 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
null
transformers
# Wav2vec2-base for Danish
This wav2vec2-base model has been pretrained on ~1300 hours of Danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not publicly available. However, we were allowed to distribute the pretrained model.

This model was pretrained on 16kHz sampled speech audio. When using the model, make sure to use speech audio sampled at 16kHz.

The pre-training was done using the fairseq library in January 2021.

It needs to be fine-tuned to perform speech recognition.

# Finetuning
In order to finetune the model for speech recognition, you can draw inspiration from this [notebook tutorial](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F) or [this blog post tutorial](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2).
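A rough sketch of the first step of such a fine-tuning run, assuming the approach from the linked tutorials: load the pretrained Danish encoder into a CTC model whose output layer matches the character vocabulary built from your own labeled data. The `VOCAB_SIZE` and `PAD_TOKEN_ID` values below are placeholders, not values from this card:

```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2FeatureExtractor

# Placeholder values -- derive them from the character vocabulary of your own dataset.
VOCAB_SIZE = 32
PAD_TOKEN_ID = 31

model = Wav2Vec2ForCTC.from_pretrained(
    "Alvenir/wav2vec2-base-da",
    ctc_loss_reduction="mean",
    pad_token_id=PAD_TOKEN_ID,
    vocab_size=VOCAB_SIZE,
)
model.freeze_feature_encoder()  # common first step: keep the convolutional feature encoder frozen

# 16 kHz mono input, matching the pretraining data.
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
```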
{"language": "da", "license": "apache-2.0", "tags": ["speech"]}
Alvenir/wav2vec2-base-da
null
[ "transformers", "pytorch", "wav2vec2", "pretraining", "speech", "da", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "da" ]
TAGS #transformers #pytorch #wav2vec2 #pretraining #speech #da #license-apache-2.0 #endpoints_compatible #region-us
# Wav2vec2-base for Danish
This wav2vec2-base model has been pretrained on ~1300 hours of Danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not publicly available. However, we were allowed to distribute the pretrained model.

This model was pretrained on 16kHz sampled speech audio. When using the model, make sure to use speech audio sampled at 16kHz.

The pre-training was done using the fairseq library in January 2021.

It needs to be fine-tuned to perform speech recognition.

# Finetuning
In order to finetune the model for speech recognition, you can draw inspiration from this notebook tutorial or this blog post tutorial.
[ "# Wav2vec2-base for Danish\nThis wav2vec2-base model has been pretrained on ~1300 hours of danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not public available. However, we were allowed to distribute the pretrained model.\n\nThis model was pretrained on 16kHz sampled speech audio. When using the model, make sure to use speech audio sampled at 16kHz.\n\nThe pre-training was done using the fairseq library in January 2021.\n\nIt needs to be fine-tuned to perform speech recognition.", "# Finetuning\nIn order to finetune the model to speech recognition, you can draw inspiration from this notebook tutorial or this blog post tutorial." ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #pretraining #speech #da #license-apache-2.0 #endpoints_compatible #region-us \n", "# Wav2vec2-base for Danish\nThis wav2vec2-base model has been pretrained on ~1300 hours of danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not public available. However, we were allowed to distribute the pretrained model.\n\nThis model was pretrained on 16kHz sampled speech audio. When using the model, make sure to use speech audio sampled at 16kHz.\n\nThe pre-training was done using the fairseq library in January 2021.\n\nIt needs to be fine-tuned to perform speech recognition.", "# Finetuning\nIn order to finetune the model to speech recognition, you can draw inspiration from this notebook tutorial or this blog post tutorial." ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-schizophreniaReddit2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7785 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 490 | 1.8093 | | 1.9343 | 2.0 | 980 | 1.7996 | | 1.8856 | 3.0 | 1470 | 1.7966 | | 1.8552 | 4.0 | 1960 | 1.7844 | | 1.8267 | 5.0 | 2450 | 1.7839 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
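A minimal masked-language-model sketch for this checkpoint (`Amalq/roberta-base-finetuned-schizophreniaReddit2`); the prompt is a hypothetical example, and RoBERTa-style tokenizers expect `<mask>` as the mask token:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Amalq/roberta-base-finetuned-schizophreniaReddit2")

# Prints the top predicted tokens for the masked position with their scores.
for pred in fill("I have been feeling very <mask> lately."):
    print(pred["token_str"], round(pred["score"], 3))
```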
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "roberta-base-finetuned-schizophreniaReddit2", "results": []}]}
Amalq/roberta-base-finetuned-schizophreniaReddit2
null
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
roberta-base-finetuned-schizophreniaReddit2 =========================================== This model is a fine-tuned version of roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.7785 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.14.1 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
question-answering
transformers
# Question Answering NLU Question Answering NLU (QANLU) is an approach that maps the NLU task into question answering, leveraging pre-trained question-answering models to perform well on few-shot settings. Instead of training an intent classifier or a slot tagger, for example, we can ask the model intent- and slot-related questions in natural language: ``` Context : Yes. No. I'm looking for a cheap flight to Boston. Question: Is the user looking to book a flight? Answer : Yes Question: Is the user asking about departure time? Answer : No Question: What price is the user looking for? Answer : cheap Question: Where is the user flying from? Answer : (empty) ``` Note the "Yes. No. " prepended in the context. Those are to allow the model to answer intent-related questions (e.g. "Is the user looking for a restaurant?"). Thus, by asking questions for each intent and slot in natural language, we can effectively construct an NLU hypothesis. For more details, please read the paper: [Language model is all you need: Natural language understanding as question answering](https://assets.amazon.science/33/ea/800419b24a09876601d8ab99bfb9/language-model-is-all-you-need-natural-language-understanding-as-question-answering.pdf). ## Model training Instructions for how to train and evaluate a QANLU model, as well as the necessary code for ATIS are in the [Amazon Science repository](https://github.com/amazon-research/question-answering-nlu). ## Intended use and limitations This model has been fine-tuned on ATIS (English) and is intended to demonstrate the power of this approach. For other domains or tasks, it should be further fine-tuned on relevant data. ## Use in transformers: ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline tokenizer = AutoTokenizer.from_pretrained("AmazonScience/qanlu", use_auth_token=True) model = AutoModelForQuestionAnswering.from_pretrained("AmazonScience/qanlu", use_auth_token=True) qa_pipeline = pipeline('question-answering', model=model, tokenizer=tokenizer) qa_input = { 'context': 'Yes. No. I want a cheap flight to Boston.', 'question': 'What is the destination?' } answer = qa_pipeline(qa_input) ``` ## Citation If you use this work, please cite: ``` @inproceedings{namazifar2021language, title={Language model is all you need: Natural language understanding as question answering}, author={Namazifar, Mahdi and Papangelis, Alexandros and Tur, Gokhan and Hakkani-T{\"u}r, Dilek}, booktitle={ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7803--7807}, year={2021}, organization={IEEE} } ``` ## License This library is licensed under the CC BY NC License.
{"language": "en", "license": "cc-by-4.0", "datasets": ["atis"], "widget": [{"context": "Yes. No. I'm looking for a cheap flight to Boston."}]}
AmazonScience/qanlu
null
[ "transformers", "pytorch", "roberta", "question-answering", "en", "dataset:atis", "license:cc-by-4.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #roberta #question-answering #en #dataset-atis #license-cc-by-4.0 #endpoints_compatible #has_space #region-us
# Question Answering NLU Question Answering NLU (QANLU) is an approach that maps the NLU task into question answering, leveraging pre-trained question-answering models to perform well on few-shot settings. Instead of training an intent classifier or a slot tagger, for example, we can ask the model intent- and slot-related questions in natural language: Note the "Yes. No. " prepended in the context. Those are to allow the model to answer intent-related questions (e.g. "Is the user looking for a restaurant?"). Thus, by asking questions for each intent and slot in natural language, we can effectively construct an NLU hypothesis. For more details, please read the paper: Language model is all you need: Natural language understanding as question answering. ## Model training Instructions for how to train and evaluate a QANLU model, as well as the necessary code for ATIS are in the Amazon Science repository. ## Intended use and limitations This model has been fine-tuned on ATIS (English) and is intended to demonstrate the power of this approach. For other domains or tasks, it should be further fine-tuned on relevant data. ## Use in transformers: If you use this work, please cite: ## License This library is licensed under the CC BY NC License.
[ "# Question Answering NLU\n\nQuestion Answering NLU (QANLU) is an approach that maps the NLU task into question answering, \nleveraging pre-trained question-answering models to perform well on few-shot settings. Instead of \ntraining an intent classifier or a slot tagger, for example, we can ask the model intent- and \nslot-related questions in natural language: \n\n\n\nNote the \"Yes. No. \" prepended in the context. Those are to allow the model to answer intent-related questions (e.g. \"Is the user looking for a restaurant?\").\n\nThus, by asking questions for each intent and slot in natural language, we can effectively construct an NLU hypothesis. For more details, please read the paper: Language model is all you need: Natural language understanding as question answering.", "## Model training\n\nInstructions for how to train and evaluate a QANLU model, as well as the necessary code for ATIS are in the Amazon Science repository.", "## Intended use and limitations\n\nThis model has been fine-tuned on ATIS (English) and is intended to demonstrate the power of this approach. For other domains or tasks, it should be further fine-tuned \non relevant data.", "## Use in transformers:\n\n\n\nIf you use this work, please cite:", "## License\n\nThis library is licensed under the CC BY NC License." ]
[ "TAGS\n#transformers #pytorch #roberta #question-answering #en #dataset-atis #license-cc-by-4.0 #endpoints_compatible #has_space #region-us \n", "# Question Answering NLU\n\nQuestion Answering NLU (QANLU) is an approach that maps the NLU task into question answering, \nleveraging pre-trained question-answering models to perform well on few-shot settings. Instead of \ntraining an intent classifier or a slot tagger, for example, we can ask the model intent- and \nslot-related questions in natural language: \n\n\n\nNote the \"Yes. No. \" prepended in the context. Those are to allow the model to answer intent-related questions (e.g. \"Is the user looking for a restaurant?\").\n\nThus, by asking questions for each intent and slot in natural language, we can effectively construct an NLU hypothesis. For more details, please read the paper: Language model is all you need: Natural language understanding as question answering.", "## Model training\n\nInstructions for how to train and evaluate a QANLU model, as well as the necessary code for ATIS are in the Amazon Science repository.", "## Intended use and limitations\n\nThis model has been fine-tuned on ATIS (English) and is intended to demonstrate the power of this approach. For other domains or tasks, it should be further fine-tuned \non relevant data.", "## Use in transformers:\n\n\n\nIf you use this work, please cite:", "## License\n\nThis library is licensed under the CC BY NC License." ]
image-classification
transformers
# indian-foods Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### idli ![idli](images/idli.jpg) #### kachori ![kachori](images/kachori.jpg) #### pani puri ![pani puri](images/pani_puri.jpg) #### samosa ![samosa](images/samosa.jpg) #### vada pav ![vada pav](images/vada_pav.jpg)
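A minimal inference sketch for this classifier (`Amrrs/indian-foods`) via the Transformers image-classification pipeline; the file path is a hypothetical example (an image URL also works), and the labels correspond to the five dish classes shown above:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Amrrs/indian-foods")

# Hypothetical local file; a URL to an image works as well.
# Returns a list of {"label": ..., "score": ...} entries over the dish classes.
print(classifier("my_plate.jpg"))
```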
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
Amrrs/indian-foods
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
# indian-foods Autogenerated by HuggingPics️ Create your own image classifier for anything by running the demo on Google Colab. Report any issues with the demo at the github repo. ## Example Images #### idli !idli #### kachori !kachori #### pani puri !pani puri #### samosa !samosa #### vada pav !vada pav
[ "# indian-foods\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### idli\n\n!idli", "#### kachori\n\n!kachori", "#### pani puri\n\n!pani puri", "#### samosa\n\n!samosa", "#### vada pav\n\n!vada pav" ]
[ "TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# indian-foods\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### idli\n\n!idli", "#### kachori\n\n!kachori", "#### pani puri\n\n!pani puri", "#### samosa\n\n!samosa", "#### vada pav\n\n!vada pav" ]
image-classification
transformers
# south-indian-foods Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### dosai ![dosai](images/dosai.jpg) #### idiyappam ![idiyappam](images/idiyappam.jpg) #### idli ![idli](images/idli.jpg) #### puttu ![puttu](images/puttu.jpg) #### vadai ![vadai](images/vadai.jpg)
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
Amrrs/south-indian-foods
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
# south-indian-foods Autogenerated by HuggingPics️ Create your own image classifier for anything by running the demo on Google Colab. Report any issues with the demo at the github repo. ## Example Images #### dosai !dosai #### idiyappam !idiyappam #### idli !idli #### puttu !puttu #### vadai !vadai
[ "# south-indian-foods\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### dosai\n\n!dosai", "#### idiyappam\n\n!idiyappam", "#### idli\n\n!idli", "#### puttu\n\n!puttu", "#### vadai\n\n!vadai" ]
[ "TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "# south-indian-foods\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### dosai\n\n!dosai", "#### idiyappam\n\n!idiyappam", "#### idli\n\n!idli", "#### puttu\n\n!puttu", "#### vadai\n\n!vadai" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Tamil

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Tamil test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference on the preprocessed audio arrays, batch by batch
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 82.94 %

## Training

The Common Voice `train`, `validation` datasets were used for training.

The script used for training can be found [here](https://colab.research.google.com/drive/1-Klkgr4f-C9SanHfVC5RhP0ELUH6TYlN?usp=sharing)
{"language": "ta", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Tamil by Amrrs", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ta", "type": "common_voice", "args": "ta"}, "metrics": [{"type": "wer", "value": 82.94, "name": "Test WER"}]}]}]}
Amrrs/wav2vec2-large-xlsr-53-tamil
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ta", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ta" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ta #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
# Wav2Vec2-Large-XLSR-53-Tamil Fine-tuned facebook/wav2vec2-large-xlsr-53 in Tamil using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Tamil test data of Common Voice. Test Result: 82.94 % ## Training The Common Voice 'train', 'validation' datasets were used for training. The script used for training can be found here
[ "# Wav2Vec2-Large-XLSR-53-Tamil\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Tamil using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the {language} test data of Common Voice.\n\n\n\n\nTest Result: 82.94 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here" ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ta #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "# Wav2Vec2-Large-XLSR-53-Tamil\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Tamil using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the {language} test data of Common Voice.\n\n\n\n\nTest Result: 82.94 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here" ]
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 479512837 - CO2 Emissions (in grams): 123.88023112815048 ## Validation Metrics - Loss: 0.6220805048942566 - Accuracy: 0.7961119332705503 - Macro F1: 0.7616345204219084 - Micro F1: 0.7961119332705503 - Weighted F1: 0.795387503907883 - Macro Precision: 0.782839455262034 - Micro Precision: 0.7961119332705503 - Weighted Precision: 0.7992606754484262 - Macro Recall: 0.7451485972167191 - Micro Recall: 0.7961119332705503 - Weighted Recall: 0.7961119332705503 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-Feedback1-479512837 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "unk", "tags": "autonlp", "datasets": ["Anamika/autonlp-data-Feedback1"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 123.88023112815048}
Anamika/autonlp-Feedback1-479512837
null
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "autonlp", "unk", "dataset:Anamika/autonlp-data-Feedback1", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "unk" ]
TAGS #transformers #pytorch #xlm-roberta #text-classification #autonlp #unk #dataset-Anamika/autonlp-data-Feedback1 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 479512837 - CO2 Emissions (in grams): 123.88023112815048 ## Validation Metrics - Loss: 0.6220805048942566 - Accuracy: 0.7961119332705503 - Macro F1: 0.7616345204219084 - Micro F1: 0.7961119332705503 - Weighted F1: 0.795387503907883 - Macro Precision: 0.782839455262034 - Micro Precision: 0.7961119332705503 - Weighted Precision: 0.7992606754484262 - Macro Recall: 0.7451485972167191 - Micro Recall: 0.7961119332705503 - Weighted Recall: 0.7961119332705503 ## Usage You can use cURL to access this model: Or Python API:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 479512837\n- CO2 Emissions (in grams): 123.88023112815048", "## Validation Metrics\n\n- Loss: 0.6220805048942566\n- Accuracy: 0.7961119332705503\n- Macro F1: 0.7616345204219084\n- Micro F1: 0.7961119332705503\n- Weighted F1: 0.795387503907883\n- Macro Precision: 0.782839455262034\n- Micro Precision: 0.7961119332705503\n- Weighted Precision: 0.7992606754484262\n- Macro Recall: 0.7451485972167191\n- Micro Recall: 0.7961119332705503\n- Weighted Recall: 0.7961119332705503", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #text-classification #autonlp #unk #dataset-Anamika/autonlp-data-Feedback1 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 479512837\n- CO2 Emissions (in grams): 123.88023112815048", "## Validation Metrics\n\n- Loss: 0.6220805048942566\n- Accuracy: 0.7961119332705503\n- Macro F1: 0.7616345204219084\n- Micro F1: 0.7961119332705503\n- Weighted F1: 0.795387503907883\n- Macro Precision: 0.782839455262034\n- Micro Precision: 0.7961119332705503\n- Weighted Precision: 0.7992606754484262\n- Macro Recall: 0.7451485972167191\n- Micro Recall: 0.7961119332705503\n- Weighted Recall: 0.7961119332705503", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 473312409 - CO2 Emissions (in grams): 25.128735714898614 ## Validation Metrics - Loss: 0.6010786890983582 - Accuracy: 0.7990650945370823 - Macro F1: 0.7429662929144928 - Micro F1: 0.7990650945370823 - Weighted F1: 0.7977660363770382 - Macro Precision: 0.7744390888231261 - Micro Precision: 0.7990650945370823 - Weighted Precision: 0.800444194278352 - Macro Recall: 0.7198278524814119 - Micro Recall: 0.7990650945370823 - Weighted Recall: 0.7990650945370823 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-fa-473312409 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["Anamika/autonlp-data-fa"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 25.128735714898614}
Anamika/autonlp-fa-473312409
null
[ "transformers", "pytorch", "roberta", "text-classification", "autonlp", "en", "dataset:Anamika/autonlp-data-fa", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #roberta #text-classification #autonlp #en #dataset-Anamika/autonlp-data-fa #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 473312409 - CO2 Emissions (in grams): 25.128735714898614 ## Validation Metrics - Loss: 0.6010786890983582 - Accuracy: 0.7990650945370823 - Macro F1: 0.7429662929144928 - Micro F1: 0.7990650945370823 - Weighted F1: 0.7977660363770382 - Macro Precision: 0.7744390888231261 - Micro Precision: 0.7990650945370823 - Weighted Precision: 0.800444194278352 - Macro Recall: 0.7198278524814119 - Micro Recall: 0.7990650945370823 - Weighted Recall: 0.7990650945370823 ## Usage You can use cURL to access this model: Or Python API:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 473312409\n- CO2 Emissions (in grams): 25.128735714898614", "## Validation Metrics\n\n- Loss: 0.6010786890983582\n- Accuracy: 0.7990650945370823\n- Macro F1: 0.7429662929144928\n- Micro F1: 0.7990650945370823\n- Weighted F1: 0.7977660363770382\n- Macro Precision: 0.7744390888231261\n- Micro Precision: 0.7990650945370823\n- Weighted Precision: 0.800444194278352\n- Macro Recall: 0.7198278524814119\n- Micro Recall: 0.7990650945370823\n- Weighted Recall: 0.7990650945370823", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-Anamika/autonlp-data-fa #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 473312409\n- CO2 Emissions (in grams): 25.128735714898614", "## Validation Metrics\n\n- Loss: 0.6010786890983582\n- Accuracy: 0.7990650945370823\n- Macro F1: 0.7429662929144928\n- Micro F1: 0.7990650945370823\n- Weighted F1: 0.7977660363770382\n- Macro Precision: 0.7744390888231261\n- Micro Precision: 0.7990650945370823\n- Weighted Precision: 0.800444194278352\n- Macro Recall: 0.7198278524814119\n- Micro Recall: 0.7990650945370823\n- Weighted Recall: 0.7990650945370823", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra_large_discriminator_squad2_512 This model is a fine-tuned version of [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
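As a hedged addendum (not part of the generated card above), the fine-tuned checkpoint can be queried through the standard `question-answering` pipeline. The repo id `Andranik/TestQA2` is taken from this record's id field; the question and context strings are placeholders.

```python
from transformers import pipeline

# Minimal sketch, assuming the checkpoint is published as "Andranik/TestQA2" (see the id field of this record).
qa = pipeline("question-answering", model="Andranik/TestQA2")

# Placeholder question/context; SQuAD2-style models may also return an empty answer
# when the question is unanswerable from the context.
result = qa(
    question="What base model was fine-tuned?",
    context="This model is a fine-tuned version of electra_large_discriminator_squad2_512.",
)
print(result["answer"], result["score"])
```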
{"tags": ["generated_from_trainer"], "model-index": [{"name": "electra_large_discriminator_squad2_512", "results": []}]}
Andranik/TestQA2
null
[ "transformers", "pytorch", "electra", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #electra #question-answering #generated_from_trainer #endpoints_compatible #region-us
# electra_large_discriminator_squad2_512 This model is a fine-tuned version of ahotrod/electra_large_discriminator_squad2_512 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
[ "# electra_large_discriminator_squad2_512\n\nThis model is a fine-tuned version of ahotrod/electra_large_discriminator_squad2_512 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #electra #question-answering #generated_from_trainer #endpoints_compatible #region-us \n", "# electra_large_discriminator_squad2_512\n\nThis model is a fine-tuned version of ahotrod/electra_large_discriminator_squad2_512 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
text2text-generation
transformers
This is a pretrained model loaded from t5-base. It was adapted by adjusting the max_length and summary_length settings.
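A hedged usage sketch (not from the original card) for generating a summary with this checkpoint: the `summarize:` prefix follows the usual T5 convention, and the generation lengths are illustrative assumptions rather than values confirmed by the author.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch, assuming the repo id "AndreLiu1225/t5-news" from this record.
tokenizer = AutoTokenizer.from_pretrained("AndreLiu1225/t5-news")
model = AutoModelForSeq2SeqLM.from_pretrained("AndreLiu1225/t5-news")

article = "Your news article text goes here."  # placeholder input
inputs = tokenizer("summarize: " + article, return_tensors="pt", max_length=512, truncation=True)

# max_length / num_beams are illustrative; the card only states that max_length and summary_length were adjusted.
summary_ids = model.generate(**inputs, max_length=150, num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```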
{}
AndreLiu1225/t5-news
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This is a pretrained model that was loaded from t5-base. It has been adapted and changed by changing the max_length and summary_length.
[]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# model-QA-5-epoch-RU

This model is a fine-tuned version of [AndrewChar/diplom-prod-epoch-4-datast-sber-QA](https://huggingface.co/AndrewChar/diplom-prod-epoch-4-datast-sber-QA) on the sberquad dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1991
- Validation Loss: 0.0
- Epoch: 5

## Model description

A question-answering model that returns the answer to a question from a given context; built as a diploma project.

## Intended uses & limitations

The context must not exceed 512 tokens.

## Training and evaluation data

DataSet SberSQuAD
{'exact_match': 54.586, 'f1': 73.644}

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_re': 2e-06 'decay_steps': 2986, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.1991     |                 | 5     |

### Framework versions

- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
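A hedged inference sketch (not part of the generated card): the checkpoint ships TensorFlow weights, so the `question-answering` pipeline should pick them up when TensorFlow is installed. The Russian question/context pair below is a made-up example, and the card's 512-token limit on the context still applies.

```python
from transformers import pipeline

# Minimal sketch, assuming the repo id "AndrewChar/model-QA-5-epoch-RU" from this record.
# The pipeline loads the TF weights when TensorFlow is available.
qa = pipeline("question-answering", model="AndrewChar/model-QA-5-epoch-RU")

result = qa(
    question="Когда была основана компания?",                    # "When was the company founded?"
    context="Компания была основана в 1998 году в Москве.",      # placeholder context, <= 512 tokens
)
print(result["answer"])
```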
{"language": "ru", "tags": ["generated_from_keras_callback"], "datasets": ["sberquad"], "model-index": [{"name": "model-QA-5-epoch-RU", "results": []}]}
AndrewChar/model-QA-5-epoch-RU
null
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "ru", "dataset:sberquad", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ru" ]
TAGS #transformers #tf #distilbert #question-answering #generated_from_keras_callback #ru #dataset-sberquad #endpoints_compatible #region-us
model-QA-5-epoch-RU =================== This model is a fine-tuned version of AndrewChar/diplom-prod-epoch-4-datast-sber-QA on sberquad dataset. It achieves the following results on the evaluation set: * Train Loss: 1.1991 * Validation Loss: 0.0 * Epoch: 5 Model description ----------------- Модель отвечающая на вопрос по контектсу это дипломная работа Intended uses & limitations --------------------------- Контекст должен содержать не более 512 токенов Training and evaluation data ---------------------------- DataSet SberSQuAD {'exact\_match': 54.586, 'f1': 73.644} Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'learning\_rate': {'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_re': 2e-06 'decay\_steps': 2986, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.15.0 * TensorFlow 2.7.0 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_re': 2e-06 'decay\\_steps': 2986, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* TensorFlow 2.7.0\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #tf #distilbert #question-answering #generated_from_keras_callback #ru #dataset-sberquad #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_re': 2e-06 'decay\\_steps': 2986, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* TensorFlow 2.7.0\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - DE dataset. It achieves the following results on the evaluation set: - Loss: 0.1355 - Wer: 0.1532 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 2.5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.0826 | 0.07 | 1000 | 0.4637 | 0.4654 | | 1.118 | 0.15 | 2000 | 0.2595 | 0.2687 | | 1.1268 | 0.22 | 3000 | 0.2635 | 0.2661 | | 1.0919 | 0.29 | 4000 | 0.2417 | 0.2566 | | 1.1013 | 0.37 | 5000 | 0.2414 | 0.2567 | | 1.0898 | 0.44 | 6000 | 0.2546 | 0.2731 | | 1.0808 | 0.51 | 7000 | 0.2399 | 0.2535 | | 1.0719 | 0.59 | 8000 | 0.2353 | 0.2528 | | 1.0446 | 0.66 | 9000 | 0.2427 | 0.2545 | | 1.0347 | 0.73 | 10000 | 0.2266 | 0.2402 | | 1.0457 | 0.81 | 11000 | 0.2290 | 0.2448 | | 1.0124 | 0.88 | 12000 | 0.2295 | 0.2448 | | 1.025 | 0.95 | 13000 | 0.2138 | 0.2345 | | 1.0107 | 1.03 | 14000 | 0.2108 | 0.2294 | | 0.9758 | 1.1 | 15000 | 0.2019 | 0.2204 | | 0.9547 | 1.17 | 16000 | 0.2000 | 0.2178 | | 0.986 | 1.25 | 17000 | 0.2018 | 0.2200 | | 0.9588 | 1.32 | 18000 | 0.1992 | 0.2138 | | 0.9413 | 1.39 | 19000 | 0.1898 | 0.2049 | | 0.9339 | 1.47 | 20000 | 0.1874 | 0.2056 | | 0.9268 | 1.54 | 21000 | 0.1797 | 0.1976 | | 0.9194 | 1.61 | 22000 | 0.1743 | 0.1905 | | 0.8987 | 1.69 | 23000 | 0.1738 | 0.1932 | | 0.8884 | 1.76 | 24000 | 0.1703 | 0.1873 | | 0.8939 | 1.83 | 25000 | 0.1633 | 0.1831 | | 0.8629 | 1.91 | 26000 | 0.1549 | 0.1750 | | 0.8607 | 1.98 | 27000 | 0.1550 | 0.1738 | | 0.8316 | 2.05 | 28000 | 0.1512 | 0.1709 | | 0.8321 | 2.13 | 29000 | 0.1481 | 0.1657 | | 0.825 | 2.2 | 30000 | 0.1446 | 0.1627 | | 0.8115 | 2.27 | 31000 | 0.1396 | 0.1583 | | 0.7959 | 2.35 | 32000 | 0.1389 | 0.1569 | | 0.7835 | 2.42 | 33000 | 0.1362 | 0.1545 | | 0.7959 | 2.49 | 34000 | 0.1355 | 0.1531 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1B-german --dataset mozilla-foundation/common_voice_8_0 --config de --split test --log_outputs ``` 2. To evaluate on test dev data ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1B-german --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
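Besides the evaluation commands above, a hedged transcription sketch (not from the original card) is shown below; the audio path is a placeholder and the input is assumed to be 16 kHz German speech.

```python
from transformers import pipeline

# Minimal sketch, assuming the repo id "AndrewMcDowell/wav2vec2-xls-r-1B-german" from this record.
asr = pipeline("automatic-speech-recognition", model="AndrewMcDowell/wav2vec2-xls-r-1B-german")

# "sample_de.wav" is a placeholder path; decoding local audio files requires ffmpeg.
print(asr("sample_de.wav")["text"])
```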
{"language": ["de"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "de", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - German", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "de"}, "metrics": [{"type": "wer", "value": 15.25, "name": "Test WER"}, {"type": "cer", "value": 3.78, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 35.29, "name": "Test WER"}, {"type": "cer", "value": 13.83, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "de"}, "metrics": [{"type": "wer", "value": 36.2, "name": "Test WER"}]}]}]}
AndrewMcDowell/wav2vec2-xls-r-1B-german
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "de", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #de #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - DE dataset. It achieves the following results on the evaluation set: * Loss: 0.1355 * Wer: 0.1532 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 2.5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test' 2. To evaluate on test dev data
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 2.5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0", "#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on test dev data" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #de #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 2.5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0", "#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on test dev data" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AR dataset. It achieves the following results on the evaluation set: - Loss: 1.1373 - Wer: 0.8607 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 2.2416 | 0.84 | 500 | 1.2867 | 0.8875 | | 2.3089 | 1.67 | 1000 | 1.8336 | 0.9548 | | 2.3614 | 2.51 | 1500 | 1.5937 | 0.9469 | | 2.5234 | 3.35 | 2000 | 1.9765 | 0.9867 | | 2.5373 | 4.19 | 2500 | 1.9062 | 0.9916 | | 2.5703 | 5.03 | 3000 | 1.9772 | 0.9915 | | 2.4656 | 5.86 | 3500 | 1.8083 | 0.9829 | | 2.4339 | 6.7 | 4000 | 1.7548 | 0.9752 | | 2.344 | 7.54 | 4500 | 1.6146 | 0.9638 | | 2.2677 | 8.38 | 5000 | 1.5105 | 0.9499 | | 2.2074 | 9.21 | 5500 | 1.4191 | 0.9357 | | 2.3768 | 10.05 | 6000 | 1.6663 | 0.9665 | | 2.3804 | 10.89 | 6500 | 1.6571 | 0.9720 | | 2.3237 | 11.72 | 7000 | 1.6049 | 0.9637 | | 2.317 | 12.56 | 7500 | 1.5875 | 0.9655 | | 2.2988 | 13.4 | 8000 | 1.5357 | 0.9603 | | 2.2906 | 14.24 | 8500 | 1.5637 | 0.9592 | | 2.2848 | 15.08 | 9000 | 1.5326 | 0.9537 | | 2.2381 | 15.91 | 9500 | 1.5631 | 0.9508 | | 2.2072 | 16.75 | 10000 | 1.4565 | 0.9395 | | 2.197 | 17.59 | 10500 | 1.4304 | 0.9406 | | 2.198 | 18.43 | 11000 | 1.4230 | 0.9382 | | 2.1668 | 19.26 | 11500 | 1.3998 | 0.9315 | | 2.1498 | 20.1 | 12000 | 1.3920 | 0.9258 | | 2.1244 | 20.94 | 12500 | 1.3584 | 0.9153 | | 2.0953 | 21.78 | 13000 | 1.3274 | 0.9054 | | 2.0762 | 22.61 | 13500 | 1.2933 | 0.9073 | | 2.0587 | 23.45 | 14000 | 1.2516 | 0.8944 | | 2.0363 | 24.29 | 14500 | 1.2214 | 0.8902 | | 2.0302 | 25.13 | 15000 | 1.2087 | 0.8871 | | 2.0071 | 25.96 | 15500 | 1.1953 | 0.8786 | | 1.9882 | 26.8 | 16000 | 1.1738 | 0.8712 | | 1.9772 | 27.64 | 16500 | 1.1647 | 0.8672 | | 1.9585 | 28.48 | 17000 | 1.1459 | 0.8635 | | 1.944 | 29.31 | 17500 | 1.1414 | 0.8616 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
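The card has no usage section; a hedged sketch following the same torchaudio-based pattern used in the neighbouring cards is given below. The repo id comes from this record's id field and the audio path is a placeholder.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Minimal sketch, assuming the repo id "AndrewMcDowell/wav2vec2-xls-r-1b-arabic" from this record.
processor = Wav2Vec2Processor.from_pretrained("AndrewMcDowell/wav2vec2-xls-r-1b-arabic")
model = Wav2Vec2ForCTC.from_pretrained("AndrewMcDowell/wav2vec2-xls-r-1b-arabic")

# "sample_ar.wav" is a placeholder; resample whatever rate the file has down to 16 kHz.
speech_array, sampling_rate = torchaudio.load("sample_ar.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```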
{"language": ["ar"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
AndrewMcDowell/wav2vec2-xls-r-1b-arabic
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ar", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ar" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ar #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - AR dataset. It achieves the following results on the evaluation set: * Loss: 1.1373 * Wer: 0.8607 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 6.5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 30.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 30.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ar #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 30.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset. It achieves the following results on the evaluation set: - Loss: 0.5500 - Wer: 1.0132 - Cer: 0.1609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 1.7019 | 12.65 | 1000 | 1.0510 | 0.9832 | 0.2589 | | 1.6385 | 25.31 | 2000 | 0.6670 | 0.9915 | 0.1851 | | 1.4344 | 37.97 | 3000 | 0.6183 | 1.0213 | 0.1797 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs ``` 2. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
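A hedged transcription sketch (not from the original card): the model emits hiragana/katakana without spaces, so CER rather than WER is the meaningful metric. The repo id is taken from this record and the audio path is a placeholder.

```python
from transformers import pipeline

# Minimal sketch, assuming the repo id "AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana".
asr = pipeline(
    "automatic-speech-recognition",
    model="AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana",
)

# "sample_ja.wav" is a placeholder path; the output is kana-only text, since kanji were
# converted to hiragana with pykakasi during training.
print(asr("sample_ja.wav")["text"])
```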
{"language": ["ja"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "ja", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ja"}, "metrics": [{"type": "wer", "value": 95.33, "name": "Test WER"}, {"type": "cer", "value": 22.27, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 100.0, "name": "Test WER"}, {"type": "cer", "value": 30.33, "name": "Test CER"}, {"type": "cer", "value": 29.63, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ja"}, "metrics": [{"type": "cer", "value": 32.69, "name": "Test CER"}]}]}]}
AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "ja", "hf-asr-leaderboard", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #ja #hf-asr-leaderboard #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - JA dataset. It achieves the following results on the evaluation set: * Loss: 0.5500 * Wer: 1.0132 * Cer: 0.1609 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1500 * num\_epochs: 50.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test' 2. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0", "#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #ja #hf-asr-leaderboard #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0", "#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AR dataset. It achieves the following results on the evaluation set: - Loss: 0.4502 - Wer: 0.4783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 4.7972 | 0.43 | 500 | 5.1401 | 1.0 | | 3.3241 | 0.86 | 1000 | 3.3220 | 1.0 | | 3.1432 | 1.29 | 1500 | 3.0806 | 0.9999 | | 2.9297 | 1.72 | 2000 | 2.5678 | 1.0057 | | 2.2593 | 2.14 | 2500 | 1.1068 | 0.8218 | | 2.0504 | 2.57 | 3000 | 0.7878 | 0.7114 | | 1.937 | 3.0 | 3500 | 0.6955 | 0.6450 | | 1.8491 | 3.43 | 4000 | 0.6452 | 0.6304 | | 1.803 | 3.86 | 4500 | 0.5961 | 0.6042 | | 1.7545 | 4.29 | 5000 | 0.5550 | 0.5748 | | 1.7045 | 4.72 | 5500 | 0.5374 | 0.5743 | | 1.6733 | 5.15 | 6000 | 0.5337 | 0.5404 | | 1.6761 | 5.57 | 6500 | 0.5054 | 0.5266 | | 1.655 | 6.0 | 7000 | 0.4926 | 0.5243 | | 1.6252 | 6.43 | 7500 | 0.4946 | 0.5183 | | 1.6209 | 6.86 | 8000 | 0.4915 | 0.5194 | | 1.5772 | 7.29 | 8500 | 0.4725 | 0.5104 | | 1.5602 | 7.72 | 9000 | 0.4726 | 0.5097 | | 1.5783 | 8.15 | 9500 | 0.4667 | 0.4956 | | 1.5442 | 8.58 | 10000 | 0.4685 | 0.4937 | | 1.5597 | 9.01 | 10500 | 0.4708 | 0.4957 | | 1.5406 | 9.43 | 11000 | 0.4539 | 0.4810 | | 1.5274 | 9.86 | 11500 | 0.4502 | 0.4783 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
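This card omits the evaluation-command section present in its sibling cards; as a hedged alternative (an assumption, not the author's script), long recordings can be transcribed with the pipeline's chunking parameters.

```python
from transformers import pipeline

# Minimal sketch, assuming the repo id "AndrewMcDowell/wav2vec2-xls-r-300m-arabic" from this record.
# chunk_length_s / stride_length_s values are illustrative, chosen for long-form audio.
asr = pipeline(
    "automatic-speech-recognition",
    model="AndrewMcDowell/wav2vec2-xls-r-300m-arabic",
    chunk_length_s=10,
    stride_length_s=2,
)

print(asr("long_recording_ar.wav")["text"])  # placeholder path
```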
{"language": ["ar"], "license": "apache-2.0", "tags": ["ar", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Arabic", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ar"}, "metrics": [{"type": "wer", "value": 47.54, "name": "Test WER"}, {"type": "cer", "value": 17.64, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ar"}, "metrics": [{"type": "wer", "value": 93.72, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ar"}, "metrics": [{"type": "wer", "value": 92.49, "name": "Test WER"}]}]}]}
AndrewMcDowell/wav2vec2-xls-r-300m-arabic
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ar", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ar" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #ar #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - AR dataset. It achieves the following results on the evaluation set: * Loss: 0.4502 * Wer: 0.4783 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 5.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #ar #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. eval results: WER: 0.20161578657865786 CER: 0.05062357805269733 --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset. It achieves the following results on the evaluation set: - Loss: 0.1768 - Wer: 0.2016 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 3.4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.7531 | 0.04 | 500 | 5.4564 | 1.0 | | 2.9882 | 0.08 | 1000 | 3.0041 | 1.0 | | 2.1953 | 0.13 | 1500 | 1.1723 | 0.7121 | | 1.2406 | 0.17 | 2000 | 0.3656 | 0.3623 | | 1.1294 | 0.21 | 2500 | 0.2843 | 0.2926 | | 1.0731 | 0.25 | 3000 | 0.2554 | 0.2664 | | 1.051 | 0.3 | 3500 | 0.2387 | 0.2535 | | 1.0479 | 0.34 | 4000 | 0.2345 | 0.2512 | | 1.0026 | 0.38 | 4500 | 0.2270 | 0.2452 | | 0.9921 | 0.42 | 5000 | 0.2212 | 0.2353 | | 0.9839 | 0.47 | 5500 | 0.2141 | 0.2330 | | 0.9907 | 0.51 | 6000 | 0.2122 | 0.2334 | | 0.9788 | 0.55 | 6500 | 0.2114 | 0.2270 | | 0.9687 | 0.59 | 7000 | 0.2066 | 0.2323 | | 0.9777 | 0.64 | 7500 | 0.2033 | 0.2237 | | 0.9476 | 0.68 | 8000 | 0.2020 | 0.2194 | | 0.9625 | 0.72 | 8500 | 0.1977 | 0.2191 | | 0.9497 | 0.76 | 9000 | 0.1976 | 0.2175 | | 0.9781 | 0.81 | 9500 | 0.1956 | 0.2159 | | 0.9552 | 0.85 | 10000 | 0.1958 | 0.2191 | | 0.9345 | 0.89 | 10500 | 0.1964 | 0.2158 | | 0.9528 | 0.93 | 11000 | 0.1926 | 0.2154 | | 0.9502 | 0.98 | 11500 | 0.1953 | 0.2149 | | 0.9358 | 1.02 | 12000 | 0.1927 | 0.2167 | | 0.941 | 1.06 | 12500 | 0.1901 | 0.2115 | | 0.9287 | 1.1 | 13000 | 0.1936 | 0.2090 | | 0.9491 | 1.15 | 13500 | 0.1900 | 0.2104 | | 0.9478 | 1.19 | 14000 | 0.1931 | 0.2120 | | 0.946 | 1.23 | 14500 | 0.1914 | 0.2134 | | 0.9499 | 1.27 | 15000 | 0.1931 | 0.2173 | | 0.9346 | 1.32 | 15500 | 0.1913 | 0.2105 | | 0.9509 | 1.36 | 16000 | 0.1902 | 0.2137 | | 0.9294 | 1.4 | 16500 | 0.1895 | 0.2086 | | 0.9418 | 1.44 | 17000 | 0.1913 | 0.2183 | | 0.9302 | 1.49 | 17500 | 0.1884 | 0.2114 | | 0.9418 | 1.53 | 18000 | 0.1894 | 0.2108 | | 0.9363 | 1.57 | 18500 | 0.1886 | 0.2132 | | 0.9338 | 1.61 | 19000 | 0.1856 | 0.2078 | | 0.9185 | 1.66 | 19500 | 0.1852 | 0.2056 | | 0.9216 | 1.7 | 20000 | 0.1874 | 0.2095 | | 0.9176 | 1.74 | 20500 | 0.1873 | 0.2078 | | 0.9288 | 1.78 | 21000 | 0.1865 | 0.2097 | | 0.9278 | 1.83 | 21500 | 0.1869 | 0.2100 | | 0.9295 | 1.87 | 22000 | 0.1878 | 0.2095 | | 0.9221 | 1.91 | 22500 | 0.1852 | 0.2121 | | 0.924 | 1.95 | 23000 | 0.1855 | 0.2042 | | 0.9104 | 2.0 | 23500 | 0.1858 | 0.2105 | | 0.9284 | 2.04 | 24000 | 0.1850 | 0.2080 | | 0.9162 | 2.08 | 24500 | 0.1839 | 0.2045 | | 0.9111 | 2.12 | 25000 | 0.1838 | 0.2080 | | 0.91 | 2.17 | 25500 | 0.1889 | 0.2106 | | 0.9152 | 2.21 | 26000 | 0.1856 | 0.2026 | | 0.9209 | 2.25 | 26500 | 0.1891 | 0.2133 | | 0.9094 | 2.29 | 
27000 | 0.1857 | 0.2089 | | 0.9065 | 2.34 | 27500 | 0.1840 | 0.2052 | | 0.9156 | 2.38 | 28000 | 0.1833 | 0.2062 | | 0.8986 | 2.42 | 28500 | 0.1789 | 0.2001 | | 0.9045 | 2.46 | 29000 | 0.1769 | 0.2022 | | 0.9039 | 2.51 | 29500 | 0.1819 | 0.2073 | | 0.9145 | 2.55 | 30000 | 0.1828 | 0.2063 | | 0.9081 | 2.59 | 30500 | 0.1811 | 0.2049 | | 0.9252 | 2.63 | 31000 | 0.1833 | 0.2086 | | 0.8957 | 2.68 | 31500 | 0.1795 | 0.2083 | | 0.891 | 2.72 | 32000 | 0.1809 | 0.2058 | | 0.9023 | 2.76 | 32500 | 0.1812 | 0.2061 | | 0.8918 | 2.8 | 33000 | 0.1775 | 0.1997 | | 0.8852 | 2.85 | 33500 | 0.1790 | 0.1997 | | 0.8928 | 2.89 | 34000 | 0.1767 | 0.2013 | | 0.9079 | 2.93 | 34500 | 0.1735 | 0.1986 | | 0.9032 | 2.97 | 35000 | 0.1793 | 0.2024 | | 0.9018 | 3.02 | 35500 | 0.1778 | 0.2027 | | 0.8846 | 3.06 | 36000 | 0.1776 | 0.2046 | | 0.8848 | 3.1 | 36500 | 0.1812 | 0.2064 | | 0.9062 | 3.14 | 37000 | 0.1800 | 0.2018 | | 0.9011 | 3.19 | 37500 | 0.1783 | 0.2049 | | 0.8996 | 3.23 | 38000 | 0.1810 | 0.2036 | | 0.893 | 3.27 | 38500 | 0.1805 | 0.2056 | | 0.897 | 3.31 | 39000 | 0.1773 | 0.2035 | | 0.8992 | 3.36 | 39500 | 0.1804 | 0.2054 | | 0.8987 | 3.4 | 40000 | 0.1768 | 0.2016 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test` ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-german-de --dataset mozilla-foundation/common_voice_7_0 --config de --split test --log_outputs ``` 2. To evaluate on test dev data ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-german-de --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
{"language": ["de"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "de", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - German", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "de"}, "metrics": [{"type": "wer", "value": 20.16, "name": "Test WER"}, {"type": "cer", "value": 5.06, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 39.79, "name": "Test WER"}, {"type": "cer", "value": 15.02, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "de"}, "metrics": [{"type": "wer", "value": 47.95, "name": "Test WER"}]}]}]}
AndrewMcDowell/wav2vec2-xls-r-300m-german-de
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #de #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - DE dataset. It achieves the following results on the evaluation set: * Loss: 0.1768 * Wer: 0.2016 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 3.4 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common\_voice\_7\_0' with split 'test' 2. To evaluate on test dev data
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 3.4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0", "#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'\n2. To evaluate on test dev data" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #de #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 3.4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0", "#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'\n2. To evaluate on test dev data" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset. Kanji are converted into Hiragana using the [pykakasi](https://pykakasi.readthedocs.io/en/latest/index.html) library during training and evaluation. The model can output both Hiragana and Katakana characters. Since there is no spacing, WER is not a suitable metric for evaluating performance and CER is more suitable.

On mozilla-foundation/common_voice_8_0 it achieved:
- cer: 23.64%

On speech-recognition-community-v2/dev_data it achieved:
- cer: 30.99%

It achieves the following results on the evaluation set:
- Loss: 0.5212
- Wer: 1.3068

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 48
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.0974        | 4.72  | 1000  | 4.0178          | 1.9535 |
| 2.1276        | 9.43  | 2000  | 0.9301          | 1.2128 |
| 1.7622        | 14.15 | 3000  | 0.7103          | 1.5527 |
| 1.6397        | 18.87 | 4000  | 0.6729          | 1.4269 |
| 1.5468        | 23.58 | 5000  | 0.6087          | 1.2497 |
| 1.4885        | 28.3  | 6000  | 0.5786          | 1.3222 |
| 1.451         | 33.02 | 7000  | 0.5726          | 1.3768 |
| 1.3912        | 37.74 | 8000  | 0.5518          | 1.2497 |
| 1.3617        | 42.45 | 9000  | 0.5352          | 1.2694 |
| 1.3113        | 47.17 | 10000 | 0.5228          | 1.2781 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0

#### Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`

```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-japanese --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs
```

2. To evaluate on `speech-recognition-community-v2/dev_data` with split `validation`

```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-japanese --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
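The kanji-to-hiragana preprocessing described above can be approximated with pykakasi; a minimal sketch (the exact conversion settings used during training are an assumption):

```python
import pykakasi

# pykakasi >= 2.x: convert() returns one dict per segment, each with a "hira" reading.
kks = pykakasi.kakasi()

def to_hiragana(text: str) -> str:
    # Concatenate the hiragana reading of every segment.
    return "".join(item["hira"] for item in kks.convert(text))

print(to_hiragana("今日は良い天気です"))  # e.g. きょうはよいてんきです
```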
{"language": ["ja"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "ja", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300-m", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ja"}, "metrics": [{"type": "wer", "value": 95.82, "name": "Test WER"}, {"type": "cer", "value": 23.64, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 100.0, "name": "Test WER"}, {"type": "cer", "value": 30.99, "name": "Test CER"}, {"type": "cer", "value": 30.37, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ja"}, "metrics": [{"type": "cer", "value": 34.42, "name": "Test CER"}]}]}]}
AndrewMcDowell/wav2vec2-xls-r-300m-japanese
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "ja", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #ja #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - JA dataset. Kanji are converted into Hiragana using the pykakasi library during training and evaluation. The model can output both Hiragana and Katakana characters. Since there is no spacing, WER is not a suitable metric for evaluating performance and CER is more suitable. On mozilla-foundation/common\_voice\_8\_0 it achieved: * cer: 23.64% On speech-recognition-community-v2/dev\_data it achieved: * cer: 30.99% It achieves the following results on the evaluation set: * Loss: 0.5212 * Wer: 1.3068 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 48 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 50.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test' 2. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0", "#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #ja #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0", "#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbert-finetuned-ner This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.1264 - Precision: 0.9305 - Recall: 0.9375 - F1: 0.9340 - Accuracy: 0.9700 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.301 | 1.0 | 625 | 0.1756 | 0.8843 | 0.9067 | 0.8953 | 0.9500 | | 0.1259 | 2.0 | 1250 | 0.1248 | 0.9285 | 0.9335 | 0.9310 | 0.9688 | | 0.0895 | 3.0 | 1875 | 0.1264 | 0.9305 | 0.9375 | 0.9340 | 0.9700 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
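The card ships no usage snippet; a minimal inference sketch with the token-classification pipeline (the Latvian example sentence is arbitrary):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model="Andrey1989/mbert-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Valdis Dombrovskis dzīvo Rīgā."))
```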
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "mbert-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "lv"}, "metrics": [{"type": "precision", "value": 0.9304986338797814, "name": "Precision"}, {"type": "recall", "value": 0.9375430144528561, "name": "Recall"}, {"type": "f1", "value": 0.9340075419952005, "name": "F1"}, {"type": "accuracy", "value": 0.9699674740348558, "name": "Accuracy"}]}]}]}
Andrey1989/mbert-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:wikiann", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
mbert-finetuned-ner =================== This model is a fine-tuned version of bert-base-multilingual-cased on the wikiann dataset. It achieves the following results on the evaluation set: * Loss: 0.1264 * Precision: 0.9305 * Recall: 0.9375 * F1: 0.9340 * Accuracy: 0.9700 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.19.4 * Pytorch 1.11.0+cu113 * Datasets 2.2.2 * Tokenizers 0.12.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.19.4\n* Pytorch 1.11.0+cu113\n* Datasets 2.2.2\n* Tokenizers 0.12.1" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.19.4\n* Pytorch 1.11.0+cu113\n* Datasets 2.2.2\n* Tokenizers 0.12.1" ]
token-classification
transformers
This model is a finetuning of bert-base-greek-uncased as a Token Classifier which predicts at each token which punctuation mark it is followed by. The model preprocesses everything to lowercase and removes all Greek diacritics. For information on pretraining of the Greek Bert model, please refer to [Greek Bert](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) # Finetuning Parameters Epochs: 5 Maximum Sequence Length: 512 Learning Rate: 4e−5 Batch Size: 16 Finetuning Data: Greek Europarl data available at: https://opus.nlpl.eu/Europarl.php Tokens: 44.1M Sentences: 1.6M Punctuation Points Recognised: '.' (0) : Full stop ',' (1) : Comma ';' (2) : Greek question mark '-' (3) : Dash ':' (4) : Semicolon '0' (5) : No punctuation point is following # Load Finetuned Model ~~~ from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("Andrianos/bert-base-greek-punctuation-prediction-finetuned") model = AutoModelForTokenClassification.from_pretrained("Andrianos/bert-base-greek-punctuation-prediction-finetuned") ~~~ # Using the Model If you are interested in trying out examples and finding the limitations of the model, the starter Python code to use the model is available at [Github Repo](https://github.com/Andrian0s/Greek-Transformer-Model-Punctuation-Prediction) # Examples of the Model Using the demo script, we tried out a few brief examples and show the results below Input | Input with Predictions ------------- | ------------- "προσεκτικά στον δρομο θα σε περιμενω" | "προσεκτικα στον δρομο, θα σε περιμενω" "τι θα φας για βραδινο" | "τι θα φας για βραδινο;" "κυριε μαυροκέφαλε εσπασε η κεραια του διαδικτυου θα παρω τηλεφωνο την cyta" | "κυριε μαυροκεφαλε, εσπασε η κεραια του διαδικτυου. θα παρω τηλεφωνο την cyta." "κυριε μαυροκεφαλε εσπασεν η αντεννα του ιντερνετ εννα πιαω τηλεφωνον την cyta" | "κυριε μαυροκεφαλε, εσπασεν η αντεννα του ιντερνετ. εννα πιαω τηλεφωνον την cyta." The last two examples have identical meanings, the first is written in plain Modern Greek and the latter in the Cypriot Dialect. It is interesting to see the model performs similarly, even if some words and suffixes are out of vocabulary. # Further Performance Improvements We would be happy to hear people have finetuned this model with more and diverse datasets, as we expect this to increase robustness. Within our research, improvements to consistency in punctuation prediction have shown to be possible with techniques such as sliding windows (during inference) for larger documents, weighted loss and ensembling of different models. Make sure to cite our work when you further our models with the aforementioned techniques. # Author This model is further work based on the winning submission at Shared Task 2 Sentence End and Punctuation Prediction in NLG Text at SwissText2021. The winning submission is entitled "UZH OnPoint at Swisstext-2021: Sentence End and Punctuation Prediction in NLG Text Through Ensembling of Different Transformers" in the Proceedings of the 6th SwissText Held Online. 
It is publicly available at http://ceur-ws.org/Vol-2957/sepp_paper2.pdf If you use the model, please cite the following: @inproceedings{ST2021-OnPoint, title={UZH OnPoint at Swisstext-2021: Sentence End and Punctuation Prediction in NLG Text Through Ensembling of Different Transformers}, author={Michail, Andrianos and Wehrli, Silvan and Bucková, Terézia}, booktitle={Proceedings of the 1st Shared Task on Sentence End and Punctuation Prediction in NLG Text (SEPPNLG 2021) at SwissText 2021}, year={2021} } Model Finetuned and released by Andrianos Michail with resources provided by [Department of Computational Linguistics, University of Zurich](https://www.cl.uzh.ch/en.html) | Github: [@Andrian0s](https://github.com/Andrian0s) | LinkedIn: [amichail2](https://www.linkedin.com/in/amichail2/)
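To complement the loading snippet above, here is a minimal inference sketch; the id-to-mark mapping is assumed from the label table and should be checked against `model.config.id2label`, and the starter script in the Github Repo remains the reference implementation:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "Andrianos/bert-base-greek-punctuation-prediction-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Label-id to punctuation-mark mapping assumed from the table above.
ID2MARK = {0: ".", 1: ",", 2: ";", 3: "-", 4: ":", 5: ""}

text = "τι θα φας για βραδινο"
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    labels = model(**enc).logits.argmax(-1)[0].tolist()

# Print, for every word piece, which punctuation mark the model predicts after it.
for token, label in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist()), labels):
    if token not in tokenizer.all_special_tokens:
        print(f"{token!r} -> followed by {ID2MARK[label]!r}")
```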
{}
Andrianos/bert-base-greek-punctuation-prediction-finetuned
null
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
This model is a finetuning of bert-base-greek-uncased as a Token Classifier which predicts at each token which punctuation mark it is followed by. The model preprocesses everything to lowercase and removes all Greek diacritics. For information on pretraining of the Greek Bert model, please refer to Greek Bert Finetuning Parameters ===================== Epochs: 5 Maximum Sequence Length: 512 Learning Rate: 4e−5 Batch Size: 16 Finetuning Data: Greek Europarl data available at: URL Tokens: 44.1M Sentences: 1.6M Punctuation Points Recognised: '.' (0) : Full stop ',' (1) : Comma ';' (2) : Greek question mark '-' (3) : Dash ':' (4) : Semicolon '0' (5) : No punctuation point is following Load Finetuned Model ==================== ``` from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("Andrianos/bert-base-greek-punctuation-prediction-finetuned") model = AutoModelForTokenClassification.from_pretrained("Andrianos/bert-base-greek-punctuation-prediction-finetuned") ``` Using the Model =============== If you are interested in trying out examples and finding the limitations of the model, the starter Python code to use the model is available at Github Repo Examples of the Model ===================== Using the demo script, we tried out a few brief examples and show the results below The last two examples have identical meanings, the first is written in plain Modern Greek and the latter in the Cypriot Dialect. It is interesting to see the model performs similarly, even if some words and suffixes are out of vocabulary. Further Performance Improvements ================================ We would be happy to hear people have finetuned this model with more and diverse datasets, as we expect this to increase robustness. Within our research, improvements to consistency in punctuation prediction have shown to be possible with techniques such as sliding windows (during inference) for larger documents, weighted loss and ensembling of different models. Make sure to cite our work when you further our models with the aforementioned techniques. Author ====== This model is further work based on the winning submission at Shared Task 2 Sentence End and Punctuation Prediction in NLG Text at SwissText2021. The winning submission is entitled "UZH OnPoint at Swisstext-2021: Sentence End and Punctuation Prediction in NLG Text Through Ensembling of Different Transformers" in the Proceedings of the 6th SwissText Held Online. It is publicly available at URL If you use the model, please cite the following: @inproceedings{ST2021-OnPoint, title={UZH OnPoint at Swisstext-2021: Sentence End and Punctuation Prediction in NLG Text Through Ensembling of Different Transformers}, author={Michail, Andrianos and Wehrli, Silvan and Bucková, Terézia}, booktitle={Proceedings of the 1st Shared Task on Sentence End and Punctuation Prediction in NLG Text (SEPPNLG 2021) at SwissText 2021}, year={2021} } Model Finetuned and released by Andrianos Michail with resources provided by Department of Computational Linguistics, University of Zurich | Github: @Andrian0s | LinkedIn: amichail2
[]
[ "TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
token-classification
transformers
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.

Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS|Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS|Miscellaneous entity
B-PER|Beginning of a person's name right after another person's name
B-DERIV-PER|Beginning of a derivative that describes a relation to a person
I-PER|Person's name
B-ORG|Beginning of an organization right after another organization
I-ORG|Organization
B-LOC|Beginning of a location right after another location
I-LOC|Location
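A minimal inference sketch (not part of the original card) using the widget sentence above; the tag names are read from `model.config.id2label`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "Andrija/M-bert-NER"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

text = "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    pred_ids = model(**enc).logits.argmax(-1)[0].tolist()

# Print each word piece together with its predicted tag from the table above.
for token, pred in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist()), pred_ids):
    print(f"{token}\t{model.config.id2label[pred]}")
```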
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]}
Andrija/M-bert-NER
null
[ "transformers", "pytorch", "bert", "token-classification", "hr", "sr", "multilingual", "dataset:hr500k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #bert #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.
[]
[ "TAGS\n#transformers #pytorch #bert #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
null
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained('Andrija/RobertaFastBPE', bos_token="<s>", eos_token="</s>")

encoded = tokenizer('Stručnjaci te bolnice, predvođeni dr Alisom Lim')
# {'input_ids': [0, 47541, 34632, 603, 24817, 16, 27540, 6768, 2350, 2803, 3991, 2733, 81, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}

tokenizer.decode(encoded['input_ids'])
# <s>Stručnjaci te bolnice, predvođeni dr Alisom Lim</s>
{}
Andrija/RobertaFastBPE
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained('Andrija/RobertaFastBPE', bos_token="<s>", eos_token="</s>")

encoded = tokenizer('Stručnjaci te bolnice, predvođeni dr Alisom Lim')
# {'input_ids': [0, 47541, 34632, 603, 24817, 16, 27540, 6768, 2350, 2803, 3991, 2733, 81, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}

URL(encoded['input_ids'])
# <s>Stručnjaci te bolnice, predvođeni dr Alisom Lim</s>
[ "# {'input_ids': [0, 47541, 34632, 603, 24817, 16, 27540, 6768, 2350, 2803, 3991, 2733, 81, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\nURL(encoded['input_ids'])", "# &lt;s&gt;Stručnjaci te bolnice, predvođeni dr Alisom Lim&lt;/s&gt;" ]
[ "TAGS\n#region-us \n", "# {'input_ids': [0, 47541, 34632, 603, 24817, 16, 27540, 6768, 2350, 2803, 3991, 2733, 81, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\nURL(encoded['input_ids'])", "# &lt;s&gt;Stručnjaci te bolnice, predvođeni dr Alisom Lim&lt;/s&gt;" ]
fill-mask
transformers
# Transformer language model for Croatian and Serbian

Trained on 43GB of Croatian and Serbian text (9.6 mil. steps, 3 epochs) from the Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr datasets.

Validation examples used for perplexity: 1,620,487 sentences
Perplexity: 6.02
Start loss: 8.6
Final loss: 2.0

Thoughts: the model could be trained more; the training did not stagnate.

| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `Andrija/SRoBERTa-F` | 80M | Fifth | Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr (43 GB of text) |
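A quick way to try the checkpoint is the fill-mask pipeline, shown here with the widget sentence from the card (a minimal sketch):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Andrija/SRoBERTa-F")

# <mask> is the mask token of this RoBERTa-style tokenizer.
for pred in fill("Ovo je početak <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```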
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "tags": ["masked-lm"], "datasets": ["oscar", "srwac", "leipzig", "cc100", "hrwac"], "widget": [{"text": "Ovo je po\u010detak <mask>."}]}
Andrija/SRoBERTa-F
null
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "masked-lm", "hr", "sr", "multilingual", "dataset:oscar", "dataset:srwac", "dataset:leipzig", "dataset:cc100", "dataset:hrwac", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #tensorboard #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #dataset-cc100 #dataset-hrwac #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Transformer language model for Croatian and Serbian
===================================================

Trained on 43GB of Croatian and Serbian text (9.6 mil. steps, 3 epochs) from the Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr datasets.

Validation examples used for perplexity: 1,620,487 sentences
Perplexity: 6.02
Start loss: 8.6
Final loss: 2.0

Thoughts: the model could be trained more; the training did not stagnate.
[]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #dataset-cc100 #dataset-hrwac #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
token-classification
transformers
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.

Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS|Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS|Miscellaneous entity
B-PER|Beginning of a person's name right after another person's name
B-DERIV-PER|Beginning of a derivative that describes a relation to a person
I-PER|Person's name
B-ORG|Beginning of an organization right after another organization
I-ORG|Organization
B-LOC|Beginning of a location right after another location
I-LOC|Location
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]}
Andrija/SRoBERTa-L-NER
null
[ "transformers", "pytorch", "roberta", "token-classification", "hr", "sr", "multilingual", "dataset:hr500k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.
[]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
# Transformer language model for Croatian and Serbian Trained on 6GB datasets that contain Croatian and Serbian language for two epochs (500k steps). Leipzig, OSCAR and srWac datasets | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `Andrija/SRoBERTa-L` | 80M | Third | Leipzig Corpus, OSCAR and srWac (6 GB of text) |
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "tags": ["masked-lm"], "datasets": ["oscar", "srwac", "leipzig"], "widget": [{"text": "Ovo je po\u010detak <mask>."}]}
Andrija/SRoBERTa-L
null
[ "transformers", "pytorch", "roberta", "fill-mask", "masked-lm", "hr", "sr", "multilingual", "dataset:oscar", "dataset:srwac", "dataset:leipzig", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Transformer language model for Croatian and Serbian =================================================== Trained on 6GB datasets that contain Croatian and Serbian language for two epochs (500k steps). Leipzig, OSCAR and srWac datasets
[]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
token-classification
transformers
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.

Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS|Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS|Miscellaneous entity
B-PER|Beginning of a person's name right after another person's name
B-DERIV-PER|Beginning of a derivative that describes a relation to a person
I-PER|Person's name
B-ORG|Beginning of an organization right after another organization
I-ORG|Organization
B-LOC|Beginning of a location right after another location
I-LOC|Location
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]}
Andrija/SRoBERTa-NER
null
[ "transformers", "pytorch", "roberta", "token-classification", "hr", "sr", "multilingual", "dataset:hr500k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.
[]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
token-classification
transformers
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.

Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS|Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS|Miscellaneous entity
B-PER|Beginning of a person's name right after another person's name
B-DERIV-PER|Beginning of a derivative that describes a relation to a person
I-PER|Person's name
B-ORG|Beginning of an organization right after another organization
I-ORG|Organization
B-LOC|Beginning of a location right after another location
I-LOC|Location
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]}
Andrija/SRoBERTa-XL-NER
null
[ "transformers", "pytorch", "roberta", "token-classification", "hr", "sr", "multilingual", "dataset:hr500k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.
[]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
# Transformer language model for Croatian and Serbian

Trained on 28GB of Croatian and Serbian text for one epoch (3 mil. steps) from the Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr datasets.

| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `Andrija/SRoBERTa-XL` | 80M | Fourth | Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr (28 GB of text) |
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "tags": ["masked-lm"], "datasets": ["oscar", "srwac", "leipzig", "cc100", "hrwac"], "widget": [{"text": "Ovo je po\u010detak <mask>."}]}
Andrija/SRoBERTa-XL
null
[ "transformers", "pytorch", "roberta", "fill-mask", "masked-lm", "hr", "sr", "multilingual", "dataset:oscar", "dataset:srwac", "dataset:leipzig", "dataset:cc100", "dataset:hrwac", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #dataset-cc100 #dataset-hrwac #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Transformer language model for Croatian and Serbian
===================================================

Trained on 28GB of Croatian and Serbian text for one epoch (3 mil. steps) from the Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr datasets.
[]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #dataset-cc100 #dataset-hrwac #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
token-classification
transformers
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.

Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS|Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS|Miscellaneous entity
B-PER|Beginning of a person's name right after another person's name
B-DERIV-PER|Beginning of a derivative that describes a relation to a person
I-PER|Person's name
B-ORG|Beginning of an organization right after another organization
I-ORG|Organization
B-LOC|Beginning of a location right after another location
I-LOC|Location
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]}
Andrija/SRoBERTa-base-NER
null
[ "transformers", "pytorch", "roberta", "token-classification", "hr", "sr", "multilingual", "dataset:hr500k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.
[]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
# Transformer language model for Croatian and Serbian Trained on 3GB datasets that contain Croatian and Serbian language for two epochs. Leipzig and OSCAR datasets # Information of dataset | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `Andrija/SRoBERTa-base` | 80M | Second | Leipzig Corpus and OSCAR (3 GB of text) |
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "tags": ["masked-lm"], "datasets": ["oscar", "leipzig"], "widget": [{"text": "Ovo je po\u010detak <mask>."}]}
Andrija/SRoBERTa-base
null
[ "transformers", "pytorch", "roberta", "fill-mask", "masked-lm", "hr", "sr", "multilingual", "dataset:oscar", "dataset:leipzig", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Transformer language model for Croatian and Serbian =================================================== Trained on 3GB datasets that contain Croatian and Serbian language for two epochs. Leipzig and OSCAR datasets Information of dataset ======================
[]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
# Transformer language model for Croatian and Serbian

Trained on a 0.7GB Croatian and Serbian dataset for one epoch. The data comes from the Leipzig Corpora.

# Information of dataset

| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `Andrija/SRoBERTa` | 120M | First | Leipzig Corpus (0.7 GB of text) |

# How to use in code

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Andrija/SRoBERTa")
model = AutoModelForMaskedLM.from_pretrained("Andrija/SRoBERTa")
```
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "tags": ["masked-lm"], "datasets": ["leipzig"], "widget": [{"text": "Gde je <mask>."}]}
Andrija/SRoBERTa
null
[ "transformers", "pytorch", "roberta", "fill-mask", "masked-lm", "hr", "sr", "multilingual", "dataset:leipzig", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Transformer language model for Croatian and Serbian
===================================================

Trained on a 0.7GB Croatian and Serbian dataset for one epoch. The data comes from the Leipzig Corpora.

Information of dataset
======================

How to use in code
==================
[]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
null
C:\Users\andry\Desktop\Выжигание 24-12-2021.jpg
{}
Andry/1111
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
C:\Users\andry\Desktop\Выжигание URL
[]
[ "TAGS\n#region-us \n" ]
null
null
Now we only upload two models for creating demos for image and video classification. More models and code can be found in our github repo: [UniFormer](https://github.com/Sense-X/UniFormer).
{"license": "mit"}
Andy1621/uniformer
null
[ "license:mit", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #license-mit #has_space #region-us
Now we only upload two models for creating demos for image and video classification. More models and code can be found in our github repo: UniFormer.
[]
[ "TAGS\n#license-mit #has_space #region-us \n" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0609 - Precision: 0.9275 - Recall: 0.9365 - F1: 0.9320 - Accuracy: 0.9840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2527 | 1.0 | 878 | 0.0706 | 0.9120 | 0.9181 | 0.9150 | 0.9803 | | 0.0517 | 2.0 | 1756 | 0.0603 | 0.9174 | 0.9349 | 0.9261 | 0.9830 | | 0.031 | 3.0 | 2634 | 0.0609 | 0.9275 | 0.9365 | 0.9320 | 0.9840 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.984018301110458}}]}]}
Ann2020/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-ner ===================================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0609 * Precision: 0.9275 * Recall: 0.9365 * F1: 0.9320 * Accuracy: 0.9840 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.9.2 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
feature-extraction
transformers
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details please see https://openreview.net/forum?id=cGB7CMFtrSx

This model is based on bert-base-uncased and is pre-trained for text input.
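A minimal sketch of extracting contextual embeddings from the checkpoint; the mean-pooling step is an illustrative choice, not something prescribed by the card or the paper:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Anonymous/ReasonBERT-BERT")
model = AutoModel.from_pretrained("Anonymous/ReasonBERT-BERT")

inputs = tokenizer("Which team won the 2018 World Cup?", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)

# Mean-pool over non-padding tokens to get a single sentence vector.
mask = inputs["attention_mask"].unsqueeze(-1)
sentence_vec = (hidden * mask).sum(1) / mask.sum(1)
print(sentence_vec.shape)
```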
{}
Anonymous/ReasonBERT-BERT
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details, please see URL. This model is based on bert-base-uncased and is pre-trained for text input.
[]
[ "TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us \n" ]
feature-extraction
transformers
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details please see https://openreview.net/forum?id=cGB7CMFtrSx

This model is based on roberta-base and is pre-trained for text input.
{}
Anonymous/ReasonBERT-RoBERTa
null
[ "transformers", "pytorch", "roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #roberta #feature-extraction #endpoints_compatible #region-us
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details, please see URL. This model is based on roberta-base and is pre-trained for text input.
[]
[ "TAGS\n#transformers #pytorch #roberta #feature-extraction #endpoints_compatible #region-us \n" ]
feature-extraction
transformers
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details please see https://openreview.net/forum?id=cGB7CMFtrSx

This model is based on tapas-base (no_reset) and is pre-trained for table input.
{}
Anonymous/ReasonBERT-TAPAS
null
[ "transformers", "pytorch", "tapas", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tapas #feature-extraction #endpoints_compatible #region-us
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details, please see URL. This model is based on tapas-base (no_reset) and is pre-trained for table input.
[]
[ "TAGS\n#transformers #pytorch #tapas #feature-extraction #endpoints_compatible #region-us \n" ]
text2text-generation
transformers
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 20384195 - CO2 Emissions (in grams): 4.214012748213151 ## Validation Metrics - Loss: 1.0120062828063965 - Rouge1: 41.1808 - Rouge2: 26.2564 - RougeL: 31.3106 - RougeLsum: 38.9991 - Gen Len: 58.45 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/Anorak/autonlp-Niravana-test2-20384195 ```
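Besides the hosted Inference API call above, the checkpoint can also be run locally; a minimal sketch with the summarization pipeline:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Anorak/nirvana")

article = "Paste the document you want to summarize here ..."
print(summarizer(article, max_length=64, min_length=16, do_sample=False)[0]["summary_text"])
```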
{"language": "unk", "tags": "autonlp", "datasets": ["Anorak/autonlp-data-Niravana-test2"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 4.214012748213151}
Anorak/nirvana
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "unk", "dataset:Anorak/autonlp-data-Niravana-test2", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "unk" ]
TAGS #transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-Anorak/autonlp-data-Niravana-test2 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 20384195 - CO2 Emissions (in grams): 4.214012748213151 ## Validation Metrics - Loss: 1.0120062828063965 - Rouge1: 41.1808 - Rouge2: 26.2564 - RougeL: 31.3106 - RougeLsum: 38.9991 - Gen Len: 58.45 ## Usage You can use cURL to access this model:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 20384195\n- CO2 Emissions (in grams): 4.214012748213151", "## Validation Metrics\n\n- Loss: 1.0120062828063965\n- Rouge1: 41.1808\n- Rouge2: 26.2564\n- RougeL: 31.3106\n- RougeLsum: 38.9991\n- Gen Len: 58.45", "## Usage\n\nYou can use cURL to access this model:" ]
[ "TAGS\n#transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-Anorak/autonlp-data-Niravana-test2 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 20384195\n- CO2 Emissions (in grams): 4.214012748213151", "## Validation Metrics\n\n- Loss: 1.0120062828063965\n- Rouge1: 41.1808\n- Rouge2: 26.2564\n- RougeL: 31.3106\n- RougeLsum: 38.9991\n- Gen Len: 58.45", "## Usage\n\nYou can use cURL to access this model:" ]
text-generation
transformers
# Rick Sanchez DialoGPT Model
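The card gives no usage code; a minimal chat loop following the usual DialoGPT pattern (a sketch, the generation settings are illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AnthonyNelson/DialoGPT-small-ricksanchez")
model = AutoModelForCausalLM.from_pretrained("AnthonyNelson/DialoGPT-small-ricksanchez")

chat_history_ids = None
for step in range(3):
    # Append the end-of-sequence token so the model knows the user turn is over.
    new_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Bot:", reply)
```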
{"tags": ["conversational"]}
AnthonyNelson/DialoGPT-small-ricksanchez
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick Sanchez DialoGPT Model
[ "# Rick Sanchez DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick Sanchez DialoGPT Model" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Anthos23/distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0662 - Validation Loss: 0.2623 - Train Accuracy: 0.9083 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 21045, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.2101 | 0.2373 | 0.9083 | 0 | | 0.1065 | 0.2645 | 0.9060 | 1 | | 0.0662 | 0.2623 | 0.9083 | 2 | ### Framework versions - Transformers 4.17.0.dev0 - TensorFlow 2.5.0 - Datasets 1.18.3 - Tokenizers 0.11.0
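A minimal TensorFlow inference sketch (not part of the original card); the label order is an assumption, so check `model.config.id2label`:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_name = "Anthos23/distilbert-base-uncased-finetuned-sst2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer(["I loved this film!", "What a waste of time."], padding=True, return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1).numpy()
print(probs)  # one row of class probabilities per sentence
```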
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Anthos23/distilbert-base-uncased-finetuned-sst2", "results": []}]}
Anthos23/distilbert-base-uncased-finetuned-sst2
null
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #tensorboard #distilbert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Anthos23/distilbert-base-uncased-finetuned-sst2 =============================================== This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.0662 * Validation Loss: 0.2623 * Train Accuracy: 0.9083 * Epoch: 2 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'learning\_rate': {'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 21045, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.17.0.dev0 * TensorFlow 2.5.0 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 21045, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* TensorFlow 2.5.0\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #tf #tensorboard #distilbert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 21045, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* TensorFlow 2.5.0\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text-generation
transformers
# Jordan DialoGPT Model
{"tags": ["conversational"]}
Apisate/DialoGPT-small-jordan
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Jordan DialoGPT Model
[ "# Jordan DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Jordan DialoGPT Model" ]
text2text-generation
transformers
The idea is to build a model that takes keywords as inputs and generates sentences as outputs.

Potential use cases include:
- Marketing
- Search Engine Optimization
- Topic generation, etc.
- Fine-tuning of topic modeling models
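A minimal generation sketch; the keyword input format is an assumption (the keytotext library normally prepares the input string for you):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Apoorva/k2t-test")
model = AutoModelForSeq2SeqLM.from_pretrained("Apoorva/k2t-test")

# Joining keywords with " | " is an assumed input format.
inputs = tokenizer("marketing | SEO | topic", return_tensors="pt")
outputs = model.generate(**inputs, max_length=48, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```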
{"language": "en", "tags": ["keytotext", "k2t", "Keywords to Sentences"], "thumbnail": "Keywords to Sentences"}
Apoorva/k2t-test
null
[ "transformers", "pytorch", "t5", "text2text-generation", "keytotext", "k2t", "Keywords to Sentences", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #t5 #text2text-generation #keytotext #k2t #Keywords to Sentences #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
The idea is to build a model that takes keywords as inputs and generates sentences as outputs. Potential use cases include: - Marketing - Search Engine Optimization - Topic generation, etc. - Fine-tuning of topic modeling models
[]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #keytotext #k2t #Keywords to Sentences #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-ner This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0700 - Precision: 0.9301 - Recall: 0.9376 - F1: 0.9338 - Accuracy: 0.9852 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.096 | 1.0 | 1756 | 0.0752 | 0.9163 | 0.9201 | 0.9182 | 0.9811 | | 0.0481 | 2.0 | 3512 | 0.0761 | 0.9169 | 0.9293 | 0.9231 | 0.9830 | | 0.0251 | 3.0 | 5268 | 0.0700 | 0.9301 | 0.9376 | 0.9338 | 0.9852 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
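The card lists entity-level metrics but no usage example; a minimal inference sketch with the token-classification pipeline, assuming the checkpoint is available under the id in this record:

```
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ArBert/albert-base-v2-finetuned-ner",  # id taken from this record
    aggregation_strategy="simple",                # merge word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```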
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "albert-base-v2-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9301181102362205, "name": "Precision"}, {"type": "recall", "value": 0.9376033513394334, "name": "Recall"}, {"type": "f1", "value": 0.9338457315399397, "name": "F1"}, {"type": "accuracy", "value": 0.9851613086447802, "name": "Accuracy"}]}]}]}
ArBert/albert-base-v2-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "albert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #albert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
albert-base-v2-finetuned-ner ============================ This model is a fine-tuned version of albert-base-v2 on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0700 * Precision: 0.9301 * Recall: 0.9376 * F1: 0.9338 * Accuracy: 0.9852 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.14.1 * Pytorch 1.10.1 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.1\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.1\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-ner-kmeans This model is a fine-tuned version of [ArBert/bert-base-uncased-finetuned-ner](https://huggingface.co/ArBert/bert-base-uncased-finetuned-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1169 - Precision: 0.9084 - Recall: 0.9245 - F1: 0.9164 - Accuracy: 0.9792 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.036 | 1.0 | 1123 | 0.1010 | 0.9086 | 0.9117 | 0.9101 | 0.9779 | | 0.0214 | 2.0 | 2246 | 0.1094 | 0.9033 | 0.9199 | 0.9115 | 0.9784 | | 0.014 | 3.0 | 3369 | 0.1169 | 0.9084 | 0.9245 | 0.9164 | 0.9792 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-ner-kmeans", "results": []}]}
ArBert/bert-base-uncased-finetuned-ner-kmeans
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-finetuned-ner-kmeans ====================================== This model is a fine-tuned version of ArBert/bert-base-uncased-finetuned-ner on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1169 * Precision: 0.9084 * Recall: 0.9245 * F1: 0.9164 * Accuracy: 0.9792 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0905 - Precision: 0.9068 - Recall: 0.9200 - F1: 0.9133 - Accuracy: 0.9787 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1266 | 1.0 | 1123 | 0.0952 | 0.8939 | 0.8869 | 0.8904 | 0.9742 | | 0.0741 | 2.0 | 2246 | 0.0866 | 0.8936 | 0.9247 | 0.9089 | 0.9774 | | 0.0496 | 3.0 | 3369 | 0.0905 | 0.9068 | 0.9200 | 0.9133 | 0.9787 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
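For comparison with the pipeline-based sketch earlier, the same kind of checkpoint can be queried by hand; a minimal sketch, assuming the id in this record resolves to a standard BERT token-classification head:

```
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "ArBert/bert-base-uncased-finetuned-ner"  # id taken from this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("angela merkel visited paris last week", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1)[0]
# print each word piece with its predicted BIO tag
for token, label_id in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), predictions):
    print(token, model.config.id2label[int(label_id)])
```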
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-ner", "results": []}]}
ArBert/bert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-finetuned-ner =============================== This model is a fine-tuned version of bert-base-uncased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0905 * Precision: 0.9068 * Recall: 0.9200 * F1: 0.9133 * Accuracy: 0.9787 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-ner-agglo-twitter This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6645 - Precision: 0.6885 - Recall: 0.7665 - F1: 0.7254 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 245 | 0.2820 | 0.6027 | 0.7543 | 0.6700 | | No log | 2.0 | 490 | 0.2744 | 0.6308 | 0.7864 | 0.7000 | | 0.2301 | 3.0 | 735 | 0.2788 | 0.6433 | 0.7637 | 0.6984 | | 0.2301 | 4.0 | 980 | 0.3255 | 0.6834 | 0.7221 | 0.7022 | | 0.1153 | 5.0 | 1225 | 0.3453 | 0.6686 | 0.7439 | 0.7043 | | 0.1153 | 6.0 | 1470 | 0.3988 | 0.6797 | 0.7420 | 0.7094 | | 0.0617 | 7.0 | 1715 | 0.4711 | 0.6702 | 0.7259 | 0.6969 | | 0.0617 | 8.0 | 1960 | 0.4904 | 0.6904 | 0.7505 | 0.7192 | | 0.0328 | 9.0 | 2205 | 0.5088 | 0.6591 | 0.7713 | 0.7108 | | 0.0328 | 10.0 | 2450 | 0.5709 | 0.6468 | 0.7788 | 0.7067 | | 0.019 | 11.0 | 2695 | 0.5570 | 0.6642 | 0.7533 | 0.7059 | | 0.019 | 12.0 | 2940 | 0.5574 | 0.6899 | 0.7656 | 0.7258 | | 0.0131 | 13.0 | 3185 | 0.5858 | 0.6952 | 0.7609 | 0.7265 | | 0.0131 | 14.0 | 3430 | 0.6239 | 0.6556 | 0.7826 | 0.7135 | | 0.0074 | 15.0 | 3675 | 0.5931 | 0.6825 | 0.7599 | 0.7191 | | 0.0074 | 16.0 | 3920 | 0.6364 | 0.6785 | 0.7580 | 0.7161 | | 0.005 | 17.0 | 4165 | 0.6437 | 0.6855 | 0.7580 | 0.7199 | | 0.005 | 18.0 | 4410 | 0.6610 | 0.6779 | 0.7599 | 0.7166 | | 0.0029 | 19.0 | 4655 | 0.6625 | 0.6853 | 0.7656 | 0.7232 | | 0.0029 | 20.0 | 4900 | 0.6645 | 0.6885 | 0.7665 | 0.7254 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
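The hyperparameter list above maps almost one-to-one onto transformers TrainingArguments; a sketch of that mapping is shown below. The output directory is a placeholder, and the Adam betas/epsilon listed in the card are the Trainer defaults, so they need no explicit flag:

```
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-finetuned-ner-agglo-twitter",  # placeholder output path
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```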
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1"], "model-index": [{"name": "roberta-base-finetuned-ner-agglo-twitter", "results": []}]}
ArBert/roberta-base-finetuned-ner-agglo-twitter
null
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
roberta-base-finetuned-ner-agglo-twitter ======================================== This model is a fine-tuned version of ArBert/roberta-base-finetuned-ner on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.6645 * Precision: 0.6885 * Recall: 0.7665 * F1: 0.7254 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 20 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-ner-kmeans-twitter This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6645 - Precision: 0.6885 - Recall: 0.7665 - F1: 0.7254 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 245 | 0.2820 | 0.6027 | 0.7543 | 0.6700 | | No log | 2.0 | 490 | 0.2744 | 0.6308 | 0.7864 | 0.7000 | | 0.2301 | 3.0 | 735 | 0.2788 | 0.6433 | 0.7637 | 0.6984 | | 0.2301 | 4.0 | 980 | 0.3255 | 0.6834 | 0.7221 | 0.7022 | | 0.1153 | 5.0 | 1225 | 0.3453 | 0.6686 | 0.7439 | 0.7043 | | 0.1153 | 6.0 | 1470 | 0.3988 | 0.6797 | 0.7420 | 0.7094 | | 0.0617 | 7.0 | 1715 | 0.4711 | 0.6702 | 0.7259 | 0.6969 | | 0.0617 | 8.0 | 1960 | 0.4904 | 0.6904 | 0.7505 | 0.7192 | | 0.0328 | 9.0 | 2205 | 0.5088 | 0.6591 | 0.7713 | 0.7108 | | 0.0328 | 10.0 | 2450 | 0.5709 | 0.6468 | 0.7788 | 0.7067 | | 0.019 | 11.0 | 2695 | 0.5570 | 0.6642 | 0.7533 | 0.7059 | | 0.019 | 12.0 | 2940 | 0.5574 | 0.6899 | 0.7656 | 0.7258 | | 0.0131 | 13.0 | 3185 | 0.5858 | 0.6952 | 0.7609 | 0.7265 | | 0.0131 | 14.0 | 3430 | 0.6239 | 0.6556 | 0.7826 | 0.7135 | | 0.0074 | 15.0 | 3675 | 0.5931 | 0.6825 | 0.7599 | 0.7191 | | 0.0074 | 16.0 | 3920 | 0.6364 | 0.6785 | 0.7580 | 0.7161 | | 0.005 | 17.0 | 4165 | 0.6437 | 0.6855 | 0.7580 | 0.7199 | | 0.005 | 18.0 | 4410 | 0.6610 | 0.6779 | 0.7599 | 0.7166 | | 0.0029 | 19.0 | 4655 | 0.6625 | 0.6853 | 0.7656 | 0.7232 | | 0.0029 | 20.0 | 4900 | 0.6645 | 0.6885 | 0.7665 | 0.7254 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1"], "model-index": [{"name": "roberta-base-finetuned-ner-kmeans-twitter", "results": []}]}
ArBert/roberta-base-finetuned-ner-kmeans-twitter
null
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
roberta-base-finetuned-ner-kmeans-twitter ========================================= This model is a fine-tuned version of ArBert/roberta-base-finetuned-ner on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.6645 * Precision: 0.6885 * Recall: 0.7665 * F1: 0.7254 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 20 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-ner-kmeans This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0592 - Precision: 0.9559 - Recall: 0.9615 - F1: 0.9587 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.0248 | 1.0 | 878 | 0.0609 | 0.9507 | 0.9561 | 0.9534 | | 0.0163 | 2.0 | 1756 | 0.0640 | 0.9515 | 0.9578 | 0.9546 | | 0.0089 | 3.0 | 2634 | 0.0592 | 0.9559 | 0.9615 | 0.9587 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1"], "model-index": [{"name": "roberta-base-finetuned-ner-kmeans", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.955868544600939, "name": "Precision"}, {"type": "recall", "value": 0.9614658103513412, "name": "Recall"}, {"type": "f1", "value": 0.9586590074394953, "name": "F1"}]}]}]}
ArBert/roberta-base-finetuned-ner-kmeans
null
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #dataset-conll2003 #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
roberta-base-finetuned-ner-kmeans ================================= This model is a fine-tuned version of ArBert/roberta-base-finetuned-ner on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0592 * Precision: 0.9559 * Recall: 0.9615 * F1: 0.9587 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #dataset-conll2003 #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-ner This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0738 - Precision: 0.9232 - Recall: 0.9437 - F1: 0.9333 - Accuracy: 0.9825 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1397 | 1.0 | 1368 | 0.0957 | 0.9141 | 0.9048 | 0.9094 | 0.9753 | | 0.0793 | 2.0 | 2736 | 0.0728 | 0.9274 | 0.9324 | 0.9299 | 0.9811 | | 0.0499 | 3.0 | 4104 | 0.0738 | 0.9232 | 0.9437 | 0.9333 | 0.9825 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
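The precision/recall/F1 figures in these NER cards are entity-level scores; below is a small worked example of how such numbers are computed, using the seqeval package (an assumption — the card does not name the scorer that produced its values):

```
from seqeval.metrics import f1_score, precision_score, recall_score

# One sentence with two gold entities (a PER span and a LOC token);
# the prediction recovers only the PER span.
y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O"]]

print(precision_score(y_true, y_pred))  # 1.0  (1 predicted entity, 1 correct)
print(recall_score(y_true, y_pred))     # 0.5  (1 of 2 gold entities found)
print(f1_score(y_true, y_pred))         # ~0.667
```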
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "roberta-base-finetuned-ner", "results": []}]}
ArBert/roberta-base-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
roberta-base-finetuned-ner ========================== This model is a fine-tuned version of roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0738 * Precision: 0.9232 * Recall: 0.9437 * F1: 0.9333 * Accuracy: 0.9825 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Stark DialoGPT Model
{"tags": ["conversational"]}
ArJakusz/DialoGPT-small-stark
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Stark DialoGPT Model
[ "# Stark DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Stark DialoGPT Model" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Aran/DialoGPT-medium-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Aran/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text-generation
transformers
# Rick DialoGPT Model
{"tags": ["conversational"]}
Arcktosh/DialoGPT-small-rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick DialoGPT Model
[ "# Rick DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick DialoGPT Model" ]
text-generation
transformers
# Cultured Kumiko DialoGPT Model
{"tags": ["conversational"]}
AriakimTaiyo/DialoGPT-cultured-Kumiko
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Cultured Kumiko DialoGPT Model
[ "# Cultured Kumiko DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Cultured Kumiko DialoGPT Model" ]
text-generation
null
# Medium Kumiko DialoGPT Model
{"tags": ["conversational"]}
AriakimTaiyo/DialoGPT-medium-Kumiko
null
[ "conversational", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #conversational #region-us
# Medium Kumiko DialoGPT Model
[ "# Medium Kumiko DialoGPT Model" ]
[ "TAGS\n#conversational #region-us \n", "# Medium Kumiko DialoGPT Model" ]
text-generation
transformers
# Revised Kumiko DialoGPT Model
{"tags": ["conversational"]}
AriakimTaiyo/DialoGPT-revised-Kumiko
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Revised Kumiko DialoGPT Model
[ "# Revised Kumiko DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Revised Kumiko DialoGPT Model" ]
text-generation
transformers
# Kumiko DialoGPT Model
{"tags": ["conversational"]}
AriakimTaiyo/DialoGPT-small-Kumiko
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Kumiko DialoGPT Model
[ "# Kumiko DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Kumiko DialoGPT Model" ]
text-generation
transformers
# Rikka DialoGPT Model
{"tags": ["conversational"]}
AriakimTaiyo/DialoGPT-small-Rikka
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rikka DialoGPT Model
[ "# Rikka DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rikka DialoGPT Model" ]
null
null
a
{}
AriakimTaiyo/kumiko
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
a
[]
[ "TAGS\n#region-us \n" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-hausa2-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.2032 - Wer: 0.7237 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1683 | 12.49 | 400 | 1.0279 | 0.7211 | | 0.0995 | 24.98 | 800 | 1.2032 | 0.7237 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
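A minimal transcription sketch for this kind of fine-tuned XLSR checkpoint; the audio path is a placeholder, and it is assumed the processor files were pushed alongside the model weights:

```
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Arnold/wav2vec2-hausa2-demo-colab"  # id taken from this record
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# XLSR models expect 16 kHz mono audio; "speech.wav" is a placeholder path
speech, _ = librosa.load("speech.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```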
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-hausa2-demo-colab", "results": []}]}
Arnold/wav2vec2-hausa2-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-hausa2-demo-colab ========================== This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common\_voice dataset. It achieves the following results on the evaluation set: * Loss: 1.2032 * Wer: 0.7237 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-hausa2-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2993 - Wer: 0.4826 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.6e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 13 - gradient_accumulation_steps: 3 - total_train_batch_size: 36 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.1549 | 12.5 | 400 | 2.7289 | 1.0 | | 2.0566 | 25.0 | 800 | 0.4582 | 0.6768 | | 0.4423 | 37.5 | 1200 | 0.3037 | 0.5138 | | 0.2991 | 50.0 | 1600 | 0.2993 | 0.4826 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
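The Wer column reported in these ASR cards is word error rate; a tiny worked example using the jiwer package (an assumption — the card does not state which scorer produced its numbers), with a purely illustrative reference/hypothesis pair:

```
import jiwer

reference = "ina son koyon harshen hausa"    # illustrative 5-word reference
hypothesis = "ina so koyon harshen hausa"    # one substituted word
print(jiwer.wer(reference, hypothesis))      # 1 error / 5 words = 0.2
```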
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-hausa2-demo-colab", "results": []}]}
Arnold/wav2vec2-large-xlsr-hausa2-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-large-xlsr-hausa2-demo-colab ===================================== This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common\_voice dataset. It achieves the following results on the evaluation set: * Loss: 0.2993 * Wer: 0.4826 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 9.6e-05 * train\_batch\_size: 12 * eval\_batch\_size: 8 * seed: 13 * gradient\_accumulation\_steps: 3 * total\_train\_batch\_size: 36 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 400 * num\_epochs: 50 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 9.6e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 8\n* seed: 13\n* gradient\\_accumulation\\_steps: 3\n* total\\_train\\_batch\\_size: 36\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 9.6e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 8\n* seed: 13\n* gradient\\_accumulation\\_steps: 3\n* total\\_train\\_batch\\_size: 36\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2295 - Accuracy: 0.92 - F1: 0.9202 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8187 | 1.0 | 250 | 0.3137 | 0.902 | 0.8983 | | 0.2514 | 2.0 | 500 | 0.2295 | 0.92 | 0.9202 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
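A minimal inference sketch for this emotion classifier, assuming the checkpoint is available under the id in this record:

```
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Aron/distilbert-base-uncased-finetuned-emotion",  # id taken from this record
)
print(classifier("I can't wait to see the results of this experiment!"))
# e.g. [{'label': 'joy', 'score': ...}] — the emotion dataset has six labels
```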
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.92, "name": "Accuracy"}, {"type": "f1", "value": 0.9201604193183255, "name": "F1"}]}]}]}
Aron/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-emotion ========================================= This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: * Loss: 0.2295 * Accuracy: 0.92 * F1: 0.9202 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Okarin Bot
{"tags": ["conversational"]}
ArtemisZealot/DialoGTP-small-Qkarin
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Okarin Bot
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Aruden/DialoGPT-medium-harrypotterall
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text2text-generation
transformers
``` from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("ArvinZhuang/BiTAG-t5-large") tokenizer = AutoTokenizer.from_pretrained("ArvinZhuang/BiTAG-t5-large") text = "abstract: [your abstract]" # use 'title:' as the prefix for title_to_abs task. input_ids = tokenizer.encode(text, return_tensors='pt') outputs = model.generate( input_ids, do_sample=True, max_length=500, top_p=0.9, top_k=20, temperature=1, num_return_sequences=10, ) print("Output:\n" + 100 * '-') for i, output in enumerate(outputs): print("{}: {}".format(i+1, tokenizer.decode(output, skip_special_tokens=True))) ``` GitHub: https://github.com/ArvinZhuang/BiTAG
{"inference": {"parameters": {"do_sample": true, "max_length": 500, "top_p": 0.9, "top_k": 20, "temperature": 1, "num_return_sequences": 10}}, "widget": [{"text": "abstract: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).", "example_title": "BERT abstract"}]}
ielabgroup/BiTAG-t5-large
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GitHub: URL
[]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
# Model Trained Using AutoNLP - Model: Google's Pegasus (https://huggingface.co/google/pegasus-xsum) - Problem type: Summarization - Model ID: 34558227 - CO2 Emissions (in grams): 137.60574081887984 - Spaces: https://huggingface.co/spaces/TitleGenerators/ArxivTitleGenerator - Dataset: arXiv Dataset (https://www.kaggle.com/Cornell-University/arxiv) - Data subset used: https://huggingface.co/datasets/AryanLala/autonlp-data-Scientific_Title_Generator ## Validation Metrics - Loss: 2.578599214553833 - Rouge1: 44.8482 - Rouge2: 24.4052 - RougeL: 40.1716 - RougeLsum: 40.1396 - Gen Len: 11.4675 ## Social - LinkedIn: https://www.linkedin.com/in/aryanlala/ - Twitter: https://twitter.com/AryanLala20 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/AryanLala/autonlp-Scientific_Title_Generator-34558227 ```
{"language": "en", "tags": "autonlp", "datasets": ["AryanLala/autonlp-data-Scientific_Title_Generator"], "widget": [{"text": "The scale, variety, and quantity of publicly-available NLP datasets has grown rapidly as researchers propose new tasks, larger models, and novel benchmarks. Datasets is a community library for contemporary NLP designed to support this ecosystem. Datasets aims to standardize end-user interfaces, versioning, and documentation, while providing a lightweight front-end that behaves similarly for small datasets as for internet-scale corpora. The design of the library incorporates a distributed, community-driven approach to adding datasets and documenting usage. After a year of development, the library now includes more than 650 unique datasets, has more than 250 contributors, and has helped support a variety of novel cross-dataset research projects and shared tasks. The library is available at https://github.com/huggingface/datasets."}], "co2_eq_emissions": 137.60574081887984}
AryanLala/autonlp-Scientific_Title_Generator-34558227
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "en", "dataset:AryanLala/autonlp-data-Scientific_Title_Generator", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #pegasus #text2text-generation #autonlp #en #dataset-AryanLala/autonlp-data-Scientific_Title_Generator #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us
# Model Trained Using AutoNLP - Model: Google's Pegasus (URL - Problem type: Summarization - Model ID: 34558227 - CO2 Emissions (in grams): 137.60574081887984 - Spaces: URL - Dataset: arXiv Dataset (URL - Data subset used: URL ## Validation Metrics - Loss: 2.578599214553833 - Rouge1: 44.8482 - Rouge2: 24.4052 - RougeL: 40.1716 - RougeLsum: 40.1396 - Gen Len: 11.4675 ## Social - LinkedIn: URL - Twitter: URL ## Usage You can use cURL to access this model:
[ "# Model Trained Using AutoNLP\n- Model: Google's Pegasus (URL\n- Problem type: Summarization\n- Model ID: 34558227\n- CO2 Emissions (in grams): 137.60574081887984\n- Spaces: URL\n- Dataset: arXiv Dataset (URL\n- Data subset used: URL", "## Validation Metrics\n\n- Loss: 2.578599214553833\n- Rouge1: 44.8482\n- Rouge2: 24.4052\n- RougeL: 40.1716\n- RougeLsum: 40.1396\n- Gen Len: 11.4675", "## Social\n- LinkedIn: URL\n- Twitter: URL", "## Usage\n\nYou can use cURL to access this model:" ]
[ "TAGS\n#transformers #pytorch #pegasus #text2text-generation #autonlp #en #dataset-AryanLala/autonlp-data-Scientific_Title_Generator #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Model Trained Using AutoNLP\n- Model: Google's Pegasus (URL\n- Problem type: Summarization\n- Model ID: 34558227\n- CO2 Emissions (in grams): 137.60574081887984\n- Spaces: URL\n- Dataset: arXiv Dataset (URL\n- Data subset used: URL", "## Validation Metrics\n\n- Loss: 2.578599214553833\n- Rouge1: 44.8482\n- Rouge2: 24.4052\n- RougeL: 40.1716\n- RougeLsum: 40.1396\n- Gen Len: 11.4675", "## Social\n- LinkedIn: URL\n- Twitter: URL", "## Usage\n\nYou can use cURL to access this model:" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-parsbert-uncased-finetuned

This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2045

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5596        | 1.0   | 515  | 3.2097          |

### Framework versions

- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
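Since the generated card has no usage section, a minimal fill-mask sketch could look like the following; the model id is the one this card is published under, and the Persian `[MASK]` sentence is only an illustrative example:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for masked-token prediction.
# Whether the checkpoint is publicly downloadable is an assumption.
fill_mask = pipeline("fill-mask", model="Ashkanmh/bert-base-parsbert-uncased-finetuned")

# ParsBERT uses the standard BERT [MASK] token.
for prediction in fill_mask("تهران پایتخت [MASK] است."):
    print(prediction["token_str"], round(prediction["score"], 4))
```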
{"tags": ["generated_from_trainer"]}
Ashkanmh/bert-base-parsbert-uncased-finetuned
null
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
bert-base-parsbert-uncased-finetuned ==================================== This model is a fine-tuned version of HooshvareLab/bert-base-parsbert-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 3.2045 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.10.0 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
text-generation
transformers
A discord chatbot trained on the whole LiS script to simulate character speech
{"tags": ["conversational"]}
Aspect11/DialoGPT-Medium-LiSBot
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
A discord chatbot trained on the whole LiS script to simulate character speech
[]
[ "TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# RinTohsaka bot
{"tags": ["conversational"]}
Asuramaru/DialoGPT-small-rintohsaka
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# RinTohsaka bot
[ "# RinTohsaka bot" ]
[ "TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# RinTohsaka bot" ]
text-generation
transformers
GPT-Glacier, a GPT-Neo 125M model finetuned on the Glacier2 Modding Discord server.
{}
Atampy26/GPT-Glacier
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us
GPT-Glacier, a GPT-Neo 125M model finetuned on the Glacier2 Modding Discord server.
[]
[ "TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Michael Scott DialoGPT Model
{"tags": ["conversational"]}
Atchuth/DialoGPT-small-MichaelBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Michael Scott DialoGPT Model
[ "# Michael Scott DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Michael Scott DialoGPT Model" ]
null
null
Placeholder
{}
Atlasky/Turkish-Negator
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
Placeholder
[]
[ "TAGS\n#region-us \n" ]
text-generation
transformers
#MyAwesomeModel
{"tags": ["conversational"]}
Augustvember/WOKKAWOKKA
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#MyAwesomeModel
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
#MyAwesomeModel
{"tags": ["conversational"]}
Augustvember/test
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#MyAwesomeModel
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
#MyAwesomeModel
{"tags": ["conversational"]}
Augustvember/wokka5
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#MyAwesomeModel
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
#MyAwesomeModel
{"tags": ["conversational"]}
Augustvember/wokkabottest2
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#MyAwesomeModel
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
null
https://www.geogebra.org/m/bbuczchu https://www.geogebra.org/m/xwyasqje https://www.geogebra.org/m/mx2cqkwr https://www.geogebra.org/m/tkqqqthm https://www.geogebra.org/m/asdaf9mj https://www.geogebra.org/m/ywuaj7p5 https://www.geogebra.org/m/jkfkayj3 https://www.geogebra.org/m/hptnn7ar https://www.geogebra.org/m/de9cwmrf https://www.geogebra.org/m/yjc5hdep https://www.geogebra.org/m/nm8r56w5 https://www.geogebra.org/m/j7wfcpxj
{}
Aurora/asdawd
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
URL URL URL URL URL URL URL URL URL URL URL URL
[]
[ "TAGS\n#region-us \n" ]
null
null
https://community.afpglobal.org/network/members/profile?UserKey=b0b38adc-86c7-4d30-85c6-ac7d15c5eeb0 https://community.afpglobal.org/network/members/profile?UserKey=f4ddef89-b508-4695-9d1e-3d4d1a583279 https://community.afpglobal.org/network/members/profile?UserKey=36081479-5e7b-41ba-8370-ecf72989107a https://community.afpglobal.org/network/members/profile?UserKey=e1a88332-be7f-4997-af4e-9fcb7bb366da https://community.afpglobal.org/network/members/profile?UserKey=4738b405-2017-4025-9e5f-eadbf7674840 https://community.afpglobal.org/network/members/profile?UserKey=eb96d91c-31ae-46e1-8297-a3c8551f2e6a https://u.mpi.org/network/members/profile?UserKey=9867e2d9-d22a-4dab-8bcf-3da5c2f30745 https://u.mpi.org/network/members/profile?UserKey=5af232f2-a66e-438f-a5ab-9768321f791d https://community.afpglobal.org/network/members/profile?UserKey=481305df-48ea-4c50-bca4-a82008efb427 https://u.mpi.org/network/members/profile?UserKey=039fbb91-52c6-40aa-b58d-432fb4081e32 https://www.geogebra.org/m/jkfkayj3 https://www.geogebra.org/m/hptnn7ar https://www.geogebra.org/m/de9cwmrf https://www.geogebra.org/m/yjc5hdep https://www.geogebra.org/m/nm8r56w5 https://www.geogebra.org/m/j7wfcpxj https://www.geogebra.org/m/bbuczchu https://www.geogebra.org/m/xwyasqje https://www.geogebra.org/m/mx2cqkwr https://www.geogebra.org/m/tkqqqthm https://www.geogebra.org/m/asdaf9mj https://www.geogebra.org/m/ywuaj7p5
{}
Aurora/community.afpglobal
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
URL URL URL URL URL URL https://u.URL https://u.URL URL https://u.URL URL URL URL URL URL URL URL URL URL URL URL URL
[]
[ "TAGS\n#region-us \n" ]
text-generation
transformers
# Blitzo DialoGPT Model
{"tags": ["conversational"]}
AvatarXD/DialoGPT-medium-Blitzo
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Blitzo DialoGPT Model
[ "# Blitzo DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Blitzo DialoGPT Model" ]