pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1–900k) | metadata (stringlengths 2–438k) | id (stringlengths 5–122) | last_modified (null) | tags (sequencelengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25) | arxiv (sequencelengths 0–201) | languages (sequencelengths 0–1.83k) | tags_str (stringlengths 17–9.34k) | text_str (stringlengths 0–389k) | text_lists (sequencelengths 0–722) | processed_texts (sequencelengths 1–723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | transformers | # distilrubert-tiny-cased-conversational
Conversational DistilRuBERT-tiny \(Russian, cased, 3‑layer, 264‑hidden, 12‑heads, 10.4M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)). It can be considered a tiny copy of [Conversational DistilRuBERT-small](https://huggingface.co/DeepPavlov/distilrubert-small-cased-conversational).
Our DistilRuBERT-tiny is highly inspired by \[3\], \[4\], and its architecture is very close to \[5\]. Namely, we use the following losses (a code sketch follows below):
* MLM loss (between token labels and student output distribution)
* MSE loss (between averaged student and teacher hidden states)
The key features are:
* unlike most distilled language models, we **didn't** use KL loss during pre-training
* reduced vocabulary size (30K in *tiny* vs. 100K in *base* and *small*)
* two separate inputs for the student: tokens obtained using the student tokenizer (for MLM) and teacher tokens greedily split by student tokens (for MSE)
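A minimal PyTorch sketch of this combined objective (illustrative, not from the original card; it assumes the teacher hidden states have already been averaged and projected to the student's hidden size, and that unmasked positions in `token_labels` are set to `-100`):
```python
import torch.nn.functional as F

def tiny_distillation_loss(student_logits, token_labels, student_hidden, teacher_hidden_avg):
    # MLM loss: token labels vs. student output distribution (-100 marks unmasked positions)
    mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        token_labels.view(-1),
        ignore_index=-100,
    )
    # MSE loss: student hidden states vs. averaged teacher hidden states
    mse = F.mse_loss(student_hidden, teacher_hidden_avg)
    return mlm + mse
```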
Here is a comparison between the teacher model (`Conversational RuBERT`) and several distilled models.
| Model name | \# params, M | \# vocab, K | Mem., MB |
|---|---|---|---|
| `rubert-base-cased-conversational` | 177.9 | 120 | 679 |
| `distilrubert-base-cased-conversational` | 135.5 | 120 | 517 |
| `distilrubert-small-cased-conversational` | 107.1 | 120 | 409 |
| `cointegrated/rubert-tiny` | 11.8 | **30** | 46 |
| **distilrubert-tiny-cased-conversational** | **10.4** | 31 | **41** |
DistilRuBERT-tiny was trained for about 100 hours on 7 NVIDIA Tesla P100-SXM2.0 16 GB GPUs.
We used `PyTorchBenchmark` from `transformers` to evaluate the model's performance and compare it with other pre-trained language models for Russian. All tests were performed on an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and an NVIDIA Tesla P100-SXM2.0 16 GB GPU.
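A benchmark run of roughly this shape reproduces the measurements below (a sketch; the exact arguments the authors used are not given in the card):
```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["DeepPavlov/distilrubert-tiny-cased-conversational-v1"],
    batch_sizes=[1, 16],
    sequence_lengths=[512],
    speed=True,
    memory=True,
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()
```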
| Model name | Batch size | Seq len | CPU time, s | GPU time, s | CPU mem, MB | GPU mem, MB |
|---|---|---|---|---|---|---|
| `rubert-base-cased-conversational` | 1 | 512 | 0.147 | 0.014 | 897 | 1531 |
| `distilrubert-base-cased-conversational` | 1 | 512 | 0.083 | 0.006 | 766 | 1423 |
| `distilrubert-small-cased-conversational` | 1 | 512 | 0.03 | **0.002** | 600 | 1243 |
| `cointegrated/rubert-tiny` | 1 | 512 | 0.041 | 0.003 | 272 | 919 |
| **distilrubert-tiny-cased-conversational** | 1 | 512 | **0.023** | 0.003 | **206** | **855** |
| `rubert-base-cased-conversational` | 16 | 512 | 2.839 | 0.182 | 1499 | 2071 |
| `distilrubert-base-cased-conversational` | 16 | 512 | 1.065 | 0.055 | 2541 | 2927 |
| `distilrubert-small-cased-conversational` | 16 | 512 | 0.373 | **0.003** | 1360 | 1943 |
| `cointegrated/rubert-tiny` | 16 | 512 | 0.628 | 0.004 | 1293 | 2221 |
| **distilrubert-tiny-cased-conversational** | 16 | 512 | **0.219** | **0.003** | **633** | **1291** |
To evaluate model quality, we fine-tuned DistilRuBERT-tiny on classification (RuSentiment, ParaPhraser), NER, and question answering datasets for Russian and obtained scores very similar to those of [Conversational DistilRuBERT-small](https://huggingface.co/DeepPavlov/distilrubert-small-cased-conversational).
# Citation
If you find the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:
```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
doi = {10.48550/ARXIV.2205.02340},
url = {https://arxiv.org/abs/2205.02340},
author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. In Proc. of the “CORPORA2017” International Conference, Saint Petersburg, 2017.
\[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
\[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation>
\[5\]: <https://habr.com/ru/post/562064/>, <https://huggingface.co/cointegrated/rubert-tiny> | {"language": ["ru"]} | DeepPavlov/distilrubert-tiny-cased-conversational-v1 | null | [
"transformers",
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2205.02340"
] | [
"ru"
] | TAGS
#transformers #pytorch #distilbert #ru #arxiv-2205.02340 #endpoints_compatible #region-us
| distilrubert-tiny-cased-conversational
======================================
Conversational DistilRuBERT-tiny (Russian, cased, 3‑layer, 264‑hidden, 12‑heads, 10.4M parameters) was trained on OpenSubtitles[1], Dirty, Pikabu, and a Social Media segment of Taiga corpus[2] (as Conversational RuBERT). It can be considered a tiny copy of Conversational DistilRuBERT-small.
Our DistilRuBERT-tiny is highly inspired by [3], [4], and its architecture is very close to [5]. Namely, we use
* MLM loss (between token labels and student output distribution)
* MSE loss (between averaged student and teacher hidden states)
The key features are:
* unlike most distilled language models, we didn't use KL loss during pre-training
* reduced vocabulary size (30K in *tiny* vs. 100K in *base* and *small*)
* two separate inputs for the student: tokens obtained using the student tokenizer (for MLM) and teacher tokens greedily split by student tokens (for MSE)
Here is a comparison between the teacher model ('Conversational RuBERT') and several distilled models.
DistilRuBERT-tiny was trained for about 100 hours on 7 NVIDIA Tesla P100-SXM2.0 16 GB GPUs.
We used 'PyTorchBenchmark' from 'transformers' to evaluate the model's performance and compare it with other pre-trained language models for Russian. All tests were performed on an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and an NVIDIA Tesla P100-SXM2.0 16 GB GPU.
| Model name | Batch size | Seq len | CPU time, s | GPU time, s | CPU mem, MB | GPU mem, MB |
|---|---|---|---|---|---|---|
| 'rubert-base-cased-conversational' | 1 | 512 | 0.147 | 0.014 | 897 | 1531 |
| 'distilrubert-base-cased-conversational' | 1 | 512 | 0.083 | 0.006 | 766 | 1423 |
| 'distilrubert-small-cased-conversational' | 1 | 512 | 0.03 | 0.002 | 600 | 1243 |
| 'cointegrated/rubert-tiny' | 1 | 512 | 0.041 | 0.003 | 272 | 919 |
| distilrubert-tiny-cased-conversational | 1 | 512 | 0.023 | 0.003 | 206 | 855 |
| 'rubert-base-cased-conversational' | 16 | 512 | 2.839 | 0.182 | 1499 | 2071 |
| 'distilrubert-base-cased-conversational' | 16 | 512 | 1.065 | 0.055 | 2541 | 2927 |
| 'distilrubert-small-cased-conversational' | 16 | 512 | 0.373 | 0.003 | 1360 | 1943 |
| 'cointegrated/rubert-tiny' | 16 | 512 | 0.628 | 0.004 | 1293 | 2221 |
| distilrubert-tiny-cased-conversational | 16 | 512 | 0.219 | 0.003 | 633 | 1291 |
To evaluate model quality, we fine-tuned DistilRuBERT-tiny on classification (RuSentiment, ParaPhraser), NER, and question answering datasets for Russian and obtained scores very similar to those of Conversational DistilRuBERT-small.
If you find the model useful for your research, we kindly ask you to cite this paper:
[1]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
[2]: Shavrina T., Shapovalova O. (2017) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. In Proc. of the “CORPORA2017” International Conference, Saint Petersburg, 2017.
[3]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
[4]: <URL
[5]: <URL <URL
| [] | [
"TAGS\n#transformers #pytorch #distilbert #ru #arxiv-2205.02340 #endpoints_compatible #region-us \n"
] |
null | transformers | WARNING: This is the `distilrubert-small-cased-conversational` model uploaded under the wrong name. It is identical to [distilrubert-small-cased-conversational](https://huggingface.co/DeepPavlov/distilrubert-small-cased-conversational). The actual `distilrubert-tiny-cased-conversational` model can be found at [distilrubert-tiny-cased-conversational-v1](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational-v1).
# distilrubert-small-cased-conversational
Conversational DistilRuBERT-small \(Russian, cased, 2‑layer, 768‑hidden, 12‑heads, 107M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)). It can be considered a small copy of [Conversational DistilRuBERT-base](https://huggingface.co/DeepPavlov/distilrubert-base-cased-conversational).
Our DistilRuBERT-small was highly inspired by \[3\], \[4\]. Namely, we used the following losses (a code sketch follows the list below):
* KL loss (between teacher and student output logits)
* MLM loss (between token labels and student output logits)
* Cosine embedding loss (between averaged six consecutive hidden states from teacher's encoder and one hidden state of the student)
* MSE loss (between averaged six consecutive attention maps from teacher's encoder and one attention map of the student)
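A minimal PyTorch sketch of these four terms (illustrative, not from the original card; the softmax temperature `T` and the pre-averaging of teacher hidden states and attention maps are assumptions):
```python
import torch
import torch.nn.functional as F

def small_distillation_loss(teacher_logits, student_logits, token_labels,
                            teacher_hidden_avg, student_hidden,
                            teacher_attn_avg, student_attn, T=2.0):
    # KL loss between teacher and student output logits (softened with temperature T)
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * T ** 2
    # MLM loss between token labels and student output logits
    mlm = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                          token_labels.view(-1), ignore_index=-100)
    # Cosine embedding loss: student hidden state vs. averaged teacher hidden states
    s_flat = student_hidden.view(-1, student_hidden.size(-1))
    t_flat = teacher_hidden_avg.view(-1, teacher_hidden_avg.size(-1))
    cos = F.cosine_embedding_loss(s_flat, t_flat,
                                  torch.ones(s_flat.size(0), device=s_flat.device))
    # MSE loss: student attention map vs. averaged teacher attention maps
    mse = F.mse_loss(student_attn, teacher_attn_avg)
    return kl + mlm + cos + mse
```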
The model was trained for about 80 hours on 8 NVIDIA Tesla P100-SXM2.0 16 GB GPUs.
To evaluate improvements in inference speed, we ran the teacher and student models on random sequences with seq_len=512, batch_size=16 (for throughput) and batch_size=1 (for latency).
All tests were performed on an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and an NVIDIA Tesla P100-SXM2.0 16 GB GPU.
| Model | Size, Mb. | CPU latency, sec.| GPU latency, sec. | CPU throughput, samples/sec. | GPU throughput, samples/sec. |
|-------------------------------------------------|------------|------------------|-------------------|------------------------------|------------------------------|
| Teacher (RuBERT-base-cased-conversational) | 679 | 0.655 | 0.031 | 0.3754 | 36.4902 |
| Student (DistilRuBERT-small-cased-conversational)| 409 | 0.1656 | 0.015 | 0.9692 | 71.3553 |
To evaluate model quality, we fine-tuned DistilRuBERT-small on classification, NER and question answering tasks. Scores and archives with fine-tuned models can be found in [DeepPavlov docs](http://docs.deeppavlov.ai/en/master/features/overview.html#models).
# Citation
If you find the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:
```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
doi = {10.48550/ARXIV.2205.02340},
url = {https://arxiv.org/abs/2205.02340},
author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. In Proc. of the “CORPORA2017” International Conference, Saint Petersburg, 2017.
\[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
\[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation> | {"language": ["ru"]} | DeepPavlov/distilrubert-tiny-cased-conversational | null | [
"transformers",
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2205.02340"
] | [
"ru"
] | TAGS
#transformers #pytorch #distilbert #ru #arxiv-2205.02340 #endpoints_compatible #region-us
| WARNING: This is the 'distilrubert-small-cased-conversational' model uploaded under the wrong name. It is identical to distilrubert-small-cased-conversational. The actual 'distilrubert-tiny-cased-conversational' model can be found at distilrubert-tiny-cased-conversational-v1.
distilrubert-small-cased-conversational
=======================================
Conversational DistilRuBERT-small (Russian, cased, 2‑layer, 768‑hidden, 12‑heads, 107M parameters) was trained on OpenSubtitles[1], Dirty, Pikabu, and a Social Media segment of Taiga corpus[2] (as Conversational RuBERT). It can be considered a small copy of Conversational DistilRuBERT-base.
Our DistilRuBERT-small was highly inspired by [3], [4]. Namely, we used
* KL loss (between teacher and student output logits)
* MLM loss (between token labels and student output logits)
* Cosine embedding loss (between averaged six consecutive hidden states from teacher's encoder and one hidden state of the student)
* MSE loss (between averaged six consecutive attention maps from teacher's encoder and one attention map of the student)
The model was trained for about 80 hours on 8 NVIDIA Tesla P100-SXM2.0 16 GB GPUs.
To evaluate improvements in inference speed, we ran the teacher and student models on random sequences with seq\_len=512, batch\_size=16 (for throughput) and batch\_size=1 (for latency).
All tests were performed on an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and an NVIDIA Tesla P100-SXM2.0 16 GB GPU.
To evaluate model quality, we fine-tuned DistilRuBERT-small on classification, NER and question answering tasks. Scores and archives with fine-tuned models can be found in DeepPavlov docs.
If you find the model useful for your research, we kindly ask you to cite this paper:
[1]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
[2]: Shavrina T., Shapovalova O. (2017) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. In Proc. of the “CORPORA2017” International Conference, Saint Petersburg, 2017.
[3]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
[4]: <URL
| [] | [
"TAGS\n#transformers #pytorch #distilbert #ru #arxiv-2205.02340 #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # RoBERTa Large model fine-tuned on Winogrande
This model was fine-tuned on the Winogrande dataset (XL size) in a sequence classification task format, meaning that the original pairs of sentences
with the corresponding options filled in were separated, shuffled, and classified independently of each other.
## Model description
## Intended use & limitations
### How to use
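A minimal usage sketch (not from the original card): the checkpoint is loaded as a sequence classifier over sentence pairs; the mapping of the two output classes to `False`/`True` is assumed from the training-data description below.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/roberta-large-winogrande")
model = AutoModelForSequenceClassification.from_pretrained("DeepPavlov/roberta-large-winogrande")

# First segment: the sentence up to the "_" placeholder; second: option + remainder.
inputs = tokenizer(
    "The plant took up too much room in the urn, because the ",
    "urn was small.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # assumed label order: [False, True]
```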
## Training data
[WinoGrande-XL](https://huggingface.co/datasets/winogrande) was reformatted in the following way:
1. Each sentence was split on the "`_`" placeholder symbol.
2. Each option was concatenated with the second part of the split, thus transforming each example into two text segment pairs.
3. Text segment pairs corresponding to correct and incorrect options were marked with `True` and `False` labels accordingly.
4. Text segment pairs were shuffled thereafter.
For example,
```json
{
"answer": "2",
"option1": "plant",
"option2": "urn",
"sentence": "The plant took up too much room in the urn, because the _ was small."
}
```
becomes
```json
{
"sentence1": "The plant took up too much room in the urn, because the ",
"sentence2": "plant was small.",
"label": false
}
```
and
```json
{
"sentence1": "The plant took up too much room in the urn, because the ",
"sentence2": "urn was small.",
"label": true
}
```
These sentence pairs are then treated as independent examples.
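The transformation above can be sketched as follows (illustrative code, not from the original card):
```python
def reformat(example):
    # Split the sentence on the "_" placeholder.
    first, rest = example["sentence"].split("_")
    return [
        {
            "sentence1": first,
            "sentence2": example[f"option{i}"] + rest,
            "label": example["answer"] == str(i),
        }
        for i in (1, 2)
    ]
```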
### BibTeX entry and citation info
```bibtex
@article{sakaguchi2019winogrande,
title={WinoGrande: An Adversarial Winograd Schema Challenge at Scale},
author={Sakaguchi, Keisuke and Bras, Ronan Le and Bhagavatula, Chandra and Choi, Yejin},
journal={arXiv preprint arXiv:1907.10641},
year={2019}
}
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | {"language": ["en"], "datasets": ["winogrande"], "widget": [{"text": "The roof of Rachel's home is old and falling apart, while Betty's is new. The home value of </s> Rachel is lower."}, {"text": "The wooden doors at my friends work are worse than the wooden desks at my work, because the </s> desks material is cheaper."}, {"text": "Postal Service were to reduce delivery frequency. </s> The postal service could deliver less frequently."}, {"text": "I put the cake away in the refrigerator. It has a lot of butter in it. </s> The cake has a lot of butter in it."}]} | DeepPavlov/roberta-large-winogrande | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:winogrande",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1907.11692"
] | [
"en"
] | TAGS
#transformers #pytorch #roberta #text-classification #en #dataset-winogrande #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us
| # RoBERTa Large model fine-tuned on Winogrande
This model was fine-tuned on the Winogrande dataset (XL size) in a sequence classification task format, meaning that the original pairs of sentences
with the corresponding options filled in were separated, shuffled, and classified independently of each other.
## Model description
## Intended use & limitations
### How to use
## Training data
WinoGrande-XL was reformatted in the following way:
1. Each sentence was split on the "'_'" placeholder symbol.
2. Each option was concatenated with the second part of the split, thus transforming each example into two text segment pairs.
3. Text segment pairs corresponding to correct and incorrect options were marked with 'True' and 'False' labels accordingly.
4. Text segment pairs were shuffled thereafter.
For example,
becomes
and
These sentence pairs are then treated as independent examples.
### BibTeX entry and citation info
| [
"# RoBERTa Large model fine-tuned on Winogrande\n\nThis model was fine-tuned on Winogrande dataset (XL size) in sequence classification task format, meaning that original pairs of sentences\nwith corresponding options filled in were separated, shuffled and classified independently of each other.",
"## Model description",
"## Intended use & limitations",
"### How to use",
"## Training data\n\nWinoGrande-XL reformatted the following way:\n1. Each sentence was split on \"'_'\" placeholder symbol.\n2. Each option was concatenated with the second part of the split, thus transforming each example into two text segment pairs.\n3. Text segment pairs corresponding to correct and incorrect options were marked with 'True' and 'False' labels accordingly.\n4. Text segment pairs were shuffled thereafter.\n\nFor example,\n\n\n\nbecomes\n\n\n\nand\n\n\nThese sentence pairs are then treated as independent examples.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #roberta #text-classification #en #dataset-winogrande #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us \n",
"# RoBERTa Large model fine-tuned on Winogrande\n\nThis model was fine-tuned on Winogrande dataset (XL size) in sequence classification task format, meaning that original pairs of sentences\nwith corresponding options filled in were separated, shuffled and classified independently of each other.",
"## Model description",
"## Intended use & limitations",
"### How to use",
"## Training data\n\nWinoGrande-XL reformatted the following way:\n1. Each sentence was split on \"'_'\" placeholder symbol.\n2. Each option was concatenated with the second part of the split, thus transforming each example into two text segment pairs.\n3. Text segment pairs corresponding to correct and incorrect options were marked with 'True' and 'False' labels accordingly.\n4. Text segment pairs were shuffled thereafter.\n\nFor example,\n\n\n\nbecomes\n\n\n\nand\n\n\nThese sentence pairs are then treated as independent examples.",
"### BibTeX entry and citation info"
] |
feature-extraction | transformers |
# rubert-base-cased-conversational
Conversational RuBERT \(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\]. We assembled a new vocabulary for the Conversational RuBERT model on this data and initialized the model with [RuBERT](../rubert-base-cased).
08.11.2021: uploaded the model with MLM and NSP heads
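A minimal feature-extraction sketch (illustrative, not from the original card):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased-conversational")
model = AutoModel.from_pretrained("DeepPavlov/rubert-base-cased-conversational")

inputs = tokenizer("привет, как дела?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```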
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. In Proc. of the “CORPORA2017” International Conference, Saint Petersburg, 2017.
| {"language": ["ru"]} | DeepPavlov/rubert-base-cased-conversational | null | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"ru",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #jax #bert #feature-extraction #ru #endpoints_compatible #has_space #region-us
|
# rubert-base-cased-conversational
Conversational RuBERT \(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on OpenSubtitles\[1\], Dirty, Pikabu, and a Social Media segment of Taiga corpus\[2\]. We assembled a new vocabulary for the Conversational RuBERT model on this data and initialized the model with RuBERT.
08.11.2021: uploaded the model with MLM and NSP heads
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. In Proc. of the “CORPORA2017” International Conference, Saint Petersburg, 2017.
| [
"# rubert-base-cased-conversational\n\nConversational RuBERT \\(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\\) was trained on OpenSubtitles\\[1\\], Dirty, Pikabu, and a Social Media segment of Taiga corpus\\[2\\]. We assembled a new vocabulary for Conversational RuBERT model on this data and initialized the model with RuBERT.\n\n08.11.2021: upload model with MLM and NSP heads\n\n\\[1\\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \\(LREC 2016\\)\n\n\\[2\\]: Shavrina T., Shapovalova O. \\(2017\\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017."
] | [
"TAGS\n#transformers #pytorch #jax #bert #feature-extraction #ru #endpoints_compatible #has_space #region-us \n",
"# rubert-base-cased-conversational\n\nConversational RuBERT \\(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\\) was trained on OpenSubtitles\\[1\\], Dirty, Pikabu, and a Social Media segment of Taiga corpus\\[2\\]. We assembled a new vocabulary for Conversational RuBERT model on this data and initialized the model with RuBERT.\n\n08.11.2021: upload model with MLM and NSP heads\n\n\\[1\\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \\(LREC 2016\\)\n\n\\[2\\]: Shavrina T., Shapovalova O. \\(2017\\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017."
] |
feature-extraction | transformers |
# rubert-base-cased-sentence
Sentence RuBERT \(Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters\) is a representation‑based sentence encoder for Russian. It is initialized with RuBERT and fine‑tuned on SNLI\[1\] Google-translated to Russian and on the Russian part of the XNLI dev set\[2\]. Sentence representations are mean-pooled token embeddings, in the same manner as in Sentence‑BERT\[3\].
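A minimal sketch of computing mean-pooled sentence embeddings with this checkpoint (illustrative, not from the original card):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased-sentence")
model = AutoModel.from_pretrained("DeepPavlov/rubert-base-cased-sentence")

sentences = ["Привет, мир!", "Как дела?"]
enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state            # (batch, seq_len, 768)
mask = enc["attention_mask"].unsqueeze(-1).float()     # exclude padding from the mean
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
```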
\[1\]: S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. \(2015\) A large annotated corpus for learning natural language inference. arXiv preprint [arXiv:1508.05326](https://arxiv.org/abs/1508.05326)
\[2\]: Williams A., Bowman S. \(2018\) XNLI: Evaluating Cross-lingual Sentence Representations. arXiv preprint [arXiv:1809.05053](https://arxiv.org/abs/1809.05053)
\[3\]: N. Reimers, I. Gurevych \(2019\) Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv preprint [arXiv:1908.10084](https://arxiv.org/abs/1908.10084)
| {"language": ["ru"]} | DeepPavlov/rubert-base-cased-sentence | null | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"ru",
"arxiv:1508.05326",
"arxiv:1809.05053",
"arxiv:1908.10084",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1508.05326",
"1809.05053",
"1908.10084"
] | [
"ru"
] | TAGS
#transformers #pytorch #jax #bert #feature-extraction #ru #arxiv-1508.05326 #arxiv-1809.05053 #arxiv-1908.10084 #endpoints_compatible #has_space #region-us
|
# rubert-base-cased-sentence
Sentence RuBERT \(Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters\) is a representation‑based sentence encoder for Russian. It is initialized with RuBERT and fine‑tuned on SNLI\[1\] Google-translated to Russian and on the Russian part of the XNLI dev set\[2\]. Sentence representations are mean-pooled token embeddings, in the same manner as in Sentence‑BERT\[3\].
\[1\]: S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. \(2015\) A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326
\[2\]: Williams A., Bowman S. \(2018\) XNLI: Evaluating Cross-lingual Sentence Representations. arXiv preprint arXiv:1809.05053
\[3\]: N. Reimers, I. Gurevych \(2019\) Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv preprint arXiv:1908.10084
| [
"# rubert-base-cased-sentence\n\nSentence RuBERT \\(Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters\\) is a representation‑based sentence encoder for Russian. It is initialized with RuBERT and fine‑tuned on SNLI\\[1\\] google-translated to russian and on russian part of XNLI dev set\\[2\\]. Sentence representations are mean pooled token embeddings in the same manner as in Sentence‑BERT\\[3\\].\n\n\n\\[1\\]: S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. \\(2015\\) A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326\n\n\\[2\\]: Williams A., Bowman S. \\(2018\\) XNLI: Evaluating Cross-lingual Sentence Representations. arXiv preprint arXiv:1809.05053\n\n\\[3\\]: N. Reimers, I. Gurevych \\(2019\\) Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv preprint arXiv:1908.10084"
] | [
"TAGS\n#transformers #pytorch #jax #bert #feature-extraction #ru #arxiv-1508.05326 #arxiv-1809.05053 #arxiv-1908.10084 #endpoints_compatible #has_space #region-us \n",
"# rubert-base-cased-sentence\n\nSentence RuBERT \\(Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters\\) is a representation‑based sentence encoder for Russian. It is initialized with RuBERT and fine‑tuned on SNLI\\[1\\] google-translated to russian and on russian part of XNLI dev set\\[2\\]. Sentence representations are mean pooled token embeddings in the same manner as in Sentence‑BERT\\[3\\].\n\n\n\\[1\\]: S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. \\(2015\\) A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326\n\n\\[2\\]: Williams A., Bowman S. \\(2018\\) XNLI: Evaluating Cross-lingual Sentence Representations. arXiv preprint arXiv:1809.05053\n\n\\[3\\]: N. Reimers, I. Gurevych \\(2019\\) Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv preprint arXiv:1908.10084"
] |
feature-extraction | transformers |
# rubert-base-cased
RuBERT \(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on the Russian part of Wikipedia and news data. We used this training data to build a vocabulary of Russian subtokens and took a multilingual version of BERT‑base as an initialization for RuBERT\[1\].
08.11.2021: uploaded the model with MLM and NSP heads
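Since the checkpoint ships with an MLM head, a quick sanity check with the fill-mask pipeline looks like this (illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="DeepPavlov/rubert-base-cased")
print(fill_mask("Москва - столица [MASK]."))  # prints top candidates for the masked token
```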
\[1\]: Kuratov, Y., Arkhipov, M. \(2019\). Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language. arXiv preprint [arXiv:1905.07213](https://arxiv.org/abs/1905.07213).
| {"language": ["ru"]} | DeepPavlov/rubert-base-cased | null | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"ru",
"arxiv:1905.07213",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1905.07213"
] | [
"ru"
] | TAGS
#transformers #pytorch #jax #bert #feature-extraction #ru #arxiv-1905.07213 #endpoints_compatible #has_space #region-us
|
# rubert-base-cased
RuBERT \(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on the Russian part of Wikipedia and news data. We used this training data to build a vocabulary of Russian subtokens and took a multilingual version of BERT‑base as an initialization for RuBERT\[1\].
08.11.2021: uploaded the model with MLM and NSP heads
\[1\]: Kuratov, Y., Arkhipov, M. \(2019\). Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language. arXiv preprint arXiv:1905.07213.
| [
"# rubert-base-cased\n\nRuBERT \\(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\\) was trained on the Russian part of Wikipedia and news data. We used this training data to build a vocabulary of Russian subtokens and took a multilingual version of BERT‑base as an initialization for RuBERT\\[1\\].\n\n08.11.2021: upload model with MLM and NSP heads\n\n\\[1\\]: Kuratov, Y., Arkhipov, M. \\(2019\\). Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language. arXiv preprint arXiv:1905.07213."
] | [
"TAGS\n#transformers #pytorch #jax #bert #feature-extraction #ru #arxiv-1905.07213 #endpoints_compatible #has_space #region-us \n",
"# rubert-base-cased\n\nRuBERT \\(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\\) was trained on the Russian part of Wikipedia and news data. We used this training data to build a vocabulary of Russian subtokens and took a multilingual version of BERT‑base as an initialization for RuBERT\\[1\\].\n\n08.11.2021: upload model with MLM and NSP heads\n\n\\[1\\]: Kuratov, Y., Arkhipov, M. \\(2019\\). Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language. arXiv preprint arXiv:1905.07213."
] |
text-classification | transformers |
# XLM-RoBERTa-Large-En-Ru-MNLI
xlm-roberta-large-en-ru finetuned on mnli. | {"language": ["en", "ru"], "tags": ["xlm-roberta", "xlm-roberta-large", "xlm-roberta-large-en-ru", "xlm-roberta-large-en-ru-mnli"], "datasets": ["glue", "mnli"], "model_index": [{"name": "mnli", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE MNLI", "type": "glue", "args": "mnli"}}]}], "widget": [{"text": "\u041b\u044e\u0431\u043b\u044e \u0442\u0435\u0431\u044f. \u041d\u0435\u043d\u0430\u0432\u0438\u0436\u0443 \u0442\u0435\u0431\u044f"}, {"text": "I love you. I hate you"}]} | DeepPavlov/xlm-roberta-large-en-ru-mnli | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"xlm-roberta-large",
"xlm-roberta-large-en-ru",
"xlm-roberta-large-en-ru-mnli",
"en",
"ru",
"dataset:glue",
"dataset:mnli",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ru"
] | TAGS
#transformers #pytorch #xlm-roberta #text-classification #xlm-roberta-large #xlm-roberta-large-en-ru #xlm-roberta-large-en-ru-mnli #en #ru #dataset-glue #dataset-mnli #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# XLM-RoBERTa-Large-En-Ru-MNLI
xlm-roberta-large-en-ru finetuned on mnli. | [
"# XLM-RoBERTa-Large-En-Ru-MNLI\n\nxlm-roberta-large-en-ru finetuned on mnli."
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #xlm-roberta-large #xlm-roberta-large-en-ru #xlm-roberta-large-en-ru-mnli #en #ru #dataset-glue #dataset-mnli #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# XLM-RoBERTa-Large-En-Ru-MNLI\n\nxlm-roberta-large-en-ru finetuned on mnli."
] |
feature-extraction | transformers |
# XLM-RoBERTa-Large-En-Ru
## Model description
This model is a version of XLM-RoBERTa with the embeddings and vocabulary reduced to the most frequent tokens in English and Russian.
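A minimal loading sketch (illustrative):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/xlm-roberta-large-en-ru")
model = AutoModel.from_pretrained("DeepPavlov/xlm-roberta-large-en-ru")
print(model.config.vocab_size)  # reduced vs. the ~250k-token vocabulary of the original XLM-R
```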
| {"language": ["en", "ru"]} | DeepPavlov/xlm-roberta-large-en-ru | null | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"en",
"ru",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ru"
] | TAGS
#transformers #pytorch #xlm-roberta #feature-extraction #en #ru #endpoints_compatible #region-us
|
# XLM-RoBERTa-Large-En-Ru
## Model description
This model is a version of XLM-RoBERTa with the embeddings and vocabulary reduced to the most frequent tokens in English and Russian.
| [
"# XLM-RoBERTa-Large-En-Ru",
"## Model description\n\nThis model is a version XLM-RoBERTa with embeddings and vocabulary reduced to most frequent tokens in English and Russian."
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #feature-extraction #en #ru #endpoints_compatible #region-us \n",
"# XLM-RoBERTa-Large-En-Ru",
"## Model description\n\nThis model is a version XLM-RoBERTa with embeddings and vocabulary reduced to most frequent tokens in English and Russian."
] |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Lithuanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Lithuanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Lithuanian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 56.55 %
## Training
The Common Voice `train` and `validation` splits were used for training.
| {"language": "lt", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Lithuanina by Deividas Mataciunas", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice lt", "type": "common_voice", "args": "lt"}, "metrics": [{"type": "wer", "value": 56.55, "name": "Test WER"}]}]}]} | DeividasM/wav2vec2-large-xlsr-53-lithuanian | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"lt",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"lt"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #lt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Lithuanian
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Lithuanian using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Lithuanian test data of Common Voice.
Test Result: 56.55 %
## Training
The Common Voice 'train' and 'validation' splits were used for training.
| [
"# Wav2Vec2-Large-XLSR-53-Lithuanian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Lithuanian using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Lithuanian test data of Common Voice.\n\n\n\nTest Result: 56.55 %",
"## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training."
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #lt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Lithuanian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Lithuanian using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Lithuanian test data of Common Voice.\n\n\n\nTest Result: 56.55 %",
"## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training."
] |
null | transformers | This checkpoint needs to be loaded with OpenDelta:
```
from transformers import AutoModelForSeq2SeqLM
t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
from opendelta import AutoDeltaModel
delta = AutoDeltaModel.from_finetuned("DeltaHub/lora_t5-base_mrpc", backbone_model=t5)
delta.log()
```
| {} | DeltaHub/lora_t5-base_mrpc | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #endpoints_compatible #region-us
| This checkpoint needs to be loaded with OpenDelta.
| [] | [
"TAGS\n#transformers #pytorch #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
# 4-sentiment detection model with FlauBERT (mixed, negative, objective, positive)
### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
loaded_tokenizer = AutoTokenizer.from_pretrained('flaubert/flaubert_large_cased')
loaded_model = AutoModelForSequenceClassification.from_pretrained("DemangeJeremy/4-sentiments-with-flaubert")
nlp = pipeline('sentiment-analysis', model=loaded_model, tokenizer=loaded_tokenizer)
print(nlp("Je suis plutôt confiant."))
```
```
[{'label': 'OBJECTIVE', 'score': 0.3320835530757904}]
```
## Model evaluation results
| Epoch | Validation Loss | Samples Per Second |
|:------:|:--------------:|:------------------:|
| 1 | 2.219246 | 49.476000 |
| 2 | 1.883753 | 47.259000 |
| 3 | 1.747969 | 44.957000 |
| 4 | 1.695606 | 43.872000 |
| 5 | 1.641470 | 45.726000 |
## Citation
For any use of this model, please use the following citation:
> Jérémy Demange, Four sentiments with FlauBERT, (2021), Hugging Face repository, <https://huggingface.co/DemangeJeremy/4-sentiments-with-flaubert>
| {"language": "fr", "tags": ["sentiments", "text-classification", "flaubert", "french", "flaubert-large"]} | DemangeJeremy/4-sentiments-with-flaubert | null | [
"transformers",
"pytorch",
"flaubert",
"text-classification",
"sentiments",
"french",
"flaubert-large",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fr"
] | TAGS
#transformers #pytorch #flaubert #text-classification #sentiments #french #flaubert-large #fr #autotrain_compatible #endpoints_compatible #region-us
| 4-sentiment detection model with FlauBERT (mixed, negative, objective, positive)
=========================================================================================
### How to use
Model evaluation results
------------------------
For any use of this model, please use the following citation:
>
> Jérémy Demange, Four sentiments with FlauBERT, (2021), Hugging Face repository, <URL
>
>
>
| [
"### Comment l'utiliser ?\n\n\nRésultats de l'évaluation du modèle\n-----------------------------------\n\n\n\nPour toute utilisation de ce modèle, merci d'utiliser cette citation :\n\n\n\n> \n> Jérémy Demange, Four sentiments with FlauBERT, (2021), Hugging Face repository, <URL\n> \n> \n>"
] | [
"TAGS\n#transformers #pytorch #flaubert #text-classification #sentiments #french #flaubert-large #fr #autotrain_compatible #endpoints_compatible #region-us \n",
"### Comment l'utiliser ?\n\n\nRésultats de l'évaluation du modèle\n-----------------------------------\n\n\n\nPour toute utilisation de ce modèle, merci d'utiliser cette citation :\n\n\n\n> \n> Jérémy Demange, Four sentiments with FlauBERT, (2021), Hugging Face repository, <URL\n> \n> \n>"
] |
text-generation | transformers |
# Asuna Yuuki DialoGPT Model | {"tags": ["conversational"]} | Denny29/DialoGPT-medium-asunayuuki | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Asuna Yuuki DialoGPT Model | [
"# Asuna Yuuki DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Asuna Yuuki DialoGPT Model"
] |
null | null | title: ArcaneGAN
emoji: 🚀
colorFrom: blue
colorTo: blue
sdk: gradio
app_file: app.py
pinned: false | {} | Despin89/test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| title: ArcaneGAN
emoji:
colorFrom: blue
colorTo: blue
sdk: gradio
app_file: URL
pinned: false | [] | [
"TAGS\n#region-us \n"
] |
token-classification | transformers |
# Token classification for FOODs.
Detects foods in sentences.
Currently, only Spanish is supported. Multi-word foods are detected as one entity. A usage sketch is shown below.
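A minimal usage sketch with the widget sentence from this card (the aggregation strategy is an assumption for merging multi-word entities):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Dev-DGT/food-dbert-multiling",
    aggregation_strategy="simple",  # assumption: groups multi-word foods into single entities
)
print(ner("El paciente se alimenta de pan, sopa de calabaza y coca-cola"))
```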
## To-do
- English support.
- Negation support.
- Quantity tags.
- Psychosocial tags. | {"widget": [{"text": "El paciente se alimenta de pan, sopa de calabaza y coca-cola"}]} | Dev-DGT/food-dbert-multiling | null | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
# Token classification for FOODs.
Detects foods in sentences.
Currently, only Spanish is supported. Multi-word foods are detected as one entity.
## To-do
- English support.
- Negation support.
- Quantity tags.
- Psychosocial tags. | [
"# Token classification for FOODs.\n\nDetects foods in sentences. \n\nCurrently, only supports spanish. Multiple words foods are detected as one entity.",
"## To-do\n\n- English support.\n- Negation support.\n- Quantity tags.\n- Psychosocial tags."
] | [
"TAGS\n#transformers #pytorch #distilbert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# Token classification for FOODs.\n\nDetects foods in sentences. \n\nCurrently, only supports spanish. Multiple words foods are detected as one entity.",
"## To-do\n\n- English support.\n- Negation support.\n- Quantity tags.\n- Psychosocial tags."
] |
text-generation | transformers |
# Miku DialogGPT Model | {"tags": ["conversational"]} | Devid/DialoGPT-small-Miku | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Miku DialogGPT Model | [
"# Miku DialogGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Miku DialogGPT Model"
] |
null | null | This is the default Prism model, available at https://github.com/thompsonb/prism. See the [README.md](https://github.com/thompsonb/prism/blob/master/README.md) file for more information.
**LICENCE NOTICE**
```
MIT License
Copyright (c) Brian Thompson
Portions of this software are copied from fairseq (https://github.com/pytorch/fairseq),
which is released under the MIT License and Copyright (c) Facebook, Inc. and its affiliates.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
``` | {"license": "mit"} | Devrim/prism-default | null | [
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#license-mit #region-us
| This is the default Prism model, available at URL. See the URL file for more information.
LICENCE NOTICE
| [] | [
"TAGS\n#license-mit #region-us \n"
] |
null | null | Hello | {} | DevsIA/imagenes | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| Hello | [] | [
"TAGS\n#region-us \n"
] |
automatic-speech-recognition | null |
# Wav2Vec2-Large-XLSR-Welsh
This model has moved to https://huggingface.co/techiaith/wav2vec2-xlsr-ft-cy
| {"language": "cy", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "wav2vec2-xlsr-welsh (by Dewi Bryn Jones, fine tuning week - March 2021)", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice cy", "type": "common_voice", "args": "cy"}, "metrics": [{"type": "wer", "value": "25.59%", "name": "Test WER"}]}]}]} | DewiBrynJones/wav2vec2-large-xlsr-welsh | null | [
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"cy",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"cy"
] | TAGS
#audio #automatic-speech-recognition #speech #xlsr-fine-tuning-week #cy #dataset-common_voice #license-apache-2.0 #model-index #region-us
|
# Wav2Vec2-Large-XLSR-Welsh
This model has moved to URL
| [
"# Wav2Vec2-Large-XLSR-Welsh\n\nThis model has moved to URL"
] | [
"TAGS\n#audio #automatic-speech-recognition #speech #xlsr-fine-tuning-week #cy #dataset-common_voice #license-apache-2.0 #model-index #region-us \n",
"# Wav2Vec2-Large-XLSR-Welsh\n\nThis model has moved to URL"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2915
- Bleu: 27.9273
- Gen Len: 34.0935
## Model description
More information needed
## Intended uses & limitations
More information needed
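A minimal inference sketch (illustrative, not from the original card):
```python
from transformers import pipeline

translator = pipeline(
    "translation_en_to_ro",
    model="DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro",
)
print(translator("The model was fine-tuned on the WMT16 English-Romanian data."))
```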
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7448 | 1.0 | 38145 | 1.2915 | 27.9273 | 34.0935 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "opus-mt-en-ro-finetuned-en-to-ro", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 27.9273, "name": "Bleu"}]}]}]} | DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro | null | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| opus-mt-en-ro-finetuned-en-to-ro
================================
This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ro on the wmt16 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2915
* Bleu: 27.9273
* Gen Len: 34.0935
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Dilmk2/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
text-generation | transformers |
# V DialoGPT Model | {"tags": ["conversational"]} | Dimedrolza/DialoGPT-small-cyberpunk | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# V DialoGPT Model | [
"# V DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# V DialoGPT Model"
] |
text-generation | transformers |
# HomerBot: A conversational chatbot imitating Homer Simpson
This model is a fine-tuned [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium) (medium version) on Simpsons [scripts](https://www.kaggle.com/datasets/pierremegret/dialogue-lines-of-the-simpsons).
More specifically, we fine-tune DialoGPT-medium for 3 epochs on 10K **(character utterance, Homer's response)** pairs
For more details, check out our git [repo](https://github.com/jesseDingley/HomerBot) containing all the code.
### How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("DingleyMaillotUrgell/homer-bot")
model = AutoModelForCausalLM.from_pretrained("DingleyMaillotUrgell/homer-bot")

# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8,
    )

    # print the bot's last output tokens
    print("Homer: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
| {"language": ["en"], "tags": ["conversational"]} | jesseD/homer-bot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# HomerBot: A conversational chatbot imitating Homer Simpson
This model is a fine-tuned DialoGPT (medium version) on Simpsons scripts.
More specifically, we fine-tune DialoGPT-medium for 3 epochs on 10K (character utterance, Homer's response) pairs
For more details, check out our git repo containing all the code.
### How to use
| [
"# HomerBot: A conversational chatbot imitating Homer Simpson\n\nThis model is a fine-tuned DialoGPT (medium version) on Simpsons scripts.\n\nMore specifically, we fine-tune DialoGPT-medium for 3 epochs on 10K (character utterance, Homer's response) pairs\n\nFor more details, check out our git repo containing all the code.",
"### How to use"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# HomerBot: A conversational chatbot imitating Homer Simpson\n\nThis model is a fine-tuned DialoGPT (medium version) on Simpsons scripts.\n\nMore specifically, we fine-tune DialoGPT-medium for 3 epochs on 10K (character utterance, Homer's response) pairs\n\nFor more details, check out our git repo containing all the code.",
"### How to use"
] |
text-generation | transformers |
# Harry Potter DialoGPT Medium Model | {"tags": ["conversational"]} | Doiman/DialoGPT-medium-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Medium Model | [
"# Harry Potter DialoGPT Medium Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Medium Model"
] |
text-generation | transformers |
# Rick DialoGPT Model | {"tags": ["conversational"]} | DongHai/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick DialoGPT Model | [
"# Rick DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick DialoGPT Model"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7335
- Matthews Correlation: 0.5356
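As a usage reference, a minimal sketch of running the checkpoint on CoLA-style acceptability judgments (assuming the hub id DongHyoungLee/distilbert-base-uncased-finetuned-cola from this card's metadata; the LABEL_0/LABEL_1 names are the generator's default id2label mapping, so check the config before relying on them):

```python
from transformers import pipeline

# hub id taken from this card's metadata
classifier = pipeline(
    "text-classification",
    model="DongHyoungLee/distilbert-base-uncased-finetuned-cola",
)

# CoLA is binary acceptability; with the default mapping LABEL_1 ~ acceptable, LABEL_0 ~ unacceptable
print(classifier("The book was written by John."))
print(classifier("Book the John by written was."))
```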
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
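The list above corresponds roughly to the following `TrainingArguments`; this is a sketch of the configuration, not the exact training script (the Adam betas and epsilon shown are the Trainer's optimizer defaults):

```python
from transformers import TrainingArguments

# mirrors the hyperparameter list above; betas=(0.9, 0.999) and
# epsilon=1e-08 are already the default Adam settings used by the Trainer
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # illustrative output path
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```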
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5309 | 1.0 | 535 | 0.5070 | 0.4239 |
| 0.3568 | 2.0 | 1070 | 0.5132 | 0.4913 |
| 0.24 | 3.0 | 1605 | 0.6081 | 0.4990 |
| 0.1781 | 4.0 | 2140 | 0.7335 | 0.5356 |
| 0.1243 | 5.0 | 2675 | 0.8705 | 0.5242 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.535587402888147, "name": "Matthews Correlation"}]}]}]} | DongHyoungLee/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7335
* Matthews Correlation: 0.5356
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
question-answering | transformers | The Reader model is for Korean question answering.
The backbone model is deepset/xlm-roberta-large-squad2, fine-tuned on the KorQuAD-v1 dataset.
On the KorQuAD evaluation set it reaches approximately 87% EM and 92% F1.
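A minimal usage sketch with the transformers question-answering pipeline, assuming the hub id Dongjae/mrc2reader from this card's metadata (the question/context pair is a made-up example):

```python
from transformers import pipeline

# hub id taken from this card's metadata
reader = pipeline("question-answering", model="Dongjae/mrc2reader")

result = reader(
    question="대한민국의 수도는 어디인가?",  # "What is the capital of South Korea?"
    context="대한민국의 수도는 서울이다.",   # "The capital of South Korea is Seoul."
)
print(result["answer"], result["score"])
```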
Thank you | {} | Dongjae/mrc2reader | null | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #xlm-roberta #question-answering #endpoints_compatible #region-us
| The Reader model is for Korean question answering.
The backbone model is deepset/xlm-roberta-large-squad2, fine-tuned on the KorQuAD-v1 dataset.
On the KorQuAD evaluation set it reaches approximately 87% EM and 92% F1.
Thank you | [] | [
"TAGS\n#transformers #pytorch #xlm-roberta #question-answering #endpoints_compatible #region-us \n"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wayne_NLP_mT5
This model was trained on English datasets only.
If you want a model trained on Korean + English data, see wayne_mulang_mT5.
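A minimal summarization sketch, assuming the hub id Waynehillsdev/Wayne_NLP_mT5 under which this card is published (whether the checkpoint expects a task prefix such as "summarize: " is not documented here, so none is added):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Waynehillsdev/Wayne_NLP_mT5"  # hub id this card is published under
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Replace this with a CNN/DailyMail-style news article to summarize."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```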
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0a0+3fd9dcf
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"tags": ["generated_from_trainer"], "datasets": ["cnn_dailymail"], "model-index": [{"name": "Wayne_NLP_mT5", "results": []}]} | Waynehillsdev/Wayne_NLP_mT5 | null | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #dataset-cnn_dailymail #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Wayne_NLP_mT5
This model was trained on English datasets only.
If you want a model trained on Korean + English data, see wayne_mulang_mT5.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0a0+3fd9dcf
- Datasets 1.18.3
- Tokenizers 0.11.0
| [
"# Wayne_NLP_mT5\n\nThis model was trained only english datasets.\nif you want trained korean + english model\ngo to wayne_mulang_mT5.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 10",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0a0+3fd9dcf\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #dataset-cnn_dailymail #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Wayne_NLP_mT5\n\nThis model was trained only english datasets.\nif you want trained korean + english model\ngo to wayne_mulang_mT5.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 10",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0a0+3fd9dcf\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Waynehills-STT-doogie-server
This model is a fine-tuned version of [Doogie/Waynehills-STT-doogie-server](https://huggingface.co/Doogie/Waynehills-STT-doogie-server) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"]} | Waynehillsdev/Waynehills-STT-doogie-server | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# Waynehills-STT-doogie-server
This model is a fine-tuned version of Doogie/Waynehills-STT-doogie-server on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| [
"# Waynehills-STT-doogie-server\n\nThis model is a fine-tuned version of Doogie/Waynehills-STT-doogie-server on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 60",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu113\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Waynehills-STT-doogie-server\n\nThis model is a fine-tuned version of Doogie/Waynehills-STT-doogie-server on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 60",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu113\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Waynehills_summary_tensorflow
This model is a fine-tuned version of [KETI-AIR/ke-t5-base-ko](https://huggingface.co/KETI-AIR/ke-t5-base-ko) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_keras_callback"], "model-index": [{"name": "Waynehills_summary_tensorflow", "results": []}]} | Waynehillsdev/Waynehills_summary_tensorflow | null | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #tf #t5 #text2text-generation #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Waynehills_summary_tensorflow
This model is a fine-tuned version of KETI-AIR/ke-t5-base-ko on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
| [
"# Waynehills_summary_tensorflow\n\nThis model is a fine-tuned version of KETI-AIR/ke-t5-base-ko on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- TensorFlow 2.7.0\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #tf #t5 #text2text-generation #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Waynehills_summary_tensorflow\n\nThis model is a fine-tuned version of KETI-AIR/ke-t5-base-ko on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- TensorFlow 2.7.0\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4180
- Wer: 0.3392
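A minimal transcription sketch, assuming the hub id Waynehillsdev/wav2vec2-base-timit-demo-colab this card is published under; wav2vec2-base expects 16 kHz mono audio, and the file path is a placeholder:

```python
from transformers import pipeline

# hub id this card is published under; decoding the audio file requires ffmpeg
asr = pipeline(
    "automatic-speech-recognition",
    model="Waynehillsdev/wav2vec2-base-timit-demo-colab",
)

# "sample.wav" is a placeholder for a local 16 kHz mono recording
print(asr("sample.wav")["text"])
```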
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.656 | 4.0 | 500 | 1.8973 | 1.0130 |
| 0.8647 | 8.0 | 1000 | 0.4667 | 0.4705 |
| 0.2968 | 12.0 | 1500 | 0.4211 | 0.4035 |
| 0.1719 | 16.0 | 2000 | 0.4725 | 0.3739 |
| 0.1272 | 20.0 | 2500 | 0.4586 | 0.3543 |
| 0.1079 | 24.0 | 3000 | 0.4356 | 0.3484 |
| 0.0808 | 28.0 | 3500 | 0.4180 | 0.3392 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]} | Waynehillsdev/wav2vec2-base-timit-demo-colab | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-base-timit-demo-colab
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4180
* Wer: 0.3392
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
question-answering | transformers | Model for Extraction-based MRC
Original model: klue/roberta-large
Designed for ODQA Competition | {} | Doohae/roberta | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #question-answering #endpoints_compatible #region-us
| Model for Extraction-based MRC
Original model: klue/roberta-large
Designed for ODQA Competition | [] | [
"TAGS\n#transformers #pytorch #roberta #question-answering #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Rick DialoGPT model | {"tags": ["conversational"]} | Doquey/DialoGPT-small-Luisbot1 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick DialoGPT model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Michael | {"tags": "conversational"} | Doquey/DialoGPT-small-Michaelbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Michael | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Celestia Ludenburg DialoGPT Model | {"tags": ["conversational"]} | Doxophobia/DialoGPT-medium-celeste | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Celestia Ludenburg DialoGPT Model | [
"# Celestia Ludenburg DiabloGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Celestia Ludenburg DiabloGPT Model"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tmp_qubhe07
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1374, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_keras_callback"], "model-index": [{"name": "tmp_qubhe07", "results": []}]} | DoyyingFace/doyying_bert_first_again | null | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us
|
# tmp_qubhe07
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1374, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
| [
"# tmp_qubhe07\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1374, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- TensorFlow 2.7.0\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n",
"# tmp_qubhe07\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1374, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- TensorFlow 2.7.0\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
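This appears to be the TensorFlow checkpoint from the standard camembert-base fine-tuning walkthrough; a minimal fill-mask sketch, assuming the hub id DoyyingFace/dummy-model from this card's metadata and TensorFlow-only weights:

```python
from transformers import pipeline

# framework="tf" because this repo appears to ship TensorFlow weights only
fill_mask = pipeline(
    "fill-mask",
    model="DoyyingFace/dummy-model",
    framework="tf",
)

for pred in fill_mask("Le camembert est <mask> !"):
    print(pred["token_str"], round(pred["score"], 3))
```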
| {"license": "mit", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "dummy-model", "results": []}]} | DoyyingFace/dummy-model | null | [
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #tf #camembert #fill-mask #generated_from_keras_callback #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# dummy-model
This model is a fine-tuned version of camembert-base on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
| [
"# dummy-model\n\nThis model is a fine-tuned version of camembert-base on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- TensorFlow 2.7.0\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #tf #camembert #fill-mask #generated_from_keras_callback #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# dummy-model\n\nThis model is a fine-tuned version of camembert-base on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- TensorFlow 2.7.0\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
text-generation | transformers |
# Legacies DialoGPT Model | {"tags": ["conversational"]} | Dragoniod1596/DialoGPT-small-Legacies | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Legacies DialoGPT Model | [
"# Legacies DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Legacies DialoGPT Model"
] |
text-generation | transformers |
# Uncle Iroh DialoGPT Model | {"tags": ["conversational"]} | Dreyzin/DialoGPT-medium-avatar | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Uncle Iroh DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5620
- Wer: 0.5651
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_7_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-ab-CV7 --dataset mozilla-foundation/common_voice_7_0 --config ab --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
NA
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.6445 | 13.64 | 300 | 4.3963 | 1.0 |
| 3.6459 | 27.27 | 600 | 3.2267 | 1.0 |
| 3.0978 | 40.91 | 900 | 3.0927 | 1.0 |
| 2.8357 | 54.55 | 1200 | 2.1462 | 1.0029 |
| 1.2723 | 68.18 | 1500 | 0.6747 | 0.6996 |
| 0.6528 | 81.82 | 1800 | 0.5928 | 0.6422 |
| 0.4905 | 95.45 | 2100 | 0.5587 | 0.5681 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| {"language": ["ab"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-ab-CV7", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ab"}, "metrics": [{"type": "wer", "value": 0.5291160452450775, "name": "Test WER"}, {"type": "cer", "value": 0.10630270750110964, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ab"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-ab-CV7 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ab"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - AB dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5620
* Wer: 0.5651
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_7\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-ab-CV7 --dataset mozilla-foundation/common\_voice\_7\_0 --config ab --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
NA
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-ab-CV7 --dataset mozilla-foundation/common\\_voice\\_7\\_0 --config ab --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nNA",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-ab-CV7 --dataset mozilla-foundation/common\\_voice\\_7\\_0 --config ab --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nNA",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6178
- Wer: 0.5794
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 70.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2793 | 27.27 | 300 | 3.0737 | 1.0 |
| 1.5348 | 54.55 | 600 | 0.6312 | 0.6334 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| {"language": ["ab"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-ab-v4 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ab"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - AB dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6178
* Wer: 0.5794
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.00025
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 70.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00025\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 70.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00025\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 70.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-as-g1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AS dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3327
- Wer: 0.5744
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1 --dataset mozilla-foundation/common_voice_8_0 --config as --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Assamese language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 14.1958 | 5.26 | 100 | 7.1919 | 1.0 |
| 5.0035 | 10.51 | 200 | 3.9362 | 1.0 |
| 3.6193 | 15.77 | 300 | 3.4451 | 1.0 |
| 3.4852 | 21.05 | 400 | 3.3536 | 1.0 |
| 2.8489 | 26.31 | 500 | 1.6451 | 0.9100 |
| 0.9568 | 31.56 | 600 | 1.0514 | 0.7561 |
| 0.4865 | 36.82 | 700 | 1.0434 | 0.7184 |
| 0.322 | 42.1 | 800 | 1.0825 | 0.7210 |
| 0.2383 | 47.36 | 900 | 1.1304 | 0.6897 |
| 0.2136 | 52.62 | 1000 | 1.1150 | 0.6854 |
| 0.179 | 57.87 | 1100 | 1.2453 | 0.6875 |
| 0.1539 | 63.15 | 1200 | 1.2211 | 0.6704 |
| 0.1303 | 68.41 | 1300 | 1.2859 | 0.6747 |
| 0.1183 | 73.67 | 1400 | 1.2775 | 0.6721 |
| 0.0994 | 78.92 | 1500 | 1.2321 | 0.6404 |
| 0.0991 | 84.21 | 1600 | 1.2766 | 0.6524 |
| 0.0887 | 89.46 | 1700 | 1.3026 | 0.6344 |
| 0.0754 | 94.72 | 1800 | 1.3199 | 0.6704 |
| 0.0693 | 99.97 | 1900 | 1.3044 | 0.6361 |
| 0.0568 | 105.26 | 2000 | 1.3541 | 0.6254 |
| 0.0536 | 110.51 | 2100 | 1.3320 | 0.6249 |
| 0.0529 | 115.77 | 2200 | 1.3370 | 0.6271 |
| 0.048 | 121.05 | 2300 | 1.2757 | 0.6031 |
| 0.0419 | 126.31 | 2400 | 1.2661 | 0.6172 |
| 0.0349 | 131.56 | 2500 | 1.2897 | 0.6048 |
| 0.0309 | 136.82 | 2600 | 1.2688 | 0.5962 |
| 0.0278 | 142.1 | 2700 | 1.2885 | 0.5954 |
| 0.0254 | 147.36 | 2800 | 1.2988 | 0.5915 |
| 0.0223 | 152.62 | 2900 | 1.3153 | 0.5941 |
| 0.0216 | 157.87 | 3000 | 1.2936 | 0.5937 |
| 0.0186 | 163.15 | 3100 | 1.2906 | 0.5877 |
| 0.0156 | 168.41 | 3200 | 1.3476 | 0.5962 |
| 0.0158 | 173.67 | 3300 | 1.3363 | 0.5847 |
| 0.0142 | 178.92 | 3400 | 1.3367 | 0.5847 |
| 0.0153 | 184.21 | 3500 | 1.3105 | 0.5757 |
| 0.0119 | 189.46 | 3600 | 1.3255 | 0.5705 |
| 0.0115 | 194.72 | 3700 | 1.3340 | 0.5787 |
| 0.0103 | 199.97 | 3800 | 1.3327 | 0.5744 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["as"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "as", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-as-g1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "as"}, "metrics": [{"type": "wer", "value": 0.6540934419202743, "name": "Test WER"}, {"type": "cer", "value": 0.21454042646095625, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "as"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"as",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"as"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #as #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-as-g1
===============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - AS dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3327
* Wer: 0.5744
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1 --dataset mozilla-foundation/common\_voice\_8\_0 --config as --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Assamese language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config as --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nAssamese language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #as #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config as --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nAssamese language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-as-v9
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1679
- Wer: 0.5761
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split (a rough stand-in script is sketched after this list)
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9 --dataset mozilla-foundation/common_voice_8_0 --config as --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Assamese (as) language isn't available in speech-recognition-community-v2/dev_data
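The referenced `eval.py` is the Robust Speech Event evaluation script and is not included in this card. As a rough stand-in (the loop below is an assumption, not the event's exact code, and omits the text normalization that script applies), the test-split numbers can be approximated like this:

```python
# Minimal WER/CER sketch for the Common Voice 8.0 "as" test split.
# NOTE: not the event's eval.py; that script also lowercases and strips
# punctuation before scoring, which is omitted here.
import torch
from datasets import Audio, load_dataset
from jiwer import cer, wer
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9",
    device=0 if torch.cuda.is_available() else -1,
)

ds = load_dataset("mozilla-foundation/common_voice_8_0", "as",
                  split="test", use_auth_token=True)       # gated dataset
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # wav2vec2 expects 16 kHz

refs, preds = [], []
for sample in ds:
    preds.append(asr(sample["audio"]["array"])["text"])
    refs.append(sample["sentence"])

print("WER:", wer(refs, preds), "CER:", cer(refs, preds))
```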
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 200
- mixed_precision_training: Native AMP
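The fine-tuning script itself is not part of this card; as a hedged sketch, the list above maps onto `transformers.TrainingArguments` roughly as follows (the output path is made up):

```python
# Hedged mapping of the hyperparameters above onto TrainingArguments.
# Adam with betas=(0.9,0.999) and epsilon=1e-08 is simply the Trainer's
# default (AdamW) optimizer; "Native AMP" corresponds to fp16=True.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xls-r-300m-as-v9",  # hypothetical path
    learning_rate=0.000111,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # 16 x 2 = total_train_batch_size 32
    lr_scheduler_type="linear",
    warmup_steps=300,
    num_train_epochs=200,
    fp16=True,                       # mixed_precision_training: Native AMP
)
```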
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.3852 | 10.51 | 200 | 3.6402 | 1.0 |
| 3.5374 | 21.05 | 400 | 3.3894 | 1.0 |
| 2.8645 | 31.56 | 600 | 1.3143 | 0.8303 |
| 1.1784 | 42.1 | 800 | 0.9417 | 0.6661 |
| 0.7805 | 52.62 | 1000 | 0.9292 | 0.6237 |
| 0.5973 | 63.15 | 1200 | 0.9489 | 0.6014 |
| 0.4784 | 73.67 | 1400 | 0.9916 | 0.5962 |
| 0.4138 | 84.21 | 1600 | 1.0272 | 0.6121 |
| 0.3491 | 94.72 | 1800 | 1.0412 | 0.5984 |
| 0.3062 | 105.26 | 2000 | 1.0769 | 0.6005 |
| 0.2707 | 115.77 | 2200 | 1.0708 | 0.5752 |
| 0.2459 | 126.31 | 2400 | 1.1285 | 0.6009 |
| 0.2234 | 136.82 | 2600 | 1.1209 | 0.5949 |
| 0.2035 | 147.36 | 2800 | 1.1348 | 0.5842 |
| 0.1876 | 157.87 | 3000 | 1.1480 | 0.5872 |
| 0.1669 | 168.41 | 3200 | 1.1496 | 0.5838 |
| 0.1595 | 178.92 | 3400 | 1.1721 | 0.5778 |
| 0.1505 | 189.46 | 3600 | 1.1654 | 0.5744 |
| 0.1486 | 199.97 | 3800 | 1.1679 | 0.5761 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| {"language": ["as"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "as", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-as-v9", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "hsb"}, "metrics": [{"type": "wer", "value": 0.6163737676810973, "name": "Test WER"}, {"type": "cer", "value": 0.19496397642093005, "name": "Test CER"}, {"type": "wer", "value": 61.64, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "as"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"as",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"as"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #as #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-as-v9
===============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1679
* Wer: 0.5761
### Evaluation Command
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9 --dataset mozilla-foundation/common\_voice\_8\_0 --config as --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Assamese (as) language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000111
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 300
* num\_epochs: 200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.1
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
| [
"### Evaluation Command\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config as --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nAssamese (as) language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000111\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #as #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Command\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config as --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nAssamese (as) language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000111\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
### Note: Files are missing. Probably they didn't get pushed to git properly. :(
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. A generic sketch of the language-model-boosted decoding implied by the name follows the results below.
It achieves the following results on the evaluation set:
- Loss: 1.1679
- Wer: 0.5761
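Since the repository files are missing, only the intent of the "with-LM" name can be illustrated. Below is a generic, assumed sketch of CTC decoding boosted by an n-gram language model via `Wav2Vec2ProcessorWithLM` (requires `pyctcdecode` and `kenlm`); it is not this repo's actual setup:

```python
# Generic LM-boosted CTC decoding sketch; the model_id and the presence of a
# language_model/ directory in the repo are assumptions (files are missing).
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-as-with-LM-v2"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

def transcribe(waveform_16khz):
    inputs = processor(waveform_16khz, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    # pyctcdecode beam-searches the CTC logits, rescored by the n-gram LM
    return processor.batch_decode(logits.numpy()).text[0]
```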
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.3852 | 10.51 | 200 | 3.6402 | 1.0 |
| 3.5374 | 21.05 | 400 | 3.3894 | 1.0 |
| 2.8645 | 31.56 | 600 | 1.3143 | 0.8303 |
| 1.1784 | 42.1 | 800 | 0.9417 | 0.6661 |
| 0.7805 | 52.62 | 1000 | 0.9292 | 0.6237 |
| 0.5973 | 63.15 | 1200 | 0.9489 | 0.6014 |
| 0.4784 | 73.67 | 1400 | 0.9916 | 0.5962 |
| 0.4138 | 84.21 | 1600 | 1.0272 | 0.6121 |
| 0.3491 | 94.72 | 1800 | 1.0412 | 0.5984 |
| 0.3062 | 105.26 | 2000 | 1.0769 | 0.6005 |
| 0.2707 | 115.77 | 2200 | 1.0708 | 0.5752 |
| 0.2459 | 126.31 | 2400 | 1.1285 | 0.6009 |
| 0.2234 | 136.82 | 2600 | 1.1209 | 0.5949 |
| 0.2035 | 147.36 | 2800 | 1.1348 | 0.5842 |
| 0.1876 | 157.87 | 3000 | 1.1480 | 0.5872 |
| 0.1669 | 168.41 | 3200 | 1.1496 | 0.5838 |
| 0.1595 | 178.92 | 3400 | 1.1721 | 0.5778 |
| 0.1505 | 189.46 | 3600 | 1.1654 | 0.5744 |
| 0.1486 | 199.97 | 3800 | 1.1679 | 0.5761 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| {"language": ["as"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "as", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-as-with-LM-v2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "hsb"}, "metrics": [{"type": "wer", "value": [], "name": "Test WER"}, {"type": "cer", "value": [], "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-as-with-LM-v2 | null | [
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"as",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"as"
] | TAGS
#automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #as #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-common_voice #license-apache-2.0 #model-index #region-us
| ### Note: Files are missing. Probably they didn't get pushed to git properly. :(
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1679
* Wer: 0.5761
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000111
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 300
* num\_epochs: 200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.1
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
| [
"### Note: Files are missing. Probably, didn't get (git)pushed properly. :(\n\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\\_voice dataset.\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 1.1679\n* Wer: 0.5761\n\n\nModel description\n-----------------\n\n\nMore information needed\n\n\nIntended uses & limitations\n---------------------------\n\n\nMore information needed\n\n\nTraining and evaluation data\n----------------------------\n\n\nMore information needed\n\n\nTraining procedure\n------------------",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000111\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] | [
"TAGS\n#automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #as #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-common_voice #license-apache-2.0 #model-index #region-us \n",
"### Note: Files are missing. Probably, didn't get (git)pushed properly. :(\n\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\\_voice dataset.\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 1.1679\n* Wer: 0.5761\n\n\nModel description\n-----------------\n\n\nMore information needed\n\n\nIntended uses & limitations\n---------------------------\n\n\nMore information needed\n\n\nTraining and evaluation data\n----------------------------\n\n\nMore information needed\n\n\nTraining procedure\n------------------",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000111\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bas-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BAS dataset. A minimal usage sketch follows the results below.
It achieves the following results on the evaluation set:
- Loss: 0.5997
- Wer: 0.3870
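As a minimal usage sketch (assumed, not taken from the card), the checkpoint can transcribe a single recording with plain greedy CTC decoding:

```python
# Greedy CTC transcription sketch; "sample.wav" is a hypothetical input file.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

waveform, sr = torchaudio.load("sample.wav")
waveform = torchaudio.functional.resample(waveform, sr, 16_000)  # model expects 16 kHz

inputs = processor(waveform.squeeze(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)          # greedy CTC decode, no LM
print(processor.batch_decode(pred_ids)[0])
```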
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1 --dataset mozilla-foundation/common_voice_8_0 --config bas --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Basaa (bas) language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.7076 | 5.26 | 200 | 3.6361 | 1.0 |
| 3.1657 | 10.52 | 400 | 3.0101 | 1.0 |
| 2.3987 | 15.78 | 600 | 0.9125 | 0.6774 |
| 1.0079 | 21.05 | 800 | 0.6477 | 0.5352 |
| 0.7392 | 26.31 | 1000 | 0.5432 | 0.4929 |
| 0.6114 | 31.57 | 1200 | 0.5498 | 0.4639 |
| 0.5222 | 36.83 | 1400 | 0.5220 | 0.4561 |
| 0.4648 | 42.1 | 1600 | 0.5586 | 0.4289 |
| 0.4103 | 47.36 | 1800 | 0.5337 | 0.4082 |
| 0.3692 | 52.62 | 2000 | 0.5421 | 0.3861 |
| 0.3403 | 57.88 | 2200 | 0.5549 | 0.4096 |
| 0.3011 | 63.16 | 2400 | 0.5833 | 0.3925 |
| 0.2932 | 68.42 | 2600 | 0.5674 | 0.3815 |
| 0.2696 | 73.68 | 2800 | 0.5734 | 0.3889 |
| 0.2496 | 78.94 | 3000 | 0.5968 | 0.3985 |
| 0.2289 | 84.21 | 3200 | 0.5888 | 0.3893 |
| 0.2091 | 89.47 | 3400 | 0.5849 | 0.3852 |
| 0.2005 | 94.73 | 3600 | 0.5938 | 0.3875 |
| 0.1876 | 99.99 | 3800 | 0.5997 | 0.3870 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| {"language": ["bas"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "bas", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-bas-v1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "bas"}, "metrics": [{"type": "wer", "value": 0.3566497929130234, "name": "Test WER"}, {"type": "cer", "value": 0.1102657634184471, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "bas"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"bas",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bas"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #bas #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-bas-v1
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - BAS dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5997
* Wer: 0.3870
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1 --dataset mozilla-foundation/common\_voice\_8\_0 --config bas --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Basaa (bas) language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000111
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 100
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.1
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config bas --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nBasaa (bas) language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000111\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #bas #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config bas --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nBasaa (bas) language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000111\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bg-d2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3421
- Wer: 0.2860
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset mozilla-foundation/common_voice_8_0 --config bg --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data (the chunking flags are illustrated in the sketch after this list)
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 10 --stride_length_s 1
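The `--chunk_length_s 10 --stride_length_s 1` flags correspond to the ASR pipeline's long-audio chunking: recordings are split into 10-second windows with 1 second of overlapping stride, and the CTC outputs are stitched back together. A hedged sketch (the input filename is made up):

```python
# Chunked inference sketch for long recordings; requires ffmpeg to read files.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2")

# Mirrors --chunk_length_s 10 --stride_length_s 1 from the command above.
result = asr("long_recording.wav", chunk_length_s=10, stride_length_s=1)
print(result["text"])
```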
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 700
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.8791 | 1.74 | 200 | 3.1902 | 1.0 |
| 3.0441 | 3.48 | 400 | 2.8098 | 0.9864 |
| 1.1499 | 5.22 | 600 | 0.4668 | 0.5014 |
| 0.4968 | 6.96 | 800 | 0.4162 | 0.4472 |
| 0.3553 | 8.7 | 1000 | 0.3580 | 0.3777 |
| 0.3027 | 10.43 | 1200 | 0.3422 | 0.3506 |
| 0.2562 | 12.17 | 1400 | 0.3556 | 0.3639 |
| 0.2272 | 13.91 | 1600 | 0.3621 | 0.3583 |
| 0.2125 | 15.65 | 1800 | 0.3436 | 0.3358 |
| 0.1904 | 17.39 | 2000 | 0.3650 | 0.3545 |
| 0.1695 | 19.13 | 2200 | 0.3366 | 0.3241 |
| 0.1532 | 20.87 | 2400 | 0.3550 | 0.3311 |
| 0.1453 | 22.61 | 2600 | 0.3582 | 0.3131 |
| 0.1359 | 24.35 | 2800 | 0.3524 | 0.3084 |
| 0.1233 | 26.09 | 3000 | 0.3503 | 0.2973 |
| 0.1114 | 27.83 | 3200 | 0.3434 | 0.2946 |
| 0.1051 | 29.57 | 3400 | 0.3474 | 0.2956 |
| 0.0965 | 31.3 | 3600 | 0.3426 | 0.2907 |
| 0.0923 | 33.04 | 3800 | 0.3478 | 0.2894 |
| 0.0894 | 34.78 | 4000 | 0.3421 | 0.2860 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["bg"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "bg", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-bg-d2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "bg"}, "metrics": [{"type": "wer", "value": 0.28775471338792613, "name": "Test WER"}, {"type": "cer", "value": 0.06861971204625049, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "bg"}, "metrics": [{"type": "wer", "value": 0.49783147459727384, "name": "Test WER"}, {"type": "cer", "value": 0.1591062599627158, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "bg"}, "metrics": [{"type": "wer", "value": 51.25, "name": "Test WER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"bg",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bg"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #bg #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-bg-d2
===============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - BG dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3421
* Wer: 0.2860
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset mozilla-foundation/common\_voice\_8\_0 --config bg --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset speech-recognition-community-v2/dev\_data --config bg --split validation --chunk\_length\_s 10 --stride\_length\_s 1
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.00025
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 700
* num\_epochs: 35
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config bg --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset speech-recognition-community-v2/dev\\_data --config bg --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00025\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 700\n* num\\_epochs: 35\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #bg #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config bg --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset speech-recognition-community-v2/dev\\_data --config bg --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00025\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 700\n* num\\_epochs: 35\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bg-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5197
- Wer: 0.4689
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset mozilla-foundation/common_voice_8_0 --config bg --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 10 --stride_length_s 1
### Training hyperparameters
The following hyperparameters were used during training (the warmup schedule is sketched after this list):
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
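A sketch of the `linear` schedule with 2000 warmup steps, using the real `get_linear_schedule_with_warmup` helper; the total step count is read off the last logged step (5700) in the table below and is approximate:

```python
# Linear warmup + decay schedule sketch; the Linear layer is a stand-in model.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(512, 512)                 # placeholder for wav2vec2
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=7e-05, betas=(0.9, 0.999), eps=1e-08)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=2000,    # LR ramps 0 -> 7e-05 over the first 2000 steps
    num_training_steps=5700,  # then decays linearly toward 0 (approximate total)
)
```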
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.3711 | 2.61 | 300 | 4.3122 | 1.0 |
| 3.1653 | 5.22 | 600 | 3.1156 | 1.0 |
| 2.8904 | 7.83 | 900 | 2.8421 | 0.9918 |
| 0.9207 | 10.43 | 1200 | 0.9895 | 0.8689 |
| 0.6384 | 13.04 | 1500 | 0.6994 | 0.7700 |
| 0.5215 | 15.65 | 1800 | 0.5628 | 0.6443 |
| 0.4573 | 18.26 | 2100 | 0.5316 | 0.6174 |
| 0.3875 | 20.87 | 2400 | 0.4932 | 0.5779 |
| 0.3562 | 23.48 | 2700 | 0.4972 | 0.5475 |
| 0.3218 | 26.09 | 3000 | 0.4895 | 0.5219 |
| 0.2954 | 28.7 | 3300 | 0.5226 | 0.5192 |
| 0.287 | 31.3 | 3600 | 0.4957 | 0.5146 |
| 0.2587 | 33.91 | 3900 | 0.4944 | 0.4893 |
| 0.2496 | 36.52 | 4200 | 0.4976 | 0.4895 |
| 0.2365 | 39.13 | 4500 | 0.5185 | 0.4819 |
| 0.2264 | 41.74 | 4800 | 0.5152 | 0.4776 |
| 0.2224 | 44.35 | 5100 | 0.5031 | 0.4746 |
| 0.2096 | 46.96 | 5400 | 0.5062 | 0.4708 |
| 0.2038 | 49.57 | 5700 | 0.5217 | 0.4698 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["bg"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "bg", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-bg-v1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "bg"}, "metrics": [{"type": "wer", "value": 0.4709579127785184, "name": "Test WER"}, {"type": "cer", "value": 0.10205125354383235, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "bg"}, "metrics": [{"type": "wer", "value": 0.7053128872366791, "name": "Test WER"}, {"type": "cer", "value": 0.210804311998487, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "bg"}, "metrics": [{"type": "wer", "value": 72.6, "name": "Test WER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bg",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bg"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #bg #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-bg-v1
===============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - BG dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5197
* Wer: 0.4689
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset mozilla-foundation/common\_voice\_8\_0 --config bg --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset speech-recognition-community-v2/dev\_data --config bg --split validation --chunk\_length\_s 10 --stride\_length\_s 1
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7e-05
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 50.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config bg --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset speech-recognition-community-v2/dev\\_data --config bg --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #bg #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config bg --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset speech-recognition-community-v2/dev\\_data --config bg --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-br-d10
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1382
- Wer: 0.4895
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d10 --dataset mozilla-foundation/common_voice_8_0 --config br --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Breton language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training (a Native AMP sketch follows the list):
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 50
- mixed_precision_training: Native AMP
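"Native AMP" refers to PyTorch's built-in automatic mixed precision, which the Trainer enables via `fp16=True`. A generic sketch of the underlying mechanics (a toy model and loss, not the author's actual loop):

```python
# Native AMP sketch: fp16 forward pass with loss scaling; toy model/loss only.
import torch

model = torch.nn.Linear(512, 512).cuda()          # stand-in for wav2vec2
optimizer = torch.optim.AdamW(model.parameters(), lr=4e-4)
scaler = torch.cuda.amp.GradScaler()

def train_step(batch, target):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():               # run the forward pass in fp16
        loss = torch.nn.functional.mse_loss(model(batch), target)
    scaler.scale(loss).backward()                 # scale to avoid fp16 underflow
    scaler.step(optimizer)                        # unscale, then optimizer step
    scaler.update()
    return loss.item()
```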
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 13.611 | 0.68 | 100 | 5.8492 | 1.0 |
| 3.8176 | 1.35 | 200 | 3.2181 | 1.0 |
| 3.0457 | 2.03 | 300 | 3.0902 | 1.0 |
| 2.2632 | 2.7 | 400 | 1.4882 | 0.9426 |
| 1.1965 | 3.38 | 500 | 1.1396 | 0.7950 |
| 0.984 | 4.05 | 600 | 1.0216 | 0.7583 |
| 0.8036 | 4.73 | 700 | 1.0258 | 0.7202 |
| 0.7061 | 5.41 | 800 | 0.9710 | 0.6820 |
| 0.689 | 6.08 | 900 | 0.9731 | 0.6488 |
| 0.6063 | 6.76 | 1000 | 0.9442 | 0.6569 |
| 0.5215 | 7.43 | 1100 | 1.0221 | 0.6671 |
| 0.4965 | 8.11 | 1200 | 0.9266 | 0.6181 |
| 0.4321 | 8.78 | 1300 | 0.9050 | 0.5991 |
| 0.3762 | 9.46 | 1400 | 0.9801 | 0.6134 |
| 0.3747 | 10.14 | 1500 | 0.9210 | 0.5747 |
| 0.3554 | 10.81 | 1600 | 0.9720 | 0.6051 |
| 0.3148 | 11.49 | 1700 | 0.9672 | 0.6099 |
| 0.3176 | 12.16 | 1800 | 1.0120 | 0.5966 |
| 0.2915 | 12.84 | 1900 | 0.9490 | 0.5653 |
| 0.2696 | 13.51 | 2000 | 0.9394 | 0.5819 |
| 0.2569 | 14.19 | 2100 | 1.0197 | 0.5667 |
| 0.2395 | 14.86 | 2200 | 0.9771 | 0.5608 |
| 0.2367 | 15.54 | 2300 | 1.0516 | 0.5678 |
| 0.2153 | 16.22 | 2400 | 1.0097 | 0.5679 |
| 0.2092 | 16.89 | 2500 | 1.0143 | 0.5430 |
| 0.2046 | 17.57 | 2600 | 1.0884 | 0.5631 |
| 0.1937 | 18.24 | 2700 | 1.0113 | 0.5648 |
| 0.1752 | 18.92 | 2800 | 1.0056 | 0.5470 |
| 0.164 | 19.59 | 2900 | 1.0340 | 0.5508 |
| 0.1723 | 20.27 | 3000 | 1.0743 | 0.5615 |
| 0.1535 | 20.95 | 3100 | 1.0495 | 0.5465 |
| 0.1432 | 21.62 | 3200 | 1.0390 | 0.5333 |
| 0.1561 | 22.3 | 3300 | 1.0798 | 0.5590 |
| 0.1384 | 22.97 | 3400 | 1.1716 | 0.5449 |
| 0.1359 | 23.65 | 3500 | 1.1154 | 0.5420 |
| 0.1356 | 24.32 | 3600 | 1.0883 | 0.5387 |
| 0.1355 | 25.0 | 3700 | 1.1114 | 0.5504 |
| 0.1158 | 25.68 | 3800 | 1.1171 | 0.5388 |
| 0.1166 | 26.35 | 3900 | 1.1335 | 0.5403 |
| 0.1165 | 27.03 | 4000 | 1.1374 | 0.5248 |
| 0.1064 | 27.7 | 4100 | 1.0336 | 0.5298 |
| 0.0987 | 28.38 | 4200 | 1.0407 | 0.5216 |
| 0.104 | 29.05 | 4300 | 1.1012 | 0.5350 |
| 0.0894 | 29.73 | 4400 | 1.1016 | 0.5310 |
| 0.0912 | 30.41 | 4500 | 1.1383 | 0.5302 |
| 0.0972 | 31.08 | 4600 | 1.0851 | 0.5214 |
| 0.0832 | 31.76 | 4700 | 1.1705 | 0.5311 |
| 0.0859 | 32.43 | 4800 | 1.0750 | 0.5192 |
| 0.0811 | 33.11 | 4900 | 1.0900 | 0.5180 |
| 0.0825 | 33.78 | 5000 | 1.1271 | 0.5196 |
| 0.07 | 34.46 | 5100 | 1.1289 | 0.5141 |
| 0.0689 | 35.14 | 5200 | 1.0960 | 0.5101 |
| 0.068 | 35.81 | 5300 | 1.1377 | 0.5050 |
| 0.0776 | 36.49 | 5400 | 1.0880 | 0.5194 |
| 0.0642 | 37.16 | 5500 | 1.1027 | 0.5076 |
| 0.0607 | 37.84 | 5600 | 1.1293 | 0.5119 |
| 0.0607 | 38.51 | 5700 | 1.1229 | 0.5103 |
| 0.0545 | 39.19 | 5800 | 1.1168 | 0.5103 |
| 0.0562 | 39.86 | 5900 | 1.1206 | 0.5073 |
| 0.0484 | 40.54 | 6000 | 1.1710 | 0.5019 |
| 0.0499 | 41.22 | 6100 | 1.1511 | 0.5100 |
| 0.0455 | 41.89 | 6200 | 1.1488 | 0.5009 |
| 0.0475 | 42.57 | 6300 | 1.1196 | 0.4944 |
| 0.0413 | 43.24 | 6400 | 1.1654 | 0.4996 |
| 0.0389 | 43.92 | 6500 | 1.0961 | 0.4930 |
| 0.0428 | 44.59 | 6600 | 1.0955 | 0.4938 |
| 0.039 | 45.27 | 6700 | 1.1323 | 0.4955 |
| 0.0352 | 45.95 | 6800 | 1.1040 | 0.4930 |
| 0.0334 | 46.62 | 6900 | 1.1382 | 0.4942 |
| 0.0338 | 47.3 | 7000 | 1.1264 | 0.4911 |
| 0.0307 | 47.97 | 7100 | 1.1216 | 0.4881 |
| 0.0286 | 48.65 | 7200 | 1.1459 | 0.4894 |
| 0.0348 | 49.32 | 7300 | 1.1419 | 0.4906 |
| 0.0329 | 50.0 | 7400 | 1.1382 | 0.4895 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["br"], "license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-br-d10", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "br"}, "metrics": [{"type": "wer", "value": 0.5230357484228637, "name": "Test WER"}, {"type": "cer", "value": 0.1880661144228536, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "br"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-br-d10 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"br",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"br"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #br #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-br-d10
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - BR dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1382
* Wer: 0.4895
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d10 --dataset mozilla-foundation/common\_voice\_8\_0 --config br --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Breton language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0004
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 800
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d10 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config br --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nBreton language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 800\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #br #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d10 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config br --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nBreton language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 800\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-br-d2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1257
- Wer: 0.4631
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2 --dataset mozilla-foundation/common_voice_8_0 --config br --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Breton language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training (a gradient-accumulation sketch follows the list):
- learning_rate: 0.00034
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- num_epochs: 50
- mixed_precision_training: Native AMP
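The relation total_train_batch_size = train_batch_size x gradient_accumulation_steps (32 = 16 x 2) means gradients from two consecutive batches are summed before each optimizer step. A toy sketch of the mechanism (not the card's script):

```python
# Gradient accumulation sketch with toy data: effective batch size 16 * 2 = 32.
import torch

model = torch.nn.Linear(64, 64)                   # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=0.00034)
accum_steps = 2                                   # gradient_accumulation_steps

loader = [(torch.randn(16, 64), torch.randn(16, 64)) for _ in range(4)]
optimizer.zero_grad()
for step, (batch, target) in enumerate(loader, start=1):
    loss = torch.nn.functional.mse_loss(model(batch), target)
    (loss / accum_steps).backward()               # accumulate averaged gradients
    if step % accum_steps == 0:                   # optimizer sees a batch of 32
        optimizer.step()
        optimizer.zero_grad()
```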
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 14.0379 | 0.68 | 100 | 5.6808 | 1.0 |
| 3.9145 | 1.35 | 200 | 3.1970 | 1.0 |
| 3.0293 | 2.03 | 300 | 2.9513 | 1.0 |
| 2.0927 | 2.7 | 400 | 1.4545 | 0.8887 |
| 1.1556 | 3.38 | 500 | 1.0966 | 0.7564 |
| 0.9628 | 4.05 | 600 | 0.9808 | 0.7364 |
| 0.7869 | 4.73 | 700 | 1.0488 | 0.7355 |
| 0.703 | 5.41 | 800 | 0.9500 | 0.6881 |
| 0.6657 | 6.08 | 900 | 0.9309 | 0.6259 |
| 0.5663 | 6.76 | 1000 | 0.9133 | 0.6357 |
| 0.496 | 7.43 | 1100 | 0.9890 | 0.6028 |
| 0.4748 | 8.11 | 1200 | 0.9469 | 0.5894 |
| 0.4135 | 8.78 | 1300 | 0.9270 | 0.6045 |
| 0.3579 | 9.46 | 1400 | 0.8818 | 0.5708 |
| 0.353 | 10.14 | 1500 | 0.9244 | 0.5781 |
| 0.334 | 10.81 | 1600 | 0.9009 | 0.5638 |
| 0.2917 | 11.49 | 1700 | 1.0132 | 0.5828 |
| 0.29 | 12.16 | 1800 | 0.9696 | 0.5668 |
| 0.2691 | 12.84 | 1900 | 0.9811 | 0.5455 |
| 0.25 | 13.51 | 2000 | 0.9951 | 0.5624 |
| 0.2467 | 14.19 | 2100 | 0.9653 | 0.5573 |
| 0.2242 | 14.86 | 2200 | 0.9714 | 0.5378 |
| 0.2066 | 15.54 | 2300 | 0.9829 | 0.5394 |
| 0.2075 | 16.22 | 2400 | 1.0547 | 0.5520 |
| 0.1923 | 16.89 | 2500 | 1.0014 | 0.5397 |
| 0.1919 | 17.57 | 2600 | 0.9978 | 0.5477 |
| 0.1908 | 18.24 | 2700 | 1.1064 | 0.5397 |
| 0.157 | 18.92 | 2800 | 1.0629 | 0.5238 |
| 0.159 | 19.59 | 2900 | 1.0642 | 0.5321 |
| 0.1652 | 20.27 | 3000 | 1.0207 | 0.5328 |
| 0.141 | 20.95 | 3100 | 0.9948 | 0.5312 |
| 0.1417 | 21.62 | 3200 | 1.0338 | 0.5328 |
| 0.1514 | 22.3 | 3300 | 1.0513 | 0.5313 |
| 0.1365 | 22.97 | 3400 | 1.0357 | 0.5291 |
| 0.1319 | 23.65 | 3500 | 1.0587 | 0.5167 |
| 0.1298 | 24.32 | 3600 | 1.0636 | 0.5236 |
| 0.1245 | 25.0 | 3700 | 1.1367 | 0.5280 |
| 0.1114 | 25.68 | 3800 | 1.0633 | 0.5200 |
| 0.1088 | 26.35 | 3900 | 1.0495 | 0.5210 |
| 0.1175 | 27.03 | 4000 | 1.0897 | 0.5095 |
| 0.1043 | 27.7 | 4100 | 1.0580 | 0.5309 |
| 0.0951 | 28.38 | 4200 | 1.0448 | 0.5067 |
| 0.1011 | 29.05 | 4300 | 1.0665 | 0.5137 |
| 0.0889 | 29.73 | 4400 | 1.0579 | 0.5026 |
| 0.0833 | 30.41 | 4500 | 1.0740 | 0.5037 |
| 0.0889 | 31.08 | 4600 | 1.0933 | 0.5083 |
| 0.0784 | 31.76 | 4700 | 1.0715 | 0.5089 |
| 0.0767 | 32.43 | 4800 | 1.0658 | 0.5049 |
| 0.0769 | 33.11 | 4900 | 1.1118 | 0.4979 |
| 0.0722 | 33.78 | 5000 | 1.1413 | 0.4986 |
| 0.0709 | 34.46 | 5100 | 1.0706 | 0.4885 |
| 0.0664 | 35.14 | 5200 | 1.1217 | 0.4884 |
| 0.0648 | 35.81 | 5300 | 1.1298 | 0.4941 |
| 0.0657 | 36.49 | 5400 | 1.1330 | 0.4920 |
| 0.0582 | 37.16 | 5500 | 1.0598 | 0.4835 |
| 0.0602 | 37.84 | 5600 | 1.1097 | 0.4943 |
| 0.0598 | 38.51 | 5700 | 1.0976 | 0.4876 |
| 0.0547 | 39.19 | 5800 | 1.0734 | 0.4825 |
| 0.0561 | 39.86 | 5900 | 1.0926 | 0.4850 |
| 0.0516 | 40.54 | 6000 | 1.1579 | 0.4751 |
| 0.0478 | 41.22 | 6100 | 1.1384 | 0.4706 |
| 0.0396 | 41.89 | 6200 | 1.1462 | 0.4739 |
| 0.0472 | 42.57 | 6300 | 1.1277 | 0.4732 |
| 0.0447 | 43.24 | 6400 | 1.1517 | 0.4752 |
| 0.0423 | 43.92 | 6500 | 1.1219 | 0.4784 |
| 0.0426 | 44.59 | 6600 | 1.1311 | 0.4724 |
| 0.0391 | 45.27 | 6700 | 1.1135 | 0.4692 |
| 0.0362 | 45.95 | 6800 | 1.0878 | 0.4645 |
| 0.0329 | 46.62 | 6900 | 1.1137 | 0.4668 |
| 0.0356 | 47.3 | 7000 | 1.1233 | 0.4687 |
| 0.0328 | 47.97 | 7100 | 1.1238 | 0.4653 |
| 0.0323 | 48.65 | 7200 | 1.1307 | 0.4646 |
| 0.0325 | 49.32 | 7300 | 1.1242 | 0.4645 |
| 0.03 | 50.0 | 7400 | 1.1257 | 0.4631 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["br"], "license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-br-d2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "br"}, "metrics": [{"type": "wer", "value": 0.49770598355954887, "name": "Test WER"}, {"type": "cer", "value": 0.18090500890299605, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "br"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"br",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"br"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #br #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-br-d2
===============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - BR dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1257
* Wer: 0.4631
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2 --dataset mozilla-foundation/common\_voice\_8\_0 --config br --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Breton language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.00034
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 750
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config br --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nBreton language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00034\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 750\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #br #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config br --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nBreton language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00034\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 750\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-gn-k1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9220
- Wer: 0.6631
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-gn-k1 --dataset mozilla-foundation/common_voice_8_0 --config gn --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
NA
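For a quick sanity check outside of the `eval.py` script, the checkpoint can also be loaded with the `transformers` ASR pipeline. A minimal sketch (not from the original card; the audio path is a placeholder, and a 16 kHz mono file plus a local `ffmpeg` install are assumed):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-gn-k1",
)
# "sample_gn.wav" is a placeholder path to any 16 kHz mono audio file.
print(asr("sample_gn.wav")["text"])
```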
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 0.00018
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 200
- mixed_precision_training: Native AMP
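The mapping below is an illustrative sketch only, not the author's actual training script; it shows how the listed values would translate into `transformers.TrainingArguments`. The `output_dir` is a placeholder, and the Adam betas/epsilon above are the `Trainer` defaults, so they need no explicit arguments:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xls-r-300m-gn-k1",  # placeholder path
    learning_rate=0.00018,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 16 x 2 = total train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=600,
    num_train_epochs=200,
    fp16=True,  # Native AMP mixed-precision training
)
```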
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 15.9402 | 8.32 | 100 | 6.9185 | 1.0 |
| 4.6367 | 16.64 | 200 | 3.7416 | 1.0 |
| 3.4337 | 24.96 | 300 | 3.2581 | 1.0 |
| 3.2307 | 33.32 | 400 | 2.8008 | 1.0 |
| 1.3182 | 41.64 | 500 | 0.8359 | 0.8171 |
| 0.409 | 49.96 | 600 | 0.8470 | 0.8323 |
| 0.2573 | 58.32 | 700 | 0.7823 | 0.7576 |
| 0.1969 | 66.64 | 800 | 0.8306 | 0.7424 |
| 0.1469 | 74.96 | 900 | 0.9225 | 0.7713 |
| 0.1172 | 83.32 | 1000 | 0.7903 | 0.6951 |
| 0.1017 | 91.64 | 1100 | 0.8519 | 0.6921 |
| 0.0851 | 99.96 | 1200 | 0.8129 | 0.6646 |
| 0.071 | 108.32 | 1300 | 0.8614 | 0.7043 |
| 0.061 | 116.64 | 1400 | 0.8414 | 0.6921 |
| 0.0552 | 124.96 | 1500 | 0.8649 | 0.6905 |
| 0.0465 | 133.32 | 1600 | 0.8575 | 0.6646 |
| 0.0381 | 141.64 | 1700 | 0.8802 | 0.6723 |
| 0.0338 | 149.96 | 1800 | 0.8731 | 0.6845 |
| 0.0306 | 158.32 | 1900 | 0.9003 | 0.6585 |
| 0.0236 | 166.64 | 2000 | 0.9408 | 0.6616 |
| 0.021 | 174.96 | 2100 | 0.9353 | 0.6723 |
| 0.0212 | 183.32 | 2200 | 0.9269 | 0.6570 |
| 0.0191 | 191.64 | 2300 | 0.9277 | 0.6662 |
| 0.0161 | 199.96 | 2400 | 0.9220 | 0.6631 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["gn"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "gn", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-gn-k1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "gn"}, "metrics": [{"type": "wer", "value": 0.711890243902439, "name": "Test WER"}, {"type": "cer", "value": 0.13311897106109324, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "gn"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-gn-k1 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"gn",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"gn"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #gn #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-gn-k1
===============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - GN dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9220
* Wer: 0.6631
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-gn-k1 --dataset mozilla-foundation/common\_voice\_8\_0 --config gn --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
NA
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.00018
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 600
* num\_epochs: 200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-gn-k1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config gn --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nNA",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00018\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 600\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #gn #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-gn-k1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config gn --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nNA",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00018\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 600\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-CV7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6588
- Wer: 0.2987
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_7_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7 --dataset mozilla-foundation/common_voice_7_0 --config hi --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
NA
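Beyond `eval.py`, the WER above can be reproduced approximately with `datasets` directly. A hedged sketch (assumptions: the gated dataset's terms have been accepted on the Hub, and the text normalization applied by `eval.py` is omitted here, so the score will differ slightly):

```python
import torch
from datasets import Audio, load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Gated dataset: requires accepting the terms on the Hub and an auth token.
test = load_dataset("mozilla-foundation/common_voice_7_0", "hi",
                    split="test", use_auth_token=True)
test = test.cast_column("audio", Audio(sampling_rate=16_000))
wer = load_metric("wer")

def transcribe(batch):
    inputs = processor(batch["audio"]["array"], sampling_rate=16_000,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    batch["prediction"] = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
    return batch

test = test.map(transcribe)
print("WER:", wer.compute(predictions=test["prediction"],
                          references=test["sentence"]))
```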
### Training hyperparameters
The following hyperparameters were used during training:
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.809 | 1.36 | 200 | 6.2066 | 1.0 |
| 4.3402 | 2.72 | 400 | 3.5184 | 1.0 |
| 3.4365 | 4.08 | 600 | 3.2779 | 1.0 |
| 1.8643 | 5.44 | 800 | 0.9875 | 0.6270 |
| 0.7504 | 6.8 | 1000 | 0.6382 | 0.4666 |
| 0.5328 | 8.16 | 1200 | 0.6075 | 0.4505 |
| 0.4364 | 9.52 | 1400 | 0.5785 | 0.4215 |
| 0.3777 | 10.88 | 1600 | 0.6279 | 0.4227 |
| 0.3374 | 12.24 | 1800 | 0.6536 | 0.4192 |
| 0.3236 | 13.6 | 2000 | 0.5911 | 0.4047 |
| 0.2877 | 14.96 | 2200 | 0.5955 | 0.4097 |
| 0.2643 | 16.33 | 2400 | 0.5923 | 0.3744 |
| 0.2421 | 17.68 | 2600 | 0.6307 | 0.3814 |
| 0.2218 | 19.05 | 2800 | 0.6036 | 0.3764 |
| 0.2046 | 20.41 | 3000 | 0.6286 | 0.3797 |
| 0.191 | 21.77 | 3200 | 0.6517 | 0.3889 |
| 0.1856 | 23.13 | 3400 | 0.6193 | 0.3661 |
| 0.1721 | 24.49 | 3600 | 0.7034 | 0.3727 |
| 0.1656 | 25.85 | 3800 | 0.6293 | 0.3591 |
| 0.1532 | 27.21 | 4000 | 0.6075 | 0.3611 |
| 0.1507 | 28.57 | 4200 | 0.6313 | 0.3565 |
| 0.1381 | 29.93 | 4400 | 0.6564 | 0.3578 |
| 0.1359 | 31.29 | 4600 | 0.6724 | 0.3543 |
| 0.1248 | 32.65 | 4800 | 0.6789 | 0.3512 |
| 0.1198 | 34.01 | 5000 | 0.6442 | 0.3539 |
| 0.1125 | 35.37 | 5200 | 0.6676 | 0.3419 |
| 0.1036 | 36.73 | 5400 | 0.7017 | 0.3435 |
| 0.0982 | 38.09 | 5600 | 0.6828 | 0.3319 |
| 0.0971 | 39.45 | 5800 | 0.6112 | 0.3351 |
| 0.0968 | 40.81 | 6000 | 0.6424 | 0.3252 |
| 0.0893 | 42.18 | 6200 | 0.6707 | 0.3304 |
| 0.0878 | 43.54 | 6400 | 0.6432 | 0.3236 |
| 0.0827 | 44.89 | 6600 | 0.6696 | 0.3240 |
| 0.0788 | 46.26 | 6800 | 0.6564 | 0.3180 |
| 0.0753 | 47.62 | 7000 | 0.6574 | 0.3130 |
| 0.0674 | 48.98 | 7200 | 0.6698 | 0.3175 |
| 0.0676 | 50.34 | 7400 | 0.6441 | 0.3142 |
| 0.0626 | 51.7 | 7600 | 0.6642 | 0.3121 |
| 0.0617 | 53.06 | 7800 | 0.6615 | 0.3117 |
| 0.0599 | 54.42 | 8000 | 0.6634 | 0.3059 |
| 0.0538 | 55.78 | 8200 | 0.6464 | 0.3033 |
| 0.0571 | 57.14 | 8400 | 0.6503 | 0.3018 |
| 0.0491 | 58.5 | 8600 | 0.6625 | 0.3025 |
| 0.0511 | 59.86 | 8800 | 0.6588 | 0.2987 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "hi", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hi-CV7", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 35.31946325249292, "name": "Test WER"}, {"type": "cer", "value": 11.310803379493075, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "vot"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"hi",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hi"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #hi #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-hi-CV7
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - HI dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6588
* Wer: 0.2987
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_7\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7 --dataset mozilla-foundation/common\_voice\_7\_0 --config hi --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
NA
### Training hyperparameters
The following hyperparameters were used during training:
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 60
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7 --dataset mozilla-foundation/common\\_voice\\_7\\_0 --config hi --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nNA",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #hi #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7 --dataset mozilla-foundation/common\\_voice\\_7\\_0 --config hi --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nNA",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-cv8-b2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7322
- Wer: 0.3469
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8-b2 --dataset mozilla-foundation/common_voice_8_0 --config hi --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Hindi language isn't available in speech-recognition-community-v2/dev_data
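To transcribe the whole test split rather than single files, the pipeline can be fed a `datasets` column. A hedged sketch (assumptions: a recent `transformers` release that accepts `datasets` audio dicts directly, and accepted terms for the gated dataset):

```python
from datasets import Audio, load_dataset
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset

asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8-b2",
)
test = load_dataset("mozilla-foundation/common_voice_8_0", "hi",
                    split="test", use_auth_token=True)
test = test.cast_column("audio", Audio(sampling_rate=16_000))

# Stream the audio column through the pipeline in batches of 8.
for out in asr(KeyDataset(test, "audio"), batch_size=8):
    print(out["text"])
```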
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 700
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.6226 | 1.04 | 200 | 3.8855 | 1.0 |
| 3.4678 | 2.07 | 400 | 3.4283 | 1.0 |
| 2.3668 | 3.11 | 600 | 1.0743 | 0.7175 |
| 0.7308 | 4.15 | 800 | 0.7663 | 0.5498 |
| 0.4985 | 5.18 | 1000 | 0.6957 | 0.5001 |
| 0.3817 | 6.22 | 1200 | 0.6932 | 0.4866 |
| 0.3281 | 7.25 | 1400 | 0.7034 | 0.4983 |
| 0.2752 | 8.29 | 1600 | 0.6588 | 0.4606 |
| 0.2475 | 9.33 | 1800 | 0.6514 | 0.4328 |
| 0.219 | 10.36 | 2000 | 0.6396 | 0.4176 |
| 0.2036 | 11.4 | 2200 | 0.6867 | 0.4162 |
| 0.1793 | 12.44 | 2400 | 0.6943 | 0.4196 |
| 0.1724 | 13.47 | 2600 | 0.6862 | 0.4260 |
| 0.1554 | 14.51 | 2800 | 0.7615 | 0.4222 |
| 0.151 | 15.54 | 3000 | 0.7058 | 0.4110 |
| 0.1335 | 16.58 | 3200 | 0.7172 | 0.3986 |
| 0.1326 | 17.62 | 3400 | 0.7182 | 0.3923 |
| 0.1225 | 18.65 | 3600 | 0.6995 | 0.3910 |
| 0.1146 | 19.69 | 3800 | 0.7075 | 0.3875 |
| 0.108 | 20.73 | 4000 | 0.7297 | 0.3858 |
| 0.1048 | 21.76 | 4200 | 0.7413 | 0.3850 |
| 0.0979 | 22.8 | 4400 | 0.7452 | 0.3793 |
| 0.0946 | 23.83 | 4600 | 0.7436 | 0.3759 |
| 0.0897 | 24.87 | 4800 | 0.7289 | 0.3754 |
| 0.0854 | 25.91 | 5000 | 0.7271 | 0.3667 |
| 0.0803 | 26.94 | 5200 | 0.7378 | 0.3656 |
| 0.0752 | 27.98 | 5400 | 0.7488 | 0.3680 |
| 0.0718 | 29.02 | 5600 | 0.7185 | 0.3619 |
| 0.0702 | 30.05 | 5800 | 0.7428 | 0.3554 |
| 0.0653 | 31.09 | 6000 | 0.7447 | 0.3559 |
| 0.0638 | 32.12 | 6200 | 0.7327 | 0.3523 |
| 0.058 | 33.16 | 6400 | 0.7339 | 0.3488 |
| 0.0594 | 34.2 | 6600 | 0.7322 | 0.3469 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hi-cv8-b2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_8_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 0.3891350503092403, "name": "Test WER"}, {"type": "cer", "value": 0.13016327327131985, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "hi"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8-b2 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"robust-speech-event",
"hf-asr-leaderboard",
"hi",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hi"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #hi #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-hi-cv8-b2
===================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - HI dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7322
* Wer: 0.3469
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8-b2 --dataset mozilla-foundation/common\_voice\_8\_0 --config hi --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Hindi language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.00025
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 700
* num\_epochs: 35
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8-b2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config hi --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nHindi language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00025\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 700\n* num\\_epochs: 35\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #hi #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8-b2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config hi --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nHindi language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00025\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 700\n* num\\_epochs: 35\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6510
- Wer: 0.3179
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8 --dataset mozilla-foundation/common_voice_8_0 --config hi --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8 --dataset speech-recognition-community-v2/dev_data --config hi --split validation --chunk_length_s 10 --stride_length_s 1
Note: Hindi language not found in speech-recognition-community-v2/dev_data
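The `--chunk_length_s 10 --stride_length_s 1` flags in the dev-data command map directly onto the `transformers` ASR pipeline, which lets this CTC model handle audio longer than a single forward pass. A minimal sketch (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8",
    chunk_length_s=10,  # split long audio into 10 s windows
    stride_length_s=1,  # 1 s overlap on each side, merged after CTC decoding
)
print(asr("long_recording.wav")["text"])  # placeholder path
```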
### Training hyperparameters
The following hyperparameters were used during training:
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.5576 | 1.04 | 200 | 6.6594 | 1.0 |
| 4.4069 | 2.07 | 400 | 3.6011 | 1.0 |
| 3.4273 | 3.11 | 600 | 3.3370 | 1.0 |
| 2.1108 | 4.15 | 800 | 1.0641 | 0.6562 |
| 0.8817 | 5.18 | 1000 | 0.7178 | 0.5172 |
| 0.6508 | 6.22 | 1200 | 0.6612 | 0.4839 |
| 0.5524 | 7.25 | 1400 | 0.6458 | 0.4889 |
| 0.4992 | 8.29 | 1600 | 0.5791 | 0.4382 |
| 0.4669 | 9.33 | 1800 | 0.6039 | 0.4352 |
| 0.4441 | 10.36 | 2000 | 0.6276 | 0.4297 |
| 0.4172 | 11.4 | 2200 | 0.6183 | 0.4474 |
| 0.3872 | 12.44 | 2400 | 0.5886 | 0.4231 |
| 0.3692 | 13.47 | 2600 | 0.6448 | 0.4399 |
| 0.3385 | 14.51 | 2800 | 0.6344 | 0.4075 |
| 0.3246 | 15.54 | 3000 | 0.5896 | 0.4087 |
| 0.3026 | 16.58 | 3200 | 0.6158 | 0.4016 |
| 0.284 | 17.62 | 3400 | 0.6038 | 0.3906 |
| 0.2682 | 18.65 | 3600 | 0.6165 | 0.3900 |
| 0.2577 | 19.69 | 3800 | 0.5754 | 0.3805 |
| 0.2509 | 20.73 | 4000 | 0.6028 | 0.3925 |
| 0.2426 | 21.76 | 4200 | 0.6335 | 0.4138 |
| 0.2346 | 22.8 | 4400 | 0.6128 | 0.3870 |
| 0.2205 | 23.83 | 4600 | 0.6223 | 0.3831 |
| 0.2104 | 24.87 | 4800 | 0.6122 | 0.3781 |
| 0.1992 | 25.91 | 5000 | 0.6467 | 0.3792 |
| 0.1916 | 26.94 | 5200 | 0.6277 | 0.3636 |
| 0.1835 | 27.98 | 5400 | 0.6317 | 0.3773 |
| 0.1776 | 29.02 | 5600 | 0.6124 | 0.3614 |
| 0.1751 | 30.05 | 5800 | 0.6475 | 0.3628 |
| 0.1662 | 31.09 | 6000 | 0.6266 | 0.3504 |
| 0.1584 | 32.12 | 6200 | 0.6347 | 0.3532 |
| 0.1494 | 33.16 | 6400 | 0.6636 | 0.3491 |
| 0.1457 | 34.2 | 6600 | 0.6334 | 0.3507 |
| 0.1427 | 35.23 | 6800 | 0.6397 | 0.3442 |
| 0.1397 | 36.27 | 7000 | 0.6468 | 0.3496 |
| 0.1283 | 37.31 | 7200 | 0.6291 | 0.3416 |
| 0.1255 | 38.34 | 7400 | 0.6652 | 0.3461 |
| 0.1195 | 39.38 | 7600 | 0.6587 | 0.3342 |
| 0.1169 | 40.41 | 7800 | 0.6478 | 0.3319 |
| 0.1126 | 41.45 | 8000 | 0.6280 | 0.3291 |
| 0.1112 | 42.49 | 8200 | 0.6434 | 0.3290 |
| 0.1069 | 43.52 | 8400 | 0.6542 | 0.3268 |
| 0.1027 | 44.56 | 8600 | 0.6536 | 0.3239 |
| 0.0993 | 45.6 | 8800 | 0.6622 | 0.3257 |
| 0.0973 | 46.63 | 9000 | 0.6572 | 0.3192 |
| 0.0911 | 47.67 | 9200 | 0.6522 | 0.3175 |
| 0.0897 | 48.7 | 9400 | 0.6521 | 0.3200 |
| 0.0905 | 49.74 | 9600 | 0.6510 | 0.3179 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hi", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hi-cv8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 0.3628727037755008, "name": "Test WER"}, {"type": "cer", "value": 0.11933724247521164, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "hi"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hi",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hi"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hi #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-hi-cv8
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - HI dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6510
* Wer: 0.3179
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8 --dataset mozilla-foundation/common\_voice\_8\_0 --config hi --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8 --dataset speech-recognition-community-v2/dev\_data --config hi --split validation --chunk\_length\_s 10 --stride\_length\_s 1
Note: Hindi language not found in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config hi --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8 --dataset speech-recognition-community-v2/dev\\_data --config hi --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1\n\n\nNote: Hindi language not found in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hi #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config hi --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8 --dataset speech-recognition-community-v2/dev\\_data --config hi --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1\n\n\nNote: Hindi language not found in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-d3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7988
- Wer: 0.3713
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_7_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3 --dataset mozilla-foundation/common_voice_7_0 --config hi --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Hindi language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training (a PyTorch sketch of the implied optimizer and schedule follows the list):
- learning_rate: 0.000388
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- num_epochs: 50
- mixed_precision_training: Native AMP
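The sketch below is an assumption rather than the author's training code: it spells out the optimizer/scheduler pair that the settings above imply, using the `Trainer`'s default AdamW and a linear warmup schedule. The step count is approximate (the log below ends at step 7200):

```python
import torch
from transformers import Wav2Vec2ForCTC, get_linear_schedule_with_warmup

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-xls-r-300m")

optimizer = torch.optim.AdamW(
    model.parameters(), lr=0.000388, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=750,
    num_training_steps=7200,  # approximate total optimizer steps for 50 epochs
)
```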
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.2826 | 1.36 | 200 | 3.5253 | 1.0 |
| 2.7019 | 2.72 | 400 | 1.1744 | 0.7360 |
| 0.7358 | 4.08 | 600 | 0.7781 | 0.5501 |
| 0.4942 | 5.44 | 800 | 0.7590 | 0.5345 |
| 0.4056 | 6.8 | 1000 | 0.6885 | 0.4776 |
| 0.3243 | 8.16 | 1200 | 0.7195 | 0.4861 |
| 0.2785 | 9.52 | 1400 | 0.7473 | 0.4930 |
| 0.2448 | 10.88 | 1600 | 0.7201 | 0.4574 |
| 0.2155 | 12.24 | 1800 | 0.7686 | 0.4648 |
| 0.2039 | 13.6 | 2000 | 0.7440 | 0.4624 |
| 0.1792 | 14.96 | 2200 | 0.7815 | 0.4658 |
| 0.1695 | 16.33 | 2400 | 0.7678 | 0.4557 |
| 0.1598 | 17.68 | 2600 | 0.7468 | 0.4393 |
| 0.1568 | 19.05 | 2800 | 0.7440 | 0.4422 |
| 0.1391 | 20.41 | 3000 | 0.7656 | 0.4317 |
| 0.1283 | 21.77 | 3200 | 0.7892 | 0.4299 |
| 0.1194 | 23.13 | 3400 | 0.7646 | 0.4192 |
| 0.1116 | 24.49 | 3600 | 0.8156 | 0.4330 |
| 0.1111 | 25.85 | 3800 | 0.7661 | 0.4322 |
| 0.1023 | 27.21 | 4000 | 0.7419 | 0.4276 |
| 0.1007 | 28.57 | 4200 | 0.8488 | 0.4245 |
| 0.0925 | 29.93 | 4400 | 0.8062 | 0.4070 |
| 0.0918 | 31.29 | 4600 | 0.8412 | 0.4218 |
| 0.0813 | 32.65 | 4800 | 0.8045 | 0.4087 |
| 0.0805 | 34.01 | 5000 | 0.8411 | 0.4113 |
| 0.0774 | 35.37 | 5200 | 0.7664 | 0.3943 |
| 0.0666 | 36.73 | 5400 | 0.8082 | 0.3939 |
| 0.0655 | 38.09 | 5600 | 0.7948 | 0.4000 |
| 0.0617 | 39.45 | 5800 | 0.8084 | 0.3932 |
| 0.0606 | 40.81 | 6000 | 0.8223 | 0.3841 |
| 0.0569 | 42.18 | 6200 | 0.7892 | 0.3832 |
| 0.0544 | 43.54 | 6400 | 0.8326 | 0.3834 |
| 0.0508 | 44.89 | 6600 | 0.7952 | 0.3774 |
| 0.0492 | 46.26 | 6800 | 0.7923 | 0.3756 |
| 0.0459 | 47.62 | 7000 | 0.7925 | 0.3701 |
| 0.0423 | 48.98 | 7200 | 0.7988 | 0.3713 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "hi", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hi-d3", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "vot"}, "metrics": [{"type": "wer", "value": 0.4204111781361566, "name": "Test WER"}, {"type": "cer", "value": 0.13869169624556316, "name": "Test CER"}, {"type": "wer", "value": 42.04, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "hi"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"hi",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hi"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #hi #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-hi-d3
===============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - HI dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7988
* Wer: 0.3713
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_7\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3 --dataset mozilla-foundation/common\_voice\_7\_0 --config hi --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Hindi language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000388
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 750
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000388\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 750\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #hi #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000388\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 750\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-wx1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6552
- Wer: 0.3200
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_7_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-wx1 --dataset mozilla-foundation/common_voice_7_0 --config hi --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
NA
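For inference without the pipeline abstraction, the model can be driven directly through the processor. A minimal sketch (assumptions: `torchaudio` is installed, and the file path is a placeholder):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-hi-wx1"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("hindi_sample.wav")  # placeholder path
# Resample to the 16 kHz the model expects; keep the first channel.
speech = torchaudio.functional.resample(speech, sr, 16_000)[0]

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```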
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1800
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.2663 | 1.36 | 200 | 5.9245 | 1.0 |
| 4.1856 | 2.72 | 400 | 3.4968 | 1.0 |
| 3.3908 | 4.08 | 600 | 2.9970 | 1.0 |
| 1.5444 | 5.44 | 800 | 0.9071 | 0.6139 |
| 0.7237 | 6.8 | 1000 | 0.6508 | 0.4862 |
| 0.5323 | 8.16 | 1200 | 0.6217 | 0.4647 |
| 0.4426 | 9.52 | 1400 | 0.5785 | 0.4288 |
| 0.3933 | 10.88 | 1600 | 0.5935 | 0.4217 |
| 0.3532 | 12.24 | 1800 | 0.6358 | 0.4465 |
| 0.3319 | 13.6 | 2000 | 0.5789 | 0.4118 |
| 0.2877 | 14.96 | 2200 | 0.6163 | 0.4056 |
| 0.2663 | 16.33 | 2400 | 0.6176 | 0.3893 |
| 0.2511 | 17.68 | 2600 | 0.6065 | 0.3999 |
| 0.2275 | 19.05 | 2800 | 0.6183 | 0.3842 |
| 0.2098 | 20.41 | 3000 | 0.6486 | 0.3864 |
| 0.1943 | 21.77 | 3200 | 0.6365 | 0.3885 |
| 0.1877 | 23.13 | 3400 | 0.6013 | 0.3677 |
| 0.1679 | 24.49 | 3600 | 0.6451 | 0.3795 |
| 0.1667 | 25.85 | 3800 | 0.6410 | 0.3635 |
| 0.1514 | 27.21 | 4000 | 0.6000 | 0.3577 |
| 0.1453 | 28.57 | 4200 | 0.6020 | 0.3518 |
| 0.134 | 29.93 | 4400 | 0.6531 | 0.3517 |
| 0.1354 | 31.29 | 4600 | 0.6874 | 0.3578 |
| 0.1224 | 32.65 | 4800 | 0.6519 | 0.3492 |
| 0.1199 | 34.01 | 5000 | 0.6553 | 0.3490 |
| 0.1077 | 35.37 | 5200 | 0.6621 | 0.3429 |
| 0.0997 | 36.73 | 5400 | 0.6641 | 0.3413 |
| 0.0964 | 38.09 | 5600 | 0.6722 | 0.3385 |
| 0.0931 | 39.45 | 5800 | 0.6365 | 0.3363 |
| 0.0944 | 40.81 | 6000 | 0.6454 | 0.3326 |
| 0.0862 | 42.18 | 6200 | 0.6497 | 0.3256 |
| 0.0848 | 43.54 | 6400 | 0.6599 | 0.3226 |
| 0.0793 | 44.89 | 6600 | 0.6625 | 0.3232 |
| 0.076 | 46.26 | 6800 | 0.6463 | 0.3186 |
| 0.0749 | 47.62 | 7000 | 0.6559 | 0.3225 |
| 0.0663 | 48.98 | 7200 | 0.6552 | 0.3200 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hi-wx1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 37.19684845500431, "name": "Test WER"}, {"type": "cer", "value": 11.763235514672798, "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-hi-wx1 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"robust-speech-event",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hi"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #hi #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-hi-wx1
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - HI dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6552
* Wer: 0.3200
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_7\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-wx1 --dataset mozilla-foundation/common\_voice\_7\_0 --config hi --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
NA
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.00024
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1800
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00024\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1800\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #hi #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00024\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1800\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hsb-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5684
- Wer: 0.4402
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Upper Sorbian language isn't available in speech-recognition-community-v2/dev_data
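Since this card reports both WER and CER, note that both metrics can be scored offline on any reference/hypothesis pairs with `jiwer` (an assumption: `jiwer >= 2.3` is needed for the `cer` function; the strings below are placeholders):

```python
from jiwer import cer, wer

references = ["serbski tekst"]  # placeholder reference transcript
hypotheses = ["serbski teksd"]  # placeholder model output
print("WER:", wer(references, hypotheses))
print("CER:", cer(references, hypotheses))
```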
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00045
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.972 | 3.23 | 100 | 3.7498 | 1.0 |
| 3.3401 | 6.45 | 200 | 3.2320 | 1.0 |
| 3.2046 | 9.68 | 300 | 3.1741 | 0.9806 |
| 2.4031 | 12.9 | 400 | 1.0579 | 0.8996 |
| 1.0427 | 16.13 | 500 | 0.7989 | 0.7557 |
| 0.741 | 19.35 | 600 | 0.6405 | 0.6299 |
| 0.5699 | 22.58 | 700 | 0.6129 | 0.5928 |
| 0.4607 | 25.81 | 800 | 0.6548 | 0.5695 |
| 0.3827 | 29.03 | 900 | 0.6268 | 0.5190 |
| 0.3282 | 32.26 | 1000 | 0.5919 | 0.5016 |
| 0.2764 | 35.48 | 1100 | 0.5953 | 0.4805 |
| 0.2335 | 38.71 | 1200 | 0.5717 | 0.4728 |
| 0.2106 | 41.94 | 1300 | 0.5674 | 0.4569 |
| 0.1859 | 45.16 | 1400 | 0.5685 | 0.4502 |
| 0.1592 | 48.39 | 1500 | 0.5684 | 0.4402 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| {"language": ["hsb"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hsb", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hsb-v1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "hsb"}, "metrics": [{"type": "wer", "value": 0.4393, "name": "Test WER"}, {"type": "cer", "value": 0.1036, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "hsb"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hsb",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hsb"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hsb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-hsb-v1
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - HSB dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5684
* Wer: 0.4402
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1 --dataset mozilla-foundation/common\_voice\_8\_0 --config hsb --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Upper Sorbian language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.00045
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.1
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config hsb --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nUpper Sorbian language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00045\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hsb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config hsb --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nUpper Sorbian language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00045\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hsb-v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5328
- Wer: 0.4596
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v2 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Upper Sorbian (hsb) not found in speech-recognition-community-v2/dev_data
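The test WER/CER figures reported by `eval.py` can also be computed offline with `datasets.load_metric` (available in the Datasets 1.18 series; `jiwer` must be installed for the WER metric). A minimal sketch with placeholder strings rather than real model output:

```python
# Sketch: computing WER/CER with datasets.load_metric; the prediction and
# reference strings are placeholders, not real transcriptions.
from datasets import load_metric

wer_metric = load_metric("wer")
cer_metric = load_metric("cer")

predictions = ["dobre ranje"]  # placeholder hypothesis
references = ["dobre ranje"]   # placeholder ground truth

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```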
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00045
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.5979 | 3.23 | 100 | 3.5602 | 1.0 |
| 3.303 | 6.45 | 200 | 3.2238 | 1.0 |
| 3.2034 | 9.68 | 300 | 3.2002 | 0.9888 |
| 2.7986 | 12.9 | 400 | 1.2408 | 0.9210 |
| 1.3869 | 16.13 | 500 | 0.7973 | 0.7462 |
| 1.0228 | 19.35 | 600 | 0.6722 | 0.6788 |
| 0.8311 | 22.58 | 700 | 0.6100 | 0.6150 |
| 0.717 | 25.81 | 800 | 0.6236 | 0.6013 |
| 0.6264 | 29.03 | 900 | 0.6031 | 0.5575 |
| 0.5494 | 32.26 | 1000 | 0.5656 | 0.5309 |
| 0.4781 | 35.48 | 1100 | 0.5289 | 0.4996 |
| 0.4311 | 38.71 | 1200 | 0.5375 | 0.4768 |
| 0.3902 | 41.94 | 1300 | 0.5246 | 0.4703 |
| 0.3508 | 45.16 | 1400 | 0.5382 | 0.4696 |
| 0.3199 | 48.39 | 1500 | 0.5328 | 0.4596 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| {"language": ["hsb"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hsb", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hsb-v2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "hsb"}, "metrics": [{"type": "wer", "value": 0.4654228855721393, "name": "Test WER"}, {"type": "cer", "value": 0.11351049990708047, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "hsb"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v2 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hsb",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hsb"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hsb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-hsb-v2
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - HSB dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5328
* Wer: 0.4596
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v2 --dataset mozilla-foundation/common\_voice\_8\_0 --config hsb --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Upper Sorbian (hsb) not found in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.00045
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.1
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config hsb --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nUpper Sorbian (hsb) not found in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00045\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hsb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config hsb --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nUpper Sorbian (hsb) not found in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00045\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hsb-v3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6549
- Wer: 0.4827
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Upper Sorbian (hsb) language not found in speech-recognition-community-v2/dev_data!
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00045
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.8951 | 3.23 | 100 | 3.6396 | 1.0 |
| 3.314 | 6.45 | 200 | 3.2331 | 1.0 |
| 3.1931 | 9.68 | 300 | 3.0947 | 0.9906 |
| 1.7079 | 12.9 | 400 | 0.8865 | 0.8499 |
| 0.6859 | 16.13 | 500 | 0.7994 | 0.7529 |
| 0.4804 | 19.35 | 600 | 0.7783 | 0.7069 |
| 0.3506 | 22.58 | 700 | 0.6904 | 0.6321 |
| 0.2695 | 25.81 | 800 | 0.6519 | 0.5926 |
| 0.222 | 29.03 | 900 | 0.7041 | 0.5720 |
| 0.1828 | 32.26 | 1000 | 0.6608 | 0.5513 |
| 0.1474 | 35.48 | 1100 | 0.7129 | 0.5319 |
| 0.1269 | 38.71 | 1200 | 0.6664 | 0.5056 |
| 0.1077 | 41.94 | 1300 | 0.6712 | 0.4942 |
| 0.0934 | 45.16 | 1400 | 0.6467 | 0.4879 |
| 0.0819 | 48.39 | 1500 | 0.6549 | 0.4827 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| {"language": ["hsb"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hsb", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hsb-v3", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "hsb"}, "metrics": [{"type": "wer", "value": 0.4763681592039801, "name": "Test WER"}, {"type": "cer", "value": 0.11194945177476305, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "hsb"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hsb",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hsb"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hsb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-hsb-v3
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - HSB dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6549
* Wer: 0.4827
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3 --dataset mozilla-foundation/common\_voice\_8\_0 --config hsb --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Upper Sorbian (hsb) language not found in speech-recognition-community-v2/dev\_data!
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.00045
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.1
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config hsb --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nUpper Sorbian (hsb) language not found in speech-recognition-community-v2/dev\\_data!",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00045\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hsb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config hsb --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nUpper Sorbian (hsb) language not found in speech-recognition-community-v2/dev\\_data!",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00045\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kk-with-LM
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - KK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7149
- Wer: 0.451
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-kk-with-LM --dataset mozilla-foundation/common_voice_8_0 --config kk --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Kazakh language isn't available in speech-recognition-community-v2/dev_data
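Because this checkpoint ships with a language model (see the "+LM" metrics), decoding can go through `Wav2Vec2ProcessorWithLM` instead of plain argmax decoding. A hedged sketch, assuming `pyctcdecode` and `kenlm` are installed and using placeholder audio:

```python
# Sketch of LM-boosted CTC decoding via Wav2Vec2ProcessorWithLM; `speech` is one
# second of placeholder silence, not real Kazakh audio.
import torch
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-kk-with-LM"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

speech = torch.zeros(16000).numpy()  # placeholder 16 kHz input
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, time, vocab)

# batch_decode runs a beam search over the logits with the bundled n-gram LM
print(processor.batch_decode(logits.numpy()).text)
```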
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000222
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 150.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 9.6799 | 9.09 | 200 | 3.6119 | 1.0 |
| 3.1332 | 18.18 | 400 | 2.5352 | 1.005 |
| 1.0465 | 27.27 | 600 | 0.6169 | 0.682 |
| 0.3452 | 36.36 | 800 | 0.6572 | 0.607 |
| 0.2575 | 45.44 | 1000 | 0.6527 | 0.578 |
| 0.2088 | 54.53 | 1200 | 0.6828 | 0.551 |
| 0.158 | 63.62 | 1400 | 0.7074 | 0.5575 |
| 0.1309 | 72.71 | 1600 | 0.6523 | 0.5595 |
| 0.1074 | 81.8 | 1800 | 0.7262 | 0.5415 |
| 0.087 | 90.89 | 2000 | 0.7199 | 0.521 |
| 0.0711 | 99.98 | 2200 | 0.7113 | 0.523 |
| 0.0601 | 109.09 | 2400 | 0.6863 | 0.496 |
| 0.0451 | 118.18 | 2600 | 0.6998 | 0.483 |
| 0.0378 | 127.27 | 2800 | 0.6971 | 0.4615 |
| 0.0319 | 136.36 | 3000 | 0.7119 | 0.4475 |
| 0.0305 | 145.44 | 3200 | 0.7181 | 0.459 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
### Evaluation Command
!python eval.py \
--model_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 \
--dataset mozilla-foundation/common_voice_8_0 --config kk --split test --log_outputs | {"language": ["kk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "kk", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-kk-with-LM", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ru"}, "metrics": [{"type": "wer", "value": 0.4355, "name": "Test WER"}, {"type": "cer", "value": 0.10469915859660263, "name": "Test CER"}, {"type": "wer", "value": 0.417, "name": "Test WER (+LM)"}, {"type": "cer", "value": 0.10319098269566598, "name": "Test CER (+LM)"}, {"type": "wer", "value": 41.7, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "kk"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "kk"}, "metrics": [{"type": "wer", "value": 67.09, "name": "Test WER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-kk-with-LM | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"kk",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"kk"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #kk #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - KK dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7149
* Wer: 0.451
Evaluation Commands
===================
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-kk-with-LM --dataset mozilla-foundation/common\_voice\_8\_0 --config kk --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Kazakh language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000222
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 150.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
### Evaluation Command
!python URL
--model\_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2
--dataset mozilla-foundation/common\_voice\_8\_0 --config kk --split test --log\_outputs
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000222\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 150.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"### Evaluation Command\n\n\n!python URL \n\n--model\\_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 \n\n--dataset mozilla-foundation/common\\_voice\\_8\\_0 --config kk --split test --log\\_outputs"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #kk #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000222\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 150.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"### Evaluation Command\n\n\n!python URL \n\n--model\\_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 \n\n--dataset mozilla-foundation/common\\_voice\\_8\\_0 --config kk --split test --log\\_outputs"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-maltese
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2994
- Wer: 0.2781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a scheduler sketch follows the list):
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1800
- num_epochs: 100.0
- mixed_precision_training: Native AMP
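The linear schedule with 1800 warmup steps corresponds to `get_linear_schedule_with_warmup`; below is a sketch with a stand-in model and an illustrative total step count (the training log below ends near step 11000):

```python
# Sketch of the linear warmup-then-decay LR schedule; the stand-in model/optimizer
# and num_training_steps are illustrative, not taken from the original script.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in model
optimizer = torch.optim.AdamW(
    model.parameters(), lr=7e-05, betas=(0.9, 0.999), eps=1e-08
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=1800,     # matches lr_scheduler_warmup_steps above
    num_training_steps=11100,  # illustrative: ~100 epochs, log ends near step 11000
)
```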
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0174 | 9.01 | 1000 | 3.0552 | 1.0 |
| 1.0446 | 18.02 | 2000 | 0.6708 | 0.7577 |
| 0.7995 | 27.03 | 3000 | 0.4202 | 0.4770 |
| 0.6978 | 36.04 | 4000 | 0.3054 | 0.3494 |
| 0.6189 | 45.05 | 5000 | 0.2878 | 0.3154 |
| 0.5667 | 54.05 | 6000 | 0.3114 | 0.3286 |
| 0.5173 | 63.06 | 7000 | 0.3085 | 0.3021 |
| 0.4682 | 72.07 | 8000 | 0.3058 | 0.2969 |
| 0.451 | 81.08 | 9000 | 0.3146 | 0.2907 |
| 0.4213 | 90.09 | 10000 | 0.3030 | 0.2881 |
| 0.4005 | 99.1 | 11000 | 0.3001 | 0.2789 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
### Evaluation Script
!python eval.py \
--model_id DrishtiSharma/wav2vec2-large-xls-r-300m-maltese \
--dataset mozilla-foundation/common_voice_8_0 --config mt --split test --log_outputs | {"language": ["mt"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "mt", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"]} | DrishtiSharma/wav2vec2-large-xls-r-300m-maltese | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"mt",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"mt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #mt #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-maltese
=================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - MT dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2994
* Wer: 0.2781
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7e-05
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1800
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
### Evaluation Script
!python URL
--model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-maltese
--dataset mozilla-foundation/common\_voice\_8\_0 --config mt --split test --log\_outputs
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1800\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"### Evaluation Script\n\n\n!python URL \n\n--model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-maltese \n\n--dataset mozilla-foundation/common\\_voice\\_8\\_0 --config mt --split test --log\\_outputs"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #mt #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1800\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"### Evaluation Script\n\n\n!python URL \n\n--model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-maltese \n\n--dataset mozilla-foundation/common\\_voice\\_8\\_0 --config mt --split test --log\\_outputs"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-mr-v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8729
- Wer: 0.4942
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset mozilla-foundation/common_voice_8_0 --config mr --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset speech-recognition-community-v2/dev_data --config mr --split validation --chunk_length_s 10 --stride_length_s 1
Note: Marathi language not found in speech-recognition-community-v2/dev_data!
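The dev_data command's `--chunk_length_s 10 --stride_length_s 1` flags correspond to chunked long-form inference in the ASR pipeline; a minimal sketch with a placeholder file path:

```python
# Sketch of chunked long-form inference mirroring --chunk_length_s 10
# --stride_length_s 1; "long_audio.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2",
)
result = asr("long_audio.wav", chunk_length_s=10, stride_length_s=1)
print(result["text"])
```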
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000333
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.4934 | 9.09 | 200 | 3.7326 | 1.0 |
| 3.4234 | 18.18 | 400 | 3.3383 | 0.9996 |
| 3.2628 | 27.27 | 600 | 2.7482 | 0.9992 |
| 1.7743 | 36.36 | 800 | 0.6755 | 0.6787 |
| 1.0346 | 45.45 | 1000 | 0.6067 | 0.6193 |
| 0.8137 | 54.55 | 1200 | 0.6228 | 0.5612 |
| 0.6637 | 63.64 | 1400 | 0.5976 | 0.5495 |
| 0.5563 | 72.73 | 1600 | 0.7009 | 0.5383 |
| 0.4844 | 81.82 | 1800 | 0.6662 | 0.5287 |
| 0.4057 | 90.91 | 2000 | 0.6911 | 0.5303 |
| 0.3582 | 100.0 | 2200 | 0.7207 | 0.5327 |
| 0.3163 | 109.09 | 2400 | 0.7107 | 0.5118 |
| 0.2761 | 118.18 | 2600 | 0.7538 | 0.5118 |
| 0.2415 | 127.27 | 2800 | 0.7850 | 0.5178 |
| 0.2127 | 136.36 | 3000 | 0.8016 | 0.5034 |
| 0.1873 | 145.45 | 3200 | 0.8302 | 0.5187 |
| 0.1723 | 154.55 | 3400 | 0.9085 | 0.5223 |
| 0.1498 | 163.64 | 3600 | 0.8396 | 0.5126 |
| 0.1425 | 172.73 | 3800 | 0.8776 | 0.5094 |
| 0.1258 | 181.82 | 4000 | 0.8651 | 0.5014 |
| 0.117 | 190.91 | 4200 | 0.8772 | 0.4970 |
| 0.1093 | 200.0 | 4400 | 0.8729 | 0.4942 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| {"language": ["mr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "mr", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-mr-v2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "mr"}, "metrics": [{"type": "wer", "value": 0.49378259125551544, "name": "Test WER"}, {"type": "cer", "value": 0.12470799640610962, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "mr"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"mr",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"mr"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #mr #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-mr-v2
===============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - MR dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8729
* Wer: 0.4942
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset mozilla-foundation/common\_voice\_8\_0 --config mr --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset speech-recognition-community-v2/dev\_data --config mr --split validation --chunk\_length\_s 10 --stride\_length\_s 1
Note: Marathi language not found in speech-recognition-community-v2/dev\_data!
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000333
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.1
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config mr --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset speech-recognition-community-v2/dev\\_data --config mr --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1\n\n\nNote: Marathi language not found in speech-recognition-community-v2/dev\\_data!",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000333\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #mr #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config mr --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset speech-recognition-community-v2/dev\\_data --config mr --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1\n\n\nNote: Marathi language not found in speech-recognition-community-v2/dev\\_data!",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000333\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-myv-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MYV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8537
- Wer: 0.6160
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Erzya language not found in speech-recognition-community-v2/dev_data!
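Wav2vec2 checkpoints expect 16 kHz input, while Common Voice ships 48 kHz MP3s; here is a hedged sketch of loading and resampling one test sample (the mozilla-foundation datasets are gated, so an auth token and accepted terms are assumed):

```python
# Sketch: load one Common Voice 8.0 (myv) sample and resample it to 16 kHz;
# use_auth_token=True assumes you accepted the dataset terms on the Hub.
from datasets import Audio, load_dataset

ds = load_dataset(
    "mozilla-foundation/common_voice_8_0", "myv", split="test", use_auth_token=True
)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # 48 kHz MP3 -> 16 kHz
sample = ds[0]["audio"]
print(sample["sampling_rate"], sample["array"].shape)
```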
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000222
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 19.453 | 1.92 | 50 | 16.4001 | 1.0 |
| 9.6875 | 3.85 | 100 | 5.4468 | 1.0 |
| 4.9988 | 5.77 | 150 | 4.3507 | 1.0 |
| 4.1148 | 7.69 | 200 | 3.6753 | 1.0 |
| 3.4922 | 9.62 | 250 | 3.3103 | 1.0 |
| 3.2443 | 11.54 | 300 | 3.1741 | 1.0 |
| 3.164 | 13.46 | 350 | 3.1346 | 1.0 |
| 3.0954 | 15.38 | 400 | 3.0428 | 1.0 |
| 3.0076 | 17.31 | 450 | 2.9137 | 1.0 |
| 2.6883 | 19.23 | 500 | 2.1476 | 0.9978 |
| 1.5124 | 21.15 | 550 | 0.8955 | 0.8225 |
| 0.8711 | 23.08 | 600 | 0.6948 | 0.7591 |
| 0.6695 | 25.0 | 650 | 0.6683 | 0.7636 |
| 0.5606 | 26.92 | 700 | 0.6821 | 0.7435 |
| 0.503 | 28.85 | 750 | 0.7220 | 0.7516 |
| 0.4528 | 30.77 | 800 | 0.6638 | 0.7324 |
| 0.4219 | 32.69 | 850 | 0.7120 | 0.7435 |
| 0.4109 | 34.62 | 900 | 0.7122 | 0.7511 |
| 0.3887 | 36.54 | 950 | 0.7179 | 0.7199 |
| 0.3895 | 38.46 | 1000 | 0.7322 | 0.7525 |
| 0.391 | 40.38 | 1050 | 0.6850 | 0.7364 |
| 0.3537 | 42.31 | 1100 | 0.7571 | 0.7279 |
| 0.3267 | 44.23 | 1150 | 0.7575 | 0.7257 |
| 0.3195 | 46.15 | 1200 | 0.7580 | 0.6998 |
| 0.2891 | 48.08 | 1250 | 0.7452 | 0.7101 |
| 0.294 | 50.0 | 1300 | 0.7316 | 0.6945 |
| 0.2854 | 51.92 | 1350 | 0.7241 | 0.6757 |
| 0.2801 | 53.85 | 1400 | 0.7532 | 0.6887 |
| 0.2502 | 55.77 | 1450 | 0.7587 | 0.6811 |
| 0.2427 | 57.69 | 1500 | 0.7231 | 0.6851 |
| 0.2311 | 59.62 | 1550 | 0.7288 | 0.6632 |
| 0.2176 | 61.54 | 1600 | 0.7711 | 0.6664 |
| 0.2117 | 63.46 | 1650 | 0.7914 | 0.6940 |
| 0.2114 | 65.38 | 1700 | 0.8065 | 0.6918 |
| 0.1913 | 67.31 | 1750 | 0.8372 | 0.6945 |
| 0.1897 | 69.23 | 1800 | 0.8051 | 0.6869 |
| 0.1865 | 71.15 | 1850 | 0.8076 | 0.6740 |
| 0.1844 | 73.08 | 1900 | 0.7935 | 0.6708 |
| 0.1757 | 75.0 | 1950 | 0.8015 | 0.6610 |
| 0.1636 | 76.92 | 2000 | 0.7614 | 0.6414 |
| 0.1637 | 78.85 | 2050 | 0.8123 | 0.6592 |
| 0.1599 | 80.77 | 2100 | 0.7907 | 0.6566 |
| 0.1498 | 82.69 | 2150 | 0.8641 | 0.6757 |
| 0.1545 | 84.62 | 2200 | 0.7438 | 0.6682 |
| 0.1433 | 86.54 | 2250 | 0.8014 | 0.6624 |
| 0.1427 | 88.46 | 2300 | 0.7758 | 0.6646 |
| 0.1423 | 90.38 | 2350 | 0.7741 | 0.6423 |
| 0.1298 | 92.31 | 2400 | 0.7938 | 0.6414 |
| 0.1111 | 94.23 | 2450 | 0.7976 | 0.6467 |
| 0.1243 | 96.15 | 2500 | 0.7916 | 0.6481 |
| 0.1215 | 98.08 | 2550 | 0.7594 | 0.6392 |
| 0.113 | 100.0 | 2600 | 0.8236 | 0.6392 |
| 0.1077 | 101.92 | 2650 | 0.7959 | 0.6347 |
| 0.0988 | 103.85 | 2700 | 0.8189 | 0.6392 |
| 0.0953 | 105.77 | 2750 | 0.8157 | 0.6414 |
| 0.0889 | 107.69 | 2800 | 0.7946 | 0.6369 |
| 0.0929 | 109.62 | 2850 | 0.8255 | 0.6360 |
| 0.0822 | 111.54 | 2900 | 0.8320 | 0.6334 |
| 0.086 | 113.46 | 2950 | 0.8539 | 0.6490 |
| 0.0825 | 115.38 | 3000 | 0.8438 | 0.6418 |
| 0.0727 | 117.31 | 3050 | 0.8568 | 0.6481 |
| 0.0717 | 119.23 | 3100 | 0.8447 | 0.6512 |
| 0.0815 | 121.15 | 3150 | 0.8470 | 0.6445 |
| 0.0689 | 123.08 | 3200 | 0.8264 | 0.6249 |
| 0.0726 | 125.0 | 3250 | 0.7981 | 0.6169 |
| 0.0648 | 126.92 | 3300 | 0.8237 | 0.6200 |
| 0.0632 | 128.85 | 3350 | 0.8416 | 0.6249 |
| 0.06 | 130.77 | 3400 | 0.8276 | 0.6173 |
| 0.0616 | 132.69 | 3450 | 0.8429 | 0.6209 |
| 0.0614 | 134.62 | 3500 | 0.8485 | 0.6271 |
| 0.0539 | 136.54 | 3550 | 0.8598 | 0.6218 |
| 0.0555 | 138.46 | 3600 | 0.8557 | 0.6169 |
| 0.0604 | 140.38 | 3650 | 0.8436 | 0.6186 |
| 0.0556 | 142.31 | 3700 | 0.8428 | 0.6178 |
| 0.051 | 144.23 | 3750 | 0.8440 | 0.6142 |
| 0.0526 | 146.15 | 3800 | 0.8566 | 0.6142 |
| 0.052 | 148.08 | 3850 | 0.8544 | 0.6178 |
| 0.0519 | 150.0 | 3900 | 0.8537 | 0.6160 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| {"language": ["myv"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "myv", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-myv-v1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "myv"}, "metrics": [{"type": "wer", "value": 0.599548532731377, "name": "Test WER"}, {"type": "cer", "value": 0.12953851902597, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "myv"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"myv",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"myv"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #myv #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| wav2vec2-large-xls-r-300m-myv-v1
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - MYV dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8537
* Wer: 0.6160
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 --dataset mozilla-foundation/common\_voice\_8\_0 --config myv --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Erzya language not found in speech-recognition-community-v2/dev\_data!
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000222
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 150
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config myv --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nErzya language not found in speech-recognition-community-v2/dev\\_data!",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000222\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 150\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #myv #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config myv --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nErzya language not found in speech-recognition-community-v2/dev\\_data!",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000222\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 150\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-or-d5
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - OR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9571
- Wer: 0.5450
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset mozilla-foundation/common_voice_8_0 --config or --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset speech-recognition-community-v2/dev_data --config or --split validation --chunk_length_s 10 --stride_length_s 1
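Without a language model, decoding reduces to an argmax over the CTC logits; a minimal greedy-decoding sketch with placeholder audio:

```python
# Sketch of greedy (argmax) CTC decoding without a language model; `speech` is
# one second of placeholder silence, not real Odia audio.
import torch
from transformers import AutoModelForCTC, Wav2Vec2Processor

model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

speech = torch.zeros(16000).numpy()  # placeholder 16 kHz input
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # (batch, time, vocab)

pred_ids = torch.argmax(logits, dim=-1)  # greedy path through the logits
print(processor.batch_decode(pred_ids))
```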
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.2958 | 12.5 | 300 | 4.9014 | 1.0 |
| 3.4065 | 25.0 | 600 | 3.5150 | 1.0 |
| 1.5402 | 37.5 | 900 | 0.8356 | 0.7249 |
| 0.6049 | 50.0 | 1200 | 0.7754 | 0.6349 |
| 0.4074 | 62.5 | 1500 | 0.7994 | 0.6217 |
| 0.3097 | 75.0 | 1800 | 0.8815 | 0.5985 |
| 0.2593 | 87.5 | 2100 | 0.8532 | 0.5754 |
| 0.2097 | 100.0 | 2400 | 0.9077 | 0.5648 |
| 0.1784 | 112.5 | 2700 | 0.9047 | 0.5668 |
| 0.1567 | 125.0 | 3000 | 0.9019 | 0.5728 |
| 0.1315 | 137.5 | 3300 | 0.9295 | 0.5827 |
| 0.1125 | 150.0 | 3600 | 0.9256 | 0.5681 |
| 0.1035 | 162.5 | 3900 | 0.9148 | 0.5496 |
| 0.0901 | 175.0 | 4200 | 0.9480 | 0.5483 |
| 0.0817 | 187.5 | 4500 | 0.9799 | 0.5516 |
| 0.079 | 200.0 | 4800 | 0.9571 | 0.5450 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["or"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "or", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-or-d5", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "or"}, "metrics": [{"type": "wer", "value": 0.579136690647482, "name": "Test WER"}, {"type": "cer", "value": 0.1572148018392818, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "or"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"or",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"or"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #or #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-or-d5
===============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - OR dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9571
* Wer: 0.5450
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset mozilla-foundation/common\_voice\_8\_0 --config or --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset speech-recognition-community-v2/dev\_data --config or --split validation --chunk\_length\_s 10 --stride\_length\_s 1
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000111
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 800
* num\_epochs: 200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config or --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset speech-recognition-community-v2/dev\\_data --config or --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000111\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 800\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #or #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config or --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset speech-recognition-community-v2/dev\\_data --config or --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000111\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 800\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
# wav2vec2-large-xls-r-300m-or-dx12
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4638
- Wer: 0.5602
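For a quick sanity check of the released checkpoint, a minimal inference sketch along the following lines can be used (illustrative only, not part of the original card; the audio path is a placeholder):

```python
# Minimal inference sketch; "sample.wav" is a placeholder mono recording.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-or-dx12"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

waveform, sample_rate = torchaudio.load("sample.wav")  # placeholder path
if sample_rate != 16_000:  # the model expects 16 kHz input
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```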
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
`python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-dx12 --dataset mozilla-foundation/common_voice_8_0 --config or --split test --log_outputs`
2. To evaluate on speech-recognition-community-v2/dev_data
The Oriya language isn't available in speech-recognition-community-v2/dev_data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 13.5059 | 4.17 | 100 | 10.3789 | 1.0 |
| 4.5964 | 8.33 | 200 | 4.3294 | 1.0 |
| 3.4448 | 12.5 | 300 | 3.7903 | 1.0 |
| 3.3683 | 16.67 | 400 | 3.5289 | 1.0 |
| 2.042 | 20.83 | 500 | 1.1531 | 0.7857 |
| 0.5721 | 25.0 | 600 | 1.0267 | 0.7646 |
| 0.3274 | 29.17 | 700 | 1.0773 | 0.6938 |
| 0.2466 | 33.33 | 800 | 1.0323 | 0.6647 |
| 0.2047 | 37.5 | 900 | 1.1255 | 0.6733 |
| 0.1847 | 41.67 | 1000 | 1.1194 | 0.6515 |
| 0.1453 | 45.83 | 1100 | 1.1215 | 0.6601 |
| 0.1367 | 50.0 | 1200 | 1.1898 | 0.6627 |
| 0.1334 | 54.17 | 1300 | 1.3082 | 0.6687 |
| 0.1041 | 58.33 | 1400 | 1.2514 | 0.6177 |
| 0.1024 | 62.5 | 1500 | 1.2055 | 0.6528 |
| 0.0919 | 66.67 | 1600 | 1.4125 | 0.6369 |
| 0.074 | 70.83 | 1700 | 1.4006 | 0.6634 |
| 0.0681 | 75.0 | 1800 | 1.3943 | 0.6131 |
| 0.0709 | 79.17 | 1900 | 1.3545 | 0.6296 |
| 0.064 | 83.33 | 2000 | 1.2437 | 0.6237 |
| 0.0552 | 87.5 | 2100 | 1.3762 | 0.6190 |
| 0.056 | 91.67 | 2200 | 1.3763 | 0.6323 |
| 0.0514 | 95.83 | 2300 | 1.2897 | 0.6164 |
| 0.0409 | 100.0 | 2400 | 1.4257 | 0.6104 |
| 0.0379 | 104.17 | 2500 | 1.4219 | 0.5853 |
| 0.0367 | 108.33 | 2600 | 1.4361 | 0.6032 |
| 0.0412 | 112.5 | 2700 | 1.4713 | 0.6098 |
| 0.0353 | 116.67 | 2800 | 1.4132 | 0.6369 |
| 0.0336 | 120.83 | 2900 | 1.5210 | 0.6098 |
| 0.0302 | 125.0 | 3000 | 1.4686 | 0.5939 |
| 0.0398 | 129.17 | 3100 | 1.5456 | 0.6204 |
| 0.0291 | 133.33 | 3200 | 1.4111 | 0.5827 |
| 0.0247 | 137.5 | 3300 | 1.3866 | 0.6151 |
| 0.0196 | 141.67 | 3400 | 1.4513 | 0.5880 |
| 0.0218 | 145.83 | 3500 | 1.5100 | 0.5899 |
| 0.0196 | 150.0 | 3600 | 1.4936 | 0.5999 |
| 0.0164 | 154.17 | 3700 | 1.5012 | 0.5701 |
| 0.0168 | 158.33 | 3800 | 1.5601 | 0.5919 |
| 0.0151 | 162.5 | 3900 | 1.4891 | 0.5761 |
| 0.0137 | 166.67 | 4000 | 1.4839 | 0.5800 |
| 0.0143 | 170.83 | 4100 | 1.4826 | 0.5754 |
| 0.0114 | 175.0 | 4200 | 1.4950 | 0.5708 |
| 0.0092 | 179.17 | 4300 | 1.5008 | 0.5694 |
| 0.0104 | 183.33 | 4400 | 1.4774 | 0.5728 |
| 0.0096 | 187.5 | 4500 | 1.4948 | 0.5767 |
| 0.0105 | 191.67 | 4600 | 1.4557 | 0.5694 |
| 0.009 | 195.83 | 4700 | 1.4615 | 0.5628 |
| 0.0081 | 200.0 | 4800 | 1.4638 | 0.5602 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["or"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "or", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-or-dx12", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "or"}, "metrics": [{"type": "wer", "value": 0.5947242206235012, "name": "Test WER"}, {"type": "cer", "value": 0.18272388876724327, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "or"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-or-dx12 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"or",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"or"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #or #robust-speech-event #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-or-dx12
=================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4638
* Wer: 0.5602
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-dx12 --dataset mozilla-foundation/common\_voice\_8\_0 --config or --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Oriya language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0004
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-dx12 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config or --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nOriya language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #or #robust-speech-event #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-dx12 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config or --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nOriya language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
# wav2vec2-large-xls-r-300m-pa-IN-dx1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0855
- Wer: 0.4755
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
`python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-pa-IN-dx1 --dataset mozilla-foundation/common_voice_8_0 --config pa-IN --split test --log_outputs`
2. To evaluate on speech-recognition-community-v2/dev_data
The Punjabi language isn't available in speech-recognition-community-v2/dev_data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1200
- num_epochs: 100.0
- mixed_precision_training: Native AMP
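For illustration, these settings roughly map onto a `transformers.TrainingArguments` configuration like the one below (an assumption, not the exact training script; `output_dir` is a placeholder):

```python
# Hypothetical mapping of the hyperparameters above onto TrainingArguments.
# Adam betas/epsilon are the TrainingArguments defaults; fp16=True is Native AMP.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-pa-IN-dx1",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1200,
    num_train_epochs=100.0,
    fp16=True,
)
```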
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4607 | 9.26 | 500 | 2.7746 | 1.0416 |
| 0.3442 | 18.52 | 1000 | 0.9114 | 0.5911 |
| 0.2213 | 27.78 | 1500 | 0.9687 | 0.5751 |
| 0.1242 | 37.04 | 2000 | 1.0204 | 0.5461 |
| 0.0998 | 46.3 | 2500 | 1.0250 | 0.5233 |
| 0.0727 | 55.56 | 3000 | 1.1072 | 0.5382 |
| 0.0605 | 64.81 | 3500 | 1.0588 | 0.5073 |
| 0.0458 | 74.07 | 4000 | 1.0818 | 0.5069 |
| 0.0338 | 83.33 | 4500 | 1.0948 | 0.5108 |
| 0.0223 | 92.59 | 5000 | 1.0986 | 0.4775 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["pa-IN"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "pa-IN", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-pa-IN-dx1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "pa-IN"}, "metrics": [{"type": "wer", "value": 0.48725989807918463, "name": "Test WER"}, {"type": "cer", "value": 0.1687305197540224, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "pa-IN"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-pa-IN-dx1 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"pa-IN",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pa-IN"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #pa-IN #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0855
* Wer: 0.4755
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-pa-IN-dx1 --dataset mozilla-foundation/common\_voice\_8\_0 --config pa-IN --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Punjabi language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1200
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-pa-IN-dx1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config pa-IN --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nPunjabi language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1200\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #pa-IN #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-pa-IN-dx1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config pa-IN --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nPunjabi language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1200\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
# wav2vec2-large-xls-r-300m-sat-a3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SAT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8961
- Wer: 0.3976
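The checkpoint can be tried out with the high-level `pipeline` API; a minimal sketch follows (illustrative only; `audio.wav` is a placeholder, and decoding local files requires ffmpeg):

```python
# Illustrative ASR pipeline usage; "audio.wav" is a placeholder file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3",
)
print(asr("audio.wav")["text"])
```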
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
`python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3 --dataset mozilla-foundation/common_voice_8_0 --config sat --split test --log_outputs`
2. To evaluate on speech-recognition-community-v2/dev_data
Note: the Santali (Ol Chiki) language was not found in speech-recognition-community-v2/dev_data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 11.1266 | 33.29 | 100 | 2.8577 | 1.0 |
| 2.1549 | 66.57 | 200 | 1.0799 | 0.5542 |
| 0.5628 | 99.86 | 300 | 0.7973 | 0.4016 |
| 0.0779 | 133.29 | 400 | 0.8424 | 0.4177 |
| 0.0404 | 166.57 | 500 | 0.9048 | 0.4137 |
| 0.0212 | 199.86 | 600 | 0.8961 | 0.3976 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["sat"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sat", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-sat-a3", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sat"}, "metrics": [{"type": "wer", "value": 0.357429718875502, "name": "Test WER"}, {"type": "cer", "value": 0.14203730272596843, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sat"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"sat",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sat"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sat #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-sat-a3
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - SAT dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8961
* Wer: 0.3976
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3 --dataset mozilla-foundation/common\_voice\_8\_0 --config sat --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Note: Santali (Ol Chiki) language not found in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0004
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 200
* num\_epochs: 200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config sat --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nNote: Santali (Ol Chiki) language not found in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sat #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config sat --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nNote: Santali (Ol Chiki) language not found in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
# wav2vec2-large-xls-r-300m-sat-final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SAT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8012
- Wer: 0.3815
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
`python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset mozilla-foundation/common_voice_8_0 --config sat --split test --log_outputs`
2. To evaluate on speech-recognition-community-v2/dev_data
`python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset speech-recognition-community-v2/dev_data --config sat --split validation --chunk_length_s 10 --stride_length_s 1`
**Note: the Santali (Ol Chiki) language was not found in speech-recognition-community-v2/dev_data.**
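For reference, the WER reported by the evaluation script boils down to a computation like the following (a sketch using the `evaluate` package, which is an assumption — the original script may compute the metric differently):

```python
# Sketch of a word error rate computation; the strings are placeholders.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["placeholder transcription"]
references = ["placeholder reference"]
print(wer_metric.compute(predictions=predictions, references=references))
```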
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 170
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 10.6317 | 33.29 | 100 | 2.8629 | 1.0 |
| 2.047 | 66.57 | 200 | 0.9516 | 0.5703 |
| 0.4475 | 99.86 | 300 | 0.8539 | 0.3896 |
| 0.0716 | 133.29 | 400 | 0.8277 | 0.3454 |
| 0.047 | 166.57 | 500 | 0.7597 | 0.3655 |
| 0.0249 | 199.86 | 600 | 0.8012 | 0.3815 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["sat"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sat", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-sat-final", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sat"}, "metrics": [{"type": "wer", "value": 0.3493975903614458, "name": "Test WER"}, {"type": "cer", "value": 0.13773314203730272, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sat"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"sat",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sat"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sat #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-sat-final
===================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - SAT dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8012
* Wer: 0.3815
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset mozilla-foundation/common\_voice\_8\_0 --config sat --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset speech-recognition-community-v2/dev\_data --config sat --split validation --chunk\_length\_s 10 --stride\_length\_s 1
Note: Santali (Ol Chiki) language not found in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0004
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 170
* num\_epochs: 200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config sat --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset speech-recognition-community-v2/dev\\_data --config sat --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1\n\n\nNote: Santali (Ol Chiki) language not found in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 170\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sat #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config sat --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset speech-recognition-community-v2/dev\\_data --config sat --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1\n\n\nNote: Santali (Ol Chiki) language not found in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 170\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
# wav2vec2-large-xls-r-300m-sl-with-LM-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Wer: 0.2279
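Because this checkpoint is distributed with a language model, decoding can go through `Wav2Vec2ProcessorWithLM`; the sketch below is illustrative only (it assumes the repo ships LM files and that `pyctcdecode` and `kenlm` are installed; the waveform is placeholder silence):

```python
# Hypothetical LM-boosted decoding sketch; needs pyctcdecode + kenlm installed.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech = np.zeros(16_000, dtype=np.float32)  # placeholder: 1 s of silence
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(logits.numpy()).text[0])
```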
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
`python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs`
2. To evaluate on speech-recognition-community-v2/dev_data
`python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1`
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3881 | 6.1 | 500 | 2.9710 | 1.0 |
| 2.6401 | 12.2 | 1000 | 1.7677 | 0.9734 |
| 1.5152 | 18.29 | 1500 | 0.5564 | 0.6011 |
| 1.2191 | 24.39 | 2000 | 0.4319 | 0.4390 |
| 1.0237 | 30.49 | 2500 | 0.3141 | 0.3175 |
| 0.8892 | 36.59 | 3000 | 0.2748 | 0.2689 |
| 0.8296 | 42.68 | 3500 | 0.2680 | 0.2534 |
| 0.7602 | 48.78 | 4000 | 0.2820 | 0.2506 |
| 0.7186 | 54.88 | 4500 | 0.2672 | 0.2398 |
| 0.6887 | 60.98 | 5000 | 0.2729 | 0.2402 |
| 0.6507 | 67.07 | 5500 | 0.2767 | 0.2361 |
| 0.6226 | 73.17 | 6000 | 0.2817 | 0.2332 |
| 0.6024 | 79.27 | 6500 | 0.2679 | 0.2279 |
| 0.5787 | 85.37 | 7000 | 0.2837 | 0.2316 |
| 0.5744 | 91.46 | 7500 | 0.2838 | 0.2284 |
| 0.5556 | 97.56 | 8000 | 0.2763 | 0.2281 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["sl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "sl"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-sl-with-LM-v1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sl"}, "metrics": [{"type": "wer", "value": 0.20626555409164105, "name": "Test WER"}, {"type": "cer", "value": 0.051648321634392154, "name": "Test CER"}, {"type": "wer", "value": 0.13482652613087395, "name": "Test WER (+LM)"}, {"type": "cer", "value": 0.038838663862562475, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sl"}, "metrics": [{"type": "wer", "value": 0.5406156320830592, "name": "Dev WER"}, {"type": "cer", "value": 0.22249723590310583, "name": "Dev CER"}, {"type": "wer", "value": 0.49783147459727384, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 0.1591062599627158, "name": "Dev CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "sl"}, "metrics": [{"type": "wer", "value": 46.17, "name": "Test WER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"sl",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sl #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - SL dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2756
* Wer: 0.2279
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1 --dataset mozilla-foundation/common\_voice\_8\_0 --config sl --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1 --dataset speech-recognition-community-v2/dev\_data --config sl --split validation --chunk\_length\_s 10 --stride\_length\_s 1
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config sl --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1 --dataset speech-recognition-community-v2/dev\\_data --config sl --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sl #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config sl --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1 --dataset speech-recognition-community-v2/dev\\_data --config sl --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
# wav2vec2-large-xls-r-300m-sl-with-LM-v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2855
- Wer: 0.2401
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
`python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs`
2. To evaluate on speech-recognition-community-v2/dev_data
`python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1`
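Both commands evaluate against splits loaded through Hugging Face Datasets; a minimal sketch of pulling the matching Common Voice split is shown below (`mozilla-foundation/common_voice_8_0` is gated, so accepting the dataset terms and logging in to the Hub is required):

```python
# Illustrative loading of the Common Voice 8.0 Slovenian test split.
from datasets import Audio, load_dataset

cv_sl = load_dataset("mozilla-foundation/common_voice_8_0", "sl", split="test")
cv_sl = cv_sl.cast_column("audio", Audio(sampling_rate=16_000))  # model expects 16 kHz
print(cv_sl[0]["sentence"], cv_sl[0]["audio"]["array"].shape)
```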
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.9294 | 6.1 | 500 | 2.9712 | 1.0 |
| 2.8305 | 12.2 | 1000 | 1.7073 | 0.9479 |
| 1.4795 | 18.29 | 1500 | 0.5756 | 0.6397 |
| 1.3433 | 24.39 | 2000 | 0.4968 | 0.5424 |
| 1.1766 | 30.49 | 2500 | 0.4185 | 0.4743 |
| 1.0017 | 36.59 | 3000 | 0.3303 | 0.3578 |
| 0.9358 | 42.68 | 3500 | 0.3003 | 0.3051 |
| 0.8358 | 48.78 | 4000 | 0.3045 | 0.2884 |
| 0.7647 | 54.88 | 4500 | 0.2866 | 0.2677 |
| 0.7482 | 60.98 | 5000 | 0.2829 | 0.2585 |
| 0.6943 | 67.07 | 5500 | 0.2782 | 0.2478 |
| 0.6586 | 73.17 | 6000 | 0.2911 | 0.2537 |
| 0.6425 | 79.27 | 6500 | 0.2817 | 0.2462 |
| 0.6067 | 85.37 | 7000 | 0.2910 | 0.2436 |
| 0.5974 | 91.46 | 7500 | 0.2875 | 0.2430 |
| 0.5812 | 97.56 | 8000 | 0.2852 | 0.2396 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["sl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "sl"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-sl-with-LM-v2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sl"}, "metrics": [{"type": "wer", "value": 0.21695212999560826, "name": "Test WER"}, {"type": "cer", "value": 0.052850080572474256, "name": "Test CER"}, {"type": "wer", "value": 0.14551310203484116, "name": "Test WER (+LM)"}, {"type": "cer", "value": 0.03927566711277415, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sl"}, "metrics": [{"type": "wer", "value": 0.560722380639029, "name": "Dev WER"}, {"type": "cer", "value": 0.2279626093074681, "name": "Dev CER"}, {"type": "wer", "value": 0.46486802661402354, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 0.21105136194592422, "name": "Dev CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "sl"}, "metrics": [{"type": "wer", "value": 46.69, "name": "Test WER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"sl",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sl #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - SL dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2855
* Wer: 0.2401
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset mozilla-foundation/common\_voice\_8\_0 --config sl --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset speech-recognition-community-v2/dev\_data --config sl --split validation --chunk\_length\_s 10 --stride\_length\_s 1
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config sl --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset speech-recognition-community-v2/dev\\_data --config sl --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sl #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config sl --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset speech-recognition-community-v2/dev\\_data --config sl --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
# wav2vec2-large-xls-r-300m-sr-v4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5570
- Wer: 0.3038
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
`python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset mozilla-foundation/common_voice_8_0 --config sr --split test --log_outputs`
2. To evaluate on speech-recognition-community-v2/dev_data
`python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset speech-recognition-community-v2/dev_data --config sr --split validation --chunk_length_s 10 --stride_length_s 1`
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 200
- mixed_precision_training: Native AMP
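For illustration, the linear warmup schedule above corresponds to something like the following sketch (the tiny model is a placeholder; 7800 total steps matches the final step in the results table):

```python
# Illustrative linear warmup + decay schedule matching the settings above.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # placeholder parameters
optimizer = torch.optim.AdamW(
    model.parameters(), lr=3e-4, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=800, num_training_steps=7800
)
```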
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.2934 | 7.5 | 300 | 2.9777 | 0.9995 |
| 1.5049 | 15.0 | 600 | 0.5036 | 0.4806 |
| 0.3263 | 22.5 | 900 | 0.5822 | 0.4055 |
| 0.2008 | 30.0 | 1200 | 0.5609 | 0.4032 |
| 0.1543 | 37.5 | 1500 | 0.5203 | 0.3710 |
| 0.1158 | 45.0 | 1800 | 0.6458 | 0.3985 |
| 0.0997 | 52.5 | 2100 | 0.6227 | 0.4013 |
| 0.0834 | 60.0 | 2400 | 0.6048 | 0.3836 |
| 0.0665 | 67.5 | 2700 | 0.6197 | 0.3686 |
| 0.0602 | 75.0 | 3000 | 0.5418 | 0.3453 |
| 0.0524 | 82.5 | 3300 | 0.5310 | 0.3486 |
| 0.0445 | 90.0 | 3600 | 0.5599 | 0.3374 |
| 0.0406 | 97.5 | 3900 | 0.5958 | 0.3327 |
| 0.0358 | 105.0 | 4200 | 0.6017 | 0.3262 |
| 0.0302 | 112.5 | 4500 | 0.5613 | 0.3248 |
| 0.0285 | 120.0 | 4800 | 0.5659 | 0.3462 |
| 0.0213 | 127.5 | 5100 | 0.5568 | 0.3206 |
| 0.0215 | 135.0 | 5400 | 0.6524 | 0.3472 |
| 0.0162 | 142.5 | 5700 | 0.6223 | 0.3458 |
| 0.0137 | 150.0 | 6000 | 0.6625 | 0.3313 |
| 0.0114 | 157.5 | 6300 | 0.5739 | 0.3336 |
| 0.0101 | 165.0 | 6600 | 0.5906 | 0.3285 |
| 0.008 | 172.5 | 6900 | 0.5982 | 0.3112 |
| 0.0076 | 180.0 | 7200 | 0.5399 | 0.3094 |
| 0.0071 | 187.5 | 7500 | 0.5387 | 0.2991 |
| 0.0057 | 195.0 | 7800 | 0.5570 | 0.3038 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| {"language": ["sr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "sr"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-sr-v4", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sr"}, "metrics": [{"type": "wer", "value": 0.303313, "name": "Test WER"}, {"type": "cer", "value": 0.1048951, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sr"}, "metrics": [{"type": "wer", "value": 0.9486784706184245, "name": "Test WER"}, {"type": "cer", "value": 0.8084369606584945, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "sr"}, "metrics": [{"type": "wer", "value": 94.53, "name": "Test WER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"sr",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sr"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sr #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-sr-v4
===============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - SR dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5570
* Wer: 0.3038
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset mozilla-foundation/common\_voice\_8\_0 --config sr --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset speech-recognition-community-v2/dev\_data --config sr --split validation --chunk\_length\_s 10 --stride\_length\_s 1
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 800
* num\_epochs: 200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config sr --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset speech-recognition-community-v2/dev\\_data --config sr --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 800\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sr #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config sr --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset speech-recognition-community-v2/dev\\_data --config sr --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 800\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-vot-final-a2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - VOT dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8745
- Wer: 0.8333
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2 --dataset mozilla-foundation/common_voice_8_0 --config vot --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Votic language isn't available in speech-recognition-community-v2/dev_data
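For a quick smoke test outside the evaluation script, the checkpoint can also be loaded with the `transformers` ASR pipeline. A minimal sketch, assuming a local 16 kHz recording; the file name is a placeholder:

```python
from transformers import pipeline

# Load this checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2",
)

# "sample.wav" is a placeholder; the pipeline decodes and resamples audio
# to the 16 kHz rate expected by wav2vec2 before inference.
print(asr("sample.wav")["text"])
```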
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 340
- num_epochs: 200
- mixed_precision_training: Native AMP
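The optimizer and schedule above follow the usual `transformers` recipe: Adam(W) with the listed betas/epsilon plus a linear warmup-then-decay schedule. A minimal sketch with a stand-in module; the total step count (600, per the table below) is taken from this run:

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 10)  # stand-in for the wav2vec2 model
optimizer = torch.optim.AdamW(
    model.parameters(), lr=4e-4, betas=(0.9, 0.999), eps=1e-8
)

# 340 warmup steps, then linear decay to zero over the remaining steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=340, num_training_steps=600
)

for _ in range(600):  # one scheduler.step() per optimizer step
    optimizer.step()
    scheduler.step()
```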
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 11.1216 | 33.33 | 100 | 4.2848 | 1.0 |
| 2.9982 | 66.67 | 200 | 2.8665 | 1.0 |
| 1.5476 | 100.0 | 300 | 2.3022 | 0.8889 |
| 0.2776 | 133.33 | 400 | 2.7480 | 0.8889 |
| 0.1136 | 166.67 | 500 | 2.5383 | 0.8889 |
| 0.0489 | 200.0 | 600 | 2.8745 | 0.8333 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["vot"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "vot", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-vot-final-a2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "vot"}, "metrics": [{"type": "wer", "value": 0.8333333333333334, "name": "Test WER"}, {"type": "cer", "value": 0.48672566371681414, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "vot"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"vot",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"vot"
] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #vot #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-vot-final-a2
======================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - VOT dataset.
It achieves the following results on the evaluation set:
* Loss: 2.8745
* Wer: 0.8333
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2 --dataset mozilla-foundation/common\_voice\_8\_0 --config vot --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Votic language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0004
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 340
* num\_epochs: 200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config vot --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nVotic language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 340\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #vot #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config vot --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nVotic language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 340\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-kk-n2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - KK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7149
- Wer: 0.451
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 --dataset mozilla-foundation/common_voice_8_0 --config kk --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Kazakh language not found in speech-recognition-community-v2/dev_data!
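The test split used by the first command can be loaded directly with `datasets`. Common Voice 8 is a gated dataset, so this sketch assumes you have accepted its terms and are authenticated with a Hugging Face token:

```python
from datasets import Audio, load_dataset

# Gated dataset: requires prior `huggingface-cli login` (or a token).
test = load_dataset(
    "mozilla-foundation/common_voice_8_0",
    "kk",
    split="test",
    use_auth_token=True,
)

# Decode clips at the 16 kHz rate expected by XLS-R checkpoints.
test = test.cast_column("audio", Audio(sampling_rate=16_000))
print(len(test), test[0]["sentence"])
```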
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000222
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 150.0
- mixed_precision_training: Native AMP
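The `total_train_batch_size` of 32 above is the per-device batch size (16) multiplied by `gradient_accumulation_steps` (2): two forward/backward passes are accumulated before each optimizer step. A minimal sketch of the pattern with a stand-in model:

```python
import torch

model = torch.nn.Linear(8, 1)  # stand-in for the ASR model
optimizer = torch.optim.Adam(model.parameters(), lr=2.22e-4)
accumulation_steps = 2         # as in this run

for step in range(4):
    x = torch.randn(16, 8)                    # per-device batch of 16
    loss = model(x).pow(2).mean()
    (loss / accumulation_steps).backward()    # scale so the sum averages
    if (step + 1) % accumulation_steps == 0:  # effective batch of 32
        optimizer.step()
        optimizer.zero_grad()
```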
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 9.6799 | 9.09 | 200 | 3.6119 | 1.0 |
| 3.1332 | 18.18 | 400 | 2.5352 | 1.005 |
| 1.0465 | 27.27 | 600 | 0.6169 | 0.682 |
| 0.3452 | 36.36 | 800 | 0.6572 | 0.607 |
| 0.2575 | 45.44 | 1000 | 0.6527 | 0.578 |
| 0.2088 | 54.53 | 1200 | 0.6828 | 0.551 |
| 0.158 | 63.62 | 1400 | 0.7074 | 0.5575 |
| 0.1309 | 72.71 | 1600 | 0.6523 | 0.5595 |
| 0.1074 | 81.8 | 1800 | 0.7262 | 0.5415 |
| 0.087 | 90.89 | 2000 | 0.7199 | 0.521 |
| 0.0711 | 99.98 | 2200 | 0.7113 | 0.523 |
| 0.0601 | 109.09 | 2400 | 0.6863 | 0.496 |
| 0.0451 | 118.18 | 2600 | 0.6998 | 0.483 |
| 0.0378 | 127.27 | 2800 | 0.6971 | 0.4615 |
| 0.0319 | 136.36 | 3000 | 0.7119 | 0.4475 |
| 0.0305 | 145.44 | 3200 | 0.7181 | 0.459 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["kk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "kk", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-300m-kk-n2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "tt"}, "metrics": [{"type": "wer", "value": 0.4355, "name": "Test WER"}, {"type": "cer", "value": 0.10469915859660263, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "vot"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"kk",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"kk"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #kk #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - KK dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7149
* Wer: 0.451
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 --dataset mozilla-foundation/common\_voice\_8\_0 --config kk --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Kazakh language not found in speech-recognition-community-v2/dev\_data!
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000222
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 150.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config kk --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nKazakh language not found in speech-recognition-community-v2/dev\\_data!",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000222\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 150.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #kk #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config kk --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nKazakh language not found in speech-recognition-community-v2/dev\\_data!",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000222\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 150.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-mt-o1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1987
- Wer: 0.1920
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-mt-o1 --dataset mozilla-foundation/common_voice_8_0 --config mt --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Maltese language not found in speech-recognition-community-v2/dev_data!
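The WER above (and the CER reported in the model-card metadata) can be recomputed for any list of transcripts with the `evaluate` library. A minimal sketch; the strings are placeholders, not outputs of this model:

```python
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

# Placeholder transcripts; in practice these come from running the model
# over the Common Voice 8 "mt" test split.
predictions = ["this is a test"]
references = ["this is the test"]

print("WER:", wer.compute(predictions=predictions, references=references))
print("CER:", cer.compute(predictions=predictions, references=references))
```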
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
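These values map almost one-to-one onto `transformers.TrainingArguments`. A sketch of the corresponding configuration (the output directory is a placeholder, and this builds only the arguments object, not the full CTC training setup):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-mt-o1",  # placeholder path
    learning_rate=7e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    num_train_epochs=100.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
print(args.learning_rate, args.warmup_steps)
```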
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.1721 | 18.02 | 2000 | 0.3831 | 0.4066 |
| 0.7849 | 36.04 | 4000 | 0.2191 | 0.2417 |
| 0.6723 | 54.05 | 6000 | 0.2056 | 0.2134 |
| 0.6015 | 72.07 | 8000 | 0.2008 | 0.2031 |
| 0.5386 | 90.09 | 10000 | 0.1967 | 0.1953 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["mt"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "mt", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-300m-mt-o1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "mt"}, "metrics": [{"type": "wer", "value": 0.2378369069146646, "name": "Test WER"}, {"type": "cer", "value": 0.050364163712536256, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "mt"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-xls-r-300m-mt-o1 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"mt",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"mt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #mt #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - MT dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1987
* Wer: 0.1920
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-xls-r-300m-mt-o1 --dataset mozilla-foundation/common\_voice\_8\_0 --config mt --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Maltese language not found in speech-recognition-community-v2/dev\_data!
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7e-05
* train\_batch\_size: 32
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-300m-mt-o1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config mt --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nMaltese language not found in speech-recognition-community-v2/dev\\_data!",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #mt #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-300m-mt-o1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config mt --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nMaltese language not found in speech-recognition-community-v2/dev\\_data!",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-pa-IN-r5
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8881
- Wer: 0.4175
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 --dataset mozilla-foundation/common_voice_8_0 --config pa-IN --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Punjabi language isn't available in speech-recognition-community-v2/dev_data
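Under the hood the evaluation script performs greedy CTC decoding: the model emits per-frame logits over the character vocabulary, the argmax sequence is taken, and the processor collapses repeats and blanks. A minimal sketch with random noise standing in for a real clip:

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# One second of random noise stands in for a 16 kHz Common Voice clip.
speech = np.random.randn(16_000).astype(np.float32)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits  # (batch, frames, vocab)

pred_ids = torch.argmax(logits, dim=-1)  # greedy per-frame choice
print(processor.batch_decode(pred_ids))  # CTC collapse + detokenize
```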
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 10.695 | 18.52 | 500 | 3.5681 | 1.0 |
| 3.2718 | 37.04 | 1000 | 2.3081 | 0.9643 |
| 0.8727 | 55.56 | 1500 | 0.7227 | 0.5147 |
| 0.3349 | 74.07 | 2000 | 0.7498 | 0.4959 |
| 0.2134 | 92.59 | 2500 | 0.7779 | 0.4720 |
| 0.1445 | 111.11 | 3000 | 0.8120 | 0.4594 |
| 0.1057 | 129.63 | 3500 | 0.8225 | 0.4610 |
| 0.0826 | 148.15 | 4000 | 0.8307 | 0.4351 |
| 0.0639 | 166.67 | 4500 | 0.8967 | 0.4316 |
| 0.0528 | 185.19 | 5000 | 0.8875 | 0.4238 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["pa-IN"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "pa-IN", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-300m-pa-IN-r5", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "pa-IN"}, "metrics": [{"type": "wer", "value": 0.4186593492747942, "name": "Test WER"}, {"type": "cer", "value": 0.13301322550753938, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "pa-IN"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"pa-IN",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pa-IN"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #pa-IN #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8881
* Wer: 0.4175
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 --dataset mozilla-foundation/common\_voice\_8\_0 --config pa-IN --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Punjabi language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.000111
* train\_batch\_size: 16
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 200.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config pa-IN --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nPunjabi language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000111\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 200.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #pa-IN #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config pa-IN --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nPunjabi language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000111\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 200.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-rm-sursilv-d11
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - RM-SURSILV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2511
- Wer: 0.2415
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11 --dataset mozilla-foundation/common_voice_8_0 --config rm-sursilv --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Romansh-Sursilv language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 125.0
- mixed_precision_training: Native AMP
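`mixed_precision_training: Native AMP` refers to PyTorch's built-in automatic mixed precision (`torch.cuda.amp`), which the Trainer switches on with `fp16=True`. A minimal sketch of the underlying pattern with a stand-in model (requires a CUDA device):

```python
import torch

device = "cuda"                              # Native AMP targets CUDA here
model = torch.nn.Linear(32, 32).to(device)  # stand-in for the ASR model
optimizer = torch.optim.Adam(model.parameters(), lr=7e-5)
scaler = torch.cuda.amp.GradScaler()

for _ in range(3):
    x = torch.randn(32, 32, device=device)
    with torch.cuda.amp.autocast():          # fp16 forward pass
        loss = model(x).pow(2).mean()
    scaler.scale(loss).backward()            # scale to avoid fp16 underflow
    scaler.step(optimizer)                   # unscale gradients, then step
    scaler.update()
    optimizer.zero_grad()
```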
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 2.3958 | 17.44 | 1500 | 0.6808 | 0.6521 |
| 0.9663 | 34.88 | 3000 | 0.3023 | 0.3718 |
| 0.7963 | 52.33 | 4500 | 0.2588 | 0.3046 |
| 0.6893 | 69.77 | 6000 | 0.2436 | 0.2718 |
| 0.6148 | 87.21 | 7500 | 0.2521 | 0.2572 |
| 0.5556 | 104.65 | 9000 | 0.2490 | 0.2442 |
| 0.5258 | 122.09 | 10500 | 0.2515 | 0.2442 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["rm-sursilv"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer"], "model-index": [{"name": "wav2vec2-xls-r-300m-rm-sursilv-d11", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "rm-sursilv"}, "metrics": [{"type": "wer", "value": 0.24094169578811844, "name": "Test WER"}, {"type": "cer", "value": 0.049832791672554284, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "rm-sursilv"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"rm-sursilv"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - RM-SURSILV dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2511
* Wer: 0.2415
#### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11 --dataset mozilla-foundation/common\_voice\_8\_0 --config rm-sursilv --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Romansh-Sursilv language isn't available in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7e-05
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 125.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
| [
"#### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config rm-sursilv --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nRomansh-Sursilv language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 125.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"#### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config rm-sursilv --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nRomansh-Sursilv language isn't available in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 125.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-rm-vallader-d1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - RM-VALLADER dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2754
- Wer: 0.2831
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1 --dataset mozilla-foundation/common_voice_8_0 --config rm-vallader --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Romansh-Vallader language not found in speech-recognition-community-v2/dev_data
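Before scoring, evaluation scripts for these models typically normalize transcripts (lowercasing and stripping punctuation) so that WER is not inflated by casing or orthographic symbols. A sketch of that step; the exact character set is an assumption, not this script's actual configuration:

```python
import re

# Assumed punctuation set; the real eval script may filter differently.
CHARS_TO_IGNORE = re.compile(r"[\,\?\.\!\-\;\:\"\“\%\‘\”\�]")

def normalize(text: str) -> str:
    """Lowercase and strip punctuation before WER/CER scoring."""
    return CHARS_TO_IGNORE.sub("", text).lower().strip()

print(normalize("Quai ais ün test!"))  # -> "quai ais ün test"
```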
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.927 | 15.15 | 500 | 2.9196 | 1.0 |
| 1.3835 | 30.3 | 1000 | 0.5879 | 0.5866 |
| 0.7415 | 45.45 | 1500 | 0.3077 | 0.3316 |
| 0.5575 | 60.61 | 2000 | 0.2735 | 0.2954 |
| 0.4581 | 75.76 | 2500 | 0.2707 | 0.2802 |
| 0.3977 | 90.91 | 3000 | 0.2785 | 0.2809 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["rm-vallader"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "rm-vallader", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-300m-rm-vallader-d1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "rm-vallader"}, "metrics": [{"type": "wer", "value": 0.26472007722007723, "name": "Test WER"}, {"type": "cer", "value": 0.05860608074430969, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "vot"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"rm-vallader",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"rm-vallader"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #rm-vallader #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - RM-VALLADER dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2754
* Wer: 0.2831
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1 --dataset mozilla-foundation/common\_voice\_8\_0 --config rm-vallader --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Romansh-Vallader language not found in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config rm-vallader --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nRomansh-Vallader language not found in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #rm-vallader #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config rm-vallader --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nRomansh-Vallader language not found in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-myv-a1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MYV dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0356
- Wer: 0.6524
### Evaluation Commands
**1. To evaluate on mozilla-foundation/common_voice_8_0 with test split**
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-myv-a1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs
**2. To evaluate on speech-recognition-community-v2/dev_data**
Erzya language not found in speech-recognition-community-v2/dev_data
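Common Voice clips are distributed at 48 kHz, while XLS-R checkpoints expect 16 kHz input, so any custom evaluation loop needs a resampling step. A sketch with `torchaudio`; the file path is a placeholder:

```python
import torchaudio

waveform, sample_rate = torchaudio.load("clip.mp3")  # placeholder path
if sample_rate != 16_000:
    waveform = torchaudio.transforms.Resample(sample_rate, 16_000)(waveform)

speech = waveform.squeeze(0)  # mono 1-D tensor, ready for the processor
print(speech.shape)
```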
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 5.649 | 9.62 | 500 | 3.0038 | 1.0 |
| 1.6272 | 19.23 | 1000 | 0.7362 | 0.7819 |
| 1.1354 | 28.85 | 1500 | 0.6410 | 0.7111 |
| 1.0424 | 38.46 | 2000 | 0.6907 | 0.7431 |
| 0.9293 | 48.08 | 2500 | 0.7249 | 0.7102 |
| 0.8246 | 57.69 | 3000 | 0.7422 | 0.6966 |
| 0.7837 | 67.31 | 3500 | 0.7413 | 0.6813 |
| 0.7147 | 76.92 | 4000 | 0.7873 | 0.6930 |
| 0.6276 | 86.54 | 4500 | 0.8038 | 0.6677 |
| 0.6041 | 96.15 | 5000 | 0.8240 | 0.6831 |
| 0.5336 | 105.77 | 5500 | 0.8748 | 0.6749 |
| 0.4705 | 115.38 | 6000 | 0.9006 | 0.6497 |
| 0.43 | 125.0 | 6500 | 0.8954 | 0.6551 |
| 0.3859 | 134.62 | 7000 | 0.9074 | 0.6614 |
| 0.3342 | 144.23 | 7500 | 0.9693 | 0.6560 |
| 0.3155 | 153.85 | 8000 | 1.0073 | 0.6691 |
| 0.2673 | 163.46 | 8500 | 1.0170 | 0.6632 |
| 0.2409 | 173.08 | 9000 | 1.0304 | 0.6709 |
| 0.2189 | 182.69 | 9500 | 0.9965 | 0.6546 |
| 0.1973 | 192.31 | 10000 | 1.0360 | 0.6551 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
### Evaluation Command
!python eval.py \
--model_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 \
--dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs | {"language": ["myv"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "myv", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-myv-a1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "myv"}, "metrics": [{"type": "wer", "value": 0.6514672686230248, "name": "Test WER"}, {"type": "cer", "value": 0.17226131905088124, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "vot"}, "metrics": [{"type": "wer", "value": "NA", "name": "Test WER"}, {"type": "cer", "value": "NA", "name": "Test CER"}]}]}]} | DrishtiSharma/wav2vec2-xls-r-myv-a1 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"myv",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"myv"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #myv #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - MYV dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0356
* Wer: 0.6524
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-xls-r-myv-a1 --dataset mozilla-foundation/common\_voice\_8\_0 --config myv --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Erzya language not found in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0004
* train\_batch\_size: 16
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 800
* num\_epochs: 200.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
### Evaluation Command
!python URL
--model\_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1
--dataset mozilla-foundation/common\_voice\_8\_0 --config myv --split test --log\_outputs
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-myv-a1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config myv --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nErzya language not found in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 800\n* num\\_epochs: 200.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"### Evaluation Command\n\n\n!python URL \n\n--model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 \n\n--dataset mozilla-foundation/common\\_voice\\_8\\_0 --config myv --split test --log\\_outputs"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #myv #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-myv-a1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config myv --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\nErzya language not found in speech-recognition-community-v2/dev\\_data",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0004\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 800\n* num\\_epochs: 200.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"### Evaluation Command\n\n\n!python URL \n\n--model\\_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 \n\n--dataset mozilla-foundation/common\\_voice\\_8\\_0 --config myv --split test --log\\_outputs"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-pa-IN-a1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1508
- Wer: 0.4908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
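Putting these pieces together, a sketch of a small evaluation loop over the Common Voice test split (gated dataset, so Hugging Face authentication is assumed; this is illustrative, not the original evaluation script):

```python
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-xls-r-pa-IN-a1",
)
wer = evaluate.load("wer")

# Small slice of the gated test split, purely for illustration.
test = load_dataset(
    "mozilla-foundation/common_voice_8_0", "pa-IN",
    split="test[:16]", use_auth_token=True,
)
test = test.cast_column("audio", Audio(sampling_rate=16_000))

predictions = [asr(ex["audio"]["array"])["text"] for ex in test]
references = [ex["sentence"] for ex in test]
print("WER:", wer.compute(predictions=predictions, references=references))
```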
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5841 | 9.26 | 500 | 3.2514 | 0.9941 |
| 0.3992 | 18.52 | 1000 | 0.8790 | 0.6107 |
| 0.2409 | 27.78 | 1500 | 1.0012 | 0.6366 |
| 0.1447 | 37.04 | 2000 | 1.0167 | 0.6276 |
| 0.1109 | 46.3 | 2500 | 1.0638 | 0.5653 |
| 0.0797 | 55.56 | 3000 | 1.1447 | 0.5715 |
| 0.0636 | 64.81 | 3500 | 1.1503 | 0.5316 |
| 0.0466 | 74.07 | 4000 | 1.2227 | 0.5386 |
| 0.0372 | 83.33 | 4500 | 1.1214 | 0.5225 |
| 0.0239 | 92.59 | 5000 | 1.1375 | 0.4998 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["pa-IN"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | DrishtiSharma/wav2vec2-xls-r-pa-IN-a1 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pa-IN"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1508
* Wer: 0.4908
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1500
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Wer: 0.2279
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1
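Before running the full evaluation scripts, the checkpoint can be sanity-checked with the `transformers` ASR pipeline. This is a minimal sketch, assuming the published model id; the audio path is a placeholder, and the `chunk_length_s`/`stride_length_s` arguments mirror the flags in the second command above:

```python
# Minimal transcription sketch; "sample.wav" is a placeholder audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="DrishtiSharma/wav2vec2-xls-r-sl-a1")

# Chunked inference for long recordings, matching the dev_data flags above.
result = asr("sample.wav", chunk_length_s=10, stride_length_s=1)
print(result["text"])
```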
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3881 | 6.1 | 500 | 2.9710 | 1.0 |
| 2.6401 | 12.2 | 1000 | 1.7677 | 0.9734 |
| 1.5152 | 18.29 | 1500 | 0.5564 | 0.6011 |
| 1.2191 | 24.39 | 2000 | 0.4319 | 0.4390 |
| 1.0237 | 30.49 | 2500 | 0.3141 | 0.3175 |
| 0.8892 | 36.59 | 3000 | 0.2748 | 0.2689 |
| 0.8296 | 42.68 | 3500 | 0.2680 | 0.2534 |
| 0.7602 | 48.78 | 4000 | 0.2820 | 0.2506 |
| 0.7186 | 54.88 | 4500 | 0.2672 | 0.2398 |
| 0.6887 | 60.98 | 5000 | 0.2729 | 0.2402 |
| 0.6507 | 67.07 | 5500 | 0.2767 | 0.2361 |
| 0.6226 | 73.17 | 6000 | 0.2817 | 0.2332 |
| 0.6024 | 79.27 | 6500 | 0.2679 | 0.2279 |
| 0.5787 | 85.37 | 7000 | 0.2837 | 0.2316 |
| 0.5744 | 91.46 | 7500 | 0.2838 | 0.2284 |
| 0.5556 | 97.56 | 8000 | 0.2763 | 0.2281 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["sl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "sl"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-sl-a1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sl"}, "metrics": [{"type": "wer", "value": 0.20626555409164105, "name": "Test WER"}, {"type": "cer", "value": 0.051648321634392154, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sl"}, "metrics": [{"type": "wer", "value": 0.5406156320830592, "name": "Test WER"}, {"type": "cer", "value": 0.22249723590310583, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "sl"}, "metrics": [{"type": "wer", "value": 55.24, "name": "Test WER"}]}]}]} | DrishtiSharma/wav2vec2-xls-r-sl-a1 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"sl",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sl #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - SL dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2756
* Wer: 0.2279
### Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset mozilla-foundation/common\_voice\_8\_0 --config sl --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
python URL --model\_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset speech-recognition-community-v2/dev\_data --config sl --split validation --chunk\_length\_s 10 --stride\_length\_s 1
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
| [
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config sl --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset speech-recognition-community-v2/dev\\_data --config sl --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sl #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Evaluation Commands\n\n\n1. To evaluate on mozilla-foundation/common\\_voice\\_8\\_0 with test split\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset mozilla-foundation/common\\_voice\\_8\\_0 --config sl --split test --log\\_outputs\n\n\n2. To evaluate on speech-recognition-community-v2/dev\\_data\n\n\npython URL --model\\_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset speech-recognition-community-v2/dev\\_data --config sl --split validation --chunk\\_length\\_s 10 --stride\\_length\\_s 1",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2855
- Wer: 0.2401
## Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a2 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Votic language not found in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.9294 | 6.1 | 500 | 2.9712 | 1.0 |
| 2.8305 | 12.2 | 1000 | 1.7073 | 0.9479 |
| 1.4795 | 18.29 | 1500 | 0.5756 | 0.6397 |
| 1.3433 | 24.39 | 2000 | 0.4968 | 0.5424 |
| 1.1766 | 30.49 | 2500 | 0.4185 | 0.4743 |
| 1.0017 | 36.59 | 3000 | 0.3303 | 0.3578 |
| 0.9358 | 42.68 | 3500 | 0.3003 | 0.3051 |
| 0.8358 | 48.78 | 4000 | 0.3045 | 0.2884 |
| 0.7647 | 54.88 | 4500 | 0.2866 | 0.2677 |
| 0.7482 | 60.98 | 5000 | 0.2829 | 0.2585 |
| 0.6943 | 67.07 | 5500 | 0.2782 | 0.2478 |
| 0.6586 | 73.17 | 6000 | 0.2911 | 0.2537 |
| 0.6425 | 79.27 | 6500 | 0.2817 | 0.2462 |
| 0.6067 | 85.37 | 7000 | 0.2910 | 0.2436 |
| 0.5974 | 91.46 | 7500 | 0.2875 | 0.2430 |
| 0.5812 | 97.56 | 8000 | 0.2852 | 0.2396 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["sl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-sl-a2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sl"}, "metrics": [{"type": "wer", "value": 0.21695212999560826, "name": "Test WER"}, {"type": "cer", "value": 0.052850080572474256, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "vot"}, "metrics": [{"type": "wer", "value": 0.560722380639029, "name": "Test WER"}, {"type": "cer", "value": 0.2279626093074681, "name": "Test CER"}, {"type": "wer", "value": 56.07, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "sl"}, "metrics": [{"type": "wer", "value": 56.19, "name": "Test WER"}]}]}]} | DrishtiSharma/wav2vec2-xls-r-sl-a2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"sl",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - SL dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2855
* Wer: 0.2401
## Evaluation Commands
1. To evaluate on mozilla-foundation/common\_voice\_8\_0 with test split
python URL --model\_id DrishtiSharma/wav2vec2-xls-r-sl-a2 --dataset mozilla-foundation/common\_voice\_8\_0 --config sl --split test --log\_outputs
2. To evaluate on speech-recognition-community-v2/dev\_data
Votic language not found in speech-recognition-community-v2/dev\_data
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0604
- Precision: 0.9262
- Recall: 0.9375
- F1: 0.9318
- Accuracy: 0.9841
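For quick inspection, the checkpoint can be used through the `transformers` token-classification pipeline. This usage sketch is not part of the original card, and the example sentence is arbitrary:

```python
# Usage sketch: NER inference with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "ner",
    model="Duc/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Hugging Face is based in New York City."))
```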
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2424 | 1.0 | 878 | 0.0684 | 0.9096 | 0.9206 | 0.9150 | 0.9813 |
| 0.0524 | 2.0 | 1756 | 0.0607 | 0.9188 | 0.9349 | 0.9268 | 0.9835 |
| 0.0304 | 3.0 | 2634 | 0.0604 | 0.9262 | 0.9375 | 0.9318 | 0.9841 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9261715296198055, "name": "Precision"}, {"type": "recall", "value": 0.9374650408323079, "name": "Recall"}, {"type": "f1", "value": 0.9317840662700839, "name": "F1"}, {"type": "accuracy", "value": 0.9840659602522758, "name": "Accuracy"}]}]}]} | Duc/distilbert-base-uncased-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0604
* Precision: 0.9262
* Recall: 0.9375
* F1: 0.9318
* Accuracy: 0.9841
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | DueLinx0402/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## This model achieves a WER of 12.457178% on the common-voice ro test split
# wav2vec2-xls-r-300m-romanian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice ro and RSS datasets.
It achieves the following results on the evaluation set:
- eval_loss: 0.0836
- eval_wer: 0.0705
- eval_runtime: 160.4549
- eval_samples_per_second: 11.081
- eval_steps_per_second: 1.39
- epoch: 14.38
- step: 2703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
Used the following code for evaluation:
```
import torch
import torchaudio
import re
import string

from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ro", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("Dumiiii/wav2vec2-xls-r-300m-romanian")
model = Wav2Vec2ForCTC.from_pretrained("Dumiiii/wav2vec2-xls-r-300m-romanian")
model.to("cuda")

chars_to_ignore_regex = '[' + string.punctuation + ']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing: read the audio files as arrays, resample from 48 kHz to
# 16 kHz, and strip punctuation from the reference sentences.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Batched inference: forward pass on GPU, then greedy CTC decoding.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
Credits for evaluation: https://huggingface.co/anton-l | {"license": "apache-2.0", "tags": ["generated_from_trainer"]} | Dumiiii/wav2vec2-xls-r-300m-romanian | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
## This model achieves a WER of 12.457178% on the common-voice ro test split
# wav2vec2-xls-r-300m-romanian
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the Common Voice ro and RSS datasets.
It achieves the following results on the evaluation set:
- eval_loss: 0.0836
- eval_wer: 0.0705
- eval_runtime: 160.4549
- eval_samples_per_second: 11.081
- eval_steps_per_second: 1.39
- epoch: 14.38
- step: 2703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
Used the following code for evaluation:
Credits for evaluation: URL | [
"## This model achieves WER on common-voice ro test split of WER: 12.457178%",
"# wav2vec2-xls-r-300m-romanian\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an common voice ro and RSS dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0836\n- eval_wer: 0.0705\n- eval_runtime: 160.4549\n- eval_samples_per_second: 11.081\n- eval_steps_per_second: 1.39\n- epoch: 14.38\n- step: 2703",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 15\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3\n\n\nUsed the following code for evaluation:\n\n\nCredits for evaluation: URL"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"## This model achieves WER on common-voice ro test split of WER: 12.457178%",
"# wav2vec2-xls-r-300m-romanian\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an common voice ro and RSS dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0836\n- eval_wer: 0.0705\n- eval_runtime: 160.4549\n- eval_samples_per_second: 11.081\n- eval_steps_per_second: 1.39\n- epoch: 14.38\n- step: 2703",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 15\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3\n\n\nUsed the following code for evaluation:\n\n\nCredits for evaluation: URL"
] |
text-generation | transformers |
# Alexia Bot Testing | {} | Duugu/alexia-bot-test | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us
|
# Alexia Bot Testing | [
"# Alexia Bot Testing"
] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us \n",
"# Alexia Bot Testing"
] |
text-generation | transformers |
# My Awesome Model | {"tags": ["conversational"]} | Duugu/jakebot3000 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model | [
"# My Awesome Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
text-generation | transformers |
#Landcheese | {"tags": ["conversational"]} | Dyzi/DialoGPT-small-landcheese | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Landcheese | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# out
This model is a fine-tuned version of [/1TB_SSD/SB_AI/out_epoch1/out/checkpoint-1115000/](https://huggingface.co//1TB_SSD/SB_AI/out_epoch1/out/checkpoint-1115000/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0645
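Since the card leaves the usage sections empty, here is a generic loading sketch for the published checkpoint. The input string is a placeholder; the model's actual prompt format is not documented in this card:

```python
# Loading sketch only; the real input format for this T5 checkpoint is not
# described here, so the text below is a stand-in.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("EColi/sponsorblock-base-v1")
model = AutoModelForSeq2SeqLM.from_pretrained("EColi/sponsorblock-base-v1")

inputs = tokenizer("example transcript text", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```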
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 2518227880
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 0.0867 | 0.07 | 75000 | 0.0742 |
| 0.0783 | 0.13 | 150000 | 0.0695 |
| 0.0719 | 0.2 | 225000 | 0.0732 |
| 0.0743 | 0.27 | 300000 | 0.0663 |
| 0.0659 | 0.34 | 375000 | 0.0686 |
| 0.0664 | 0.4 | 450000 | 0.0683 |
| 0.0637 | 0.47 | 525000 | 0.0680 |
| 0.0655 | 0.54 | 600000 | 0.0641 |
| 0.0676 | 0.6 | 675000 | 0.0644 |
| 0.0704 | 0.67 | 750000 | 0.0645 |
| 0.0687 | 0.74 | 825000 | 0.0610 |
| 0.059 | 0.81 | 900000 | 0.0652 |
| 0.0666 | 0.87 | 975000 | 0.0619 |
| 0.0624 | 0.94 | 1050000 | 0.0619 |
| 0.0625 | 1.01 | 1125000 | 0.0667 |
| 0.0614 | 1.03 | 1150000 | 0.0658 |
| 0.0597 | 1.05 | 1175000 | 0.0683 |
| 0.0629 | 1.07 | 1200000 | 0.0691 |
| 0.0603 | 1.1 | 1225000 | 0.0678 |
| 0.0601 | 1.12 | 1250000 | 0.0746 |
| 0.0606 | 1.14 | 1275000 | 0.0691 |
| 0.0671 | 1.16 | 1300000 | 0.0702 |
| 0.0625 | 1.19 | 1325000 | 0.0661 |
| 0.0617 | 1.21 | 1350000 | 0.0688 |
| 0.0579 | 1.23 | 1375000 | 0.0679 |
| 0.0663 | 1.25 | 1400000 | 0.0634 |
| 0.0583 | 1.28 | 1425000 | 0.0638 |
| 0.0623 | 1.3 | 1450000 | 0.0681 |
| 0.0615 | 1.32 | 1475000 | 0.0670 |
| 0.0592 | 1.34 | 1500000 | 0.0666 |
| 0.0626 | 1.37 | 1525000 | 0.0666 |
| 0.063 | 1.39 | 1550000 | 0.0647 |
| 0.0648 | 1.41 | 1575000 | 0.0653 |
| 0.0611 | 1.43 | 1600000 | 0.0700 |
| 0.0622 | 1.46 | 1625000 | 0.0634 |
| 0.0617 | 1.48 | 1650000 | 0.0651 |
| 0.0613 | 1.5 | 1675000 | 0.0634 |
| 0.0639 | 1.52 | 1700000 | 0.0661 |
| 0.0615 | 1.54 | 1725000 | 0.0644 |
| 0.0605 | 1.57 | 1750000 | 0.0662 |
| 0.0622 | 1.59 | 1775000 | 0.0656 |
| 0.0585 | 1.61 | 1800000 | 0.0633 |
| 0.0628 | 1.63 | 1825000 | 0.0625 |
| 0.0638 | 1.66 | 1850000 | 0.0662 |
| 0.0599 | 1.68 | 1875000 | 0.0664 |
| 0.0583 | 1.7 | 1900000 | 0.0668 |
| 0.0543 | 1.72 | 1925000 | 0.0631 |
| 0.06 | 1.75 | 1950000 | 0.0629 |
| 0.0615 | 1.77 | 1975000 | 0.0644 |
| 0.0587 | 1.79 | 2000000 | 0.0663 |
| 0.0647 | 1.81 | 2025000 | 0.0654 |
| 0.0604 | 1.84 | 2050000 | 0.0639 |
| 0.0641 | 1.86 | 2075000 | 0.0636 |
| 0.0604 | 1.88 | 2100000 | 0.0636 |
| 0.0654 | 1.9 | 2125000 | 0.0652 |
| 0.0588 | 1.93 | 2150000 | 0.0638 |
| 0.0616 | 1.95 | 2175000 | 0.0657 |
| 0.0598 | 1.97 | 2200000 | 0.0646 |
| 0.0633 | 1.99 | 2225000 | 0.0645 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "out", "results": []}]} | EColi/sponsorblock-base-v1 | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| out
===
This model is a fine-tuned version of /1TB\_SSD/SB\_AI/out\_epoch1/out/checkpoint-1115000/ on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0645
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 2518227880
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2.0
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.17.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 2518227880\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 2518227880\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# Brooke DialoGPT Model | {"tags": ["conversational"]} | EEE/DialoGPT-medium-brooke | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Brooke DialoGPT Model | [
"# Brooke DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Brooke DialoGPT Model"
] |
text-generation | transformers |
# Aang DialoGPT Model | {"tags": ["conversational"]} | EEE/DialoGPT-small-aang | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Aang DialoGPT Model | [
"# Aang DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Aang DialoGPT Model"
] |
text-generation | transformers |
# Yoda DialoGPT Model | {"tags": ["conversational"]} | EEE/DialoGPT-small-yoda | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Yoda DialoGPT Model | [
"# Yoda DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Yoda DialoGPT Model"
] |
summarization | transformers |
**IMPORTANT:** On the 5th of April 2022, we detected a mistake in the configuration file; thus, the model was not generating the summaries correctly, and it was underperforming in all scenarios. For this reason, if you used the model before that date, we would be glad if you re-evaluated it before publishing any results obtained with it. We apologize for the inconvenience and thank you for your understanding.
# NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish
Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not allow to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out an exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish.
# The NASca model
News Abstractive Summarization for Catalan (NASca) is a Transformer encoder-decoder model, with the same hyper-parameters as BART, that performs summarization of Catalan news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Catalan newspapers, the Catalan subset of the OSCAR corpus, and Wikipedia articles in Catalan were used for pre-training the model (9.3 GB of raw text, 2.5 million documents).
NASca is fine-tuned for the summarization task on 636,596 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).
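A minimal inference sketch, assuming the standard `summarization` pipeline works for this BART-style checkpoint (the article text is a placeholder, and the decoding lengths are arbitrary choices, not values from the card):

```python
# Usage sketch for Catalan abstractive summarization; not from the original card.
from transformers import pipeline

summarizer = pipeline("summarization", model="ELiRF/NASCA")

catalan_article = "..."  # placeholder: a Catalan news article goes here
print(summarizer(catalan_article, max_length=128, min_length=16)[0]["summary_text"])
```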
### BibTeX entry
```bibtex
@Article{app11219872,
AUTHOR = {Ahuir, Vicent and Hurtado, Lluís-F. and González, José Ángel and Segarra, Encarna},
TITLE = {NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish},
JOURNAL = {Applied Sciences},
VOLUME = {11},
YEAR = {2021},
NUMBER = {21},
ARTICLE-NUMBER = {9872},
URL = {https://www.mdpi.com/2076-3417/11/21/9872},
ISSN = {2076-3417},
DOI = {10.3390/app11219872}
}
``` | {"language": "ca", "tags": ["summarization"], "widget": [{"text": "La Universitat Polit\u00e8cnica de Val\u00e8ncia (UPV), a trav\u00e9s del projecte Atenea \u201cplataforma de dones, art i tecnologia\u201d i en col\u00b7laboraci\u00f3 amb les companyies tecnol\u00f2giques Metric Salad i Zetalab, ha digitalitzat i modelat en 3D per a la 35a edici\u00f3 del Festival Dansa Val\u00e8ncia, que se celebra del 2 al 10 d'abril, la primera pe\u00e7a de dansa en un metaverso espec\u00edfic. La pe\u00e7a No \u00e9s amor, dirigida per Lara Mis\u00f3, forma part de la programaci\u00f3 d'aquesta edici\u00f3 del Festival Dansa Val\u00e8ncia i explora la figura geom\u00e8trica del cercle des de totes les seues perspectives: espacial, corporal i compositiva. No \u00e9s amor est\u00e0 inspirada en el treball de l'artista japonesa Yayoi Kusama i mira de prop les diferents facetes d'una obsessi\u00f3. Aix\u00ed dona cabuda a la insist\u00e8ncia, la repetici\u00f3, el trastorn, la hipnosi i l'alliberament. El proc\u00e9s de digitalitzaci\u00f3, materialitzat per Metric Salad i ZetaLab, ha sigut complex respecte a uns altres ja realitzats a causa de l'enorme desafiament que comporta el modelatge en 3D de cossos en moviment al ritme de la composici\u00f3 de l'obra. L'objectiu era generar una experi\u00e8ncia el m\u00e9s realista possible i fidedigna de l'original perqu\u00e8 el resultat final fora un proc\u00e9s absolutament immersiu.Aix\u00ed, el metaverso est\u00e0 compost per figures modelades en 3D al costat de quatre projeccions digitalitzades en pantalles flotants amb les quals l'usuari podr\u00e0 interactuar segons es vaja acostant, b\u00e9 mitjan\u00e7ant els comandaments de l'ordinador, b\u00e9 a trav\u00e9s d'ulleres de realitat virtual. L'objectiu \u00e9s que quan l'usuari s'acoste a cadascuna de les projeccions tinga la sensaci\u00f3 d'una immersi\u00f3 quasi completa en fondre's amb el contingut audiovisual que li genere una experi\u00e8ncia intimista i molt real."}]} | ELiRF/NASCA | null | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"ca",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ca"
] | TAGS
#transformers #pytorch #safetensors #bart #text2text-generation #summarization #ca #autotrain_compatible #endpoints_compatible #region-us
|
IMPORTANT: On the 5th of April 2022, we detected a mistake in the configuration file; thus, the model was not generating the summaries correctly, and it was underperforming in all scenarios. For this reason, if you used the model before that date, we would be glad if you re-evaluated it before publishing any results obtained with it. We apologize for the inconvenience and thank you for your understanding.
# NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish
Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not allow to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out an exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish.
# The NASca model
News Abstractive Summarization for Catalan (NASca) is a Transformer encoder-decoder model, with the same hyper-parameters as BART, that performs summarization of Catalan news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Catalan newspapers, the Catalan subset of the OSCAR corpus, and Wikipedia articles in Catalan were used for pre-training the model (9.3 GB of raw text, 2.5 million documents).
NASca is fine-tuned for the summarization task on 636,596 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).
### BibTeX entry
| [
"# NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish\n\nMost of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not allow to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out an exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish.",
"# The NASca model\nNews Abstractive Summarization for Catalan (NASca) is a Transformer encoder-decoder model, with the same hyper-parameters than BART, to perform summarization of Catalan news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Catalan newspapers, the Catalan subset of the OSCAR corpus and Wikipedia articles in Catalan were used for pre-training the model (9.3GB of raw text -2.5 millions of documents-).\n\nNASca is finetuned for the summarization task on 636.596 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).",
"### BibTeX entry"
] | [
"TAGS\n#transformers #pytorch #safetensors #bart #text2text-generation #summarization #ca #autotrain_compatible #endpoints_compatible #region-us \n",
"# NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish\n\nMost of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not allow to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out an exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish.",
"# The NASca model\nNews Abstractive Summarization for Catalan (NASca) is a Transformer encoder-decoder model, with the same hyper-parameters than BART, to perform summarization of Catalan news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Catalan newspapers, the Catalan subset of the OSCAR corpus and Wikipedia articles in Catalan were used for pre-training the model (9.3GB of raw text -2.5 millions of documents-).\n\nNASca is finetuned for the summarization task on 636.596 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).",
"### BibTeX entry"
] |
summarization | transformers | **IMPORTANT:** On the 5th of April 2022, we detected a mistake in the configuration file; thus, the model was not generating the summaries correctly, and it was underperforming in all scenarios. For this reason, if you used the model before that date, we would be glad if you re-evaluated it before publishing any results obtained with it. We apologize for the inconvenience and thank you for your understanding.
# NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish
Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not allow to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out an exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish.
# The NASes model
News Abstractive Summarization for Spanish (NASes) is a Transformer encoder-decoder model, with the same hyper-parameters as BART, that performs summarization of Spanish news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Spanish newspapers and Wikipedia articles in Spanish were used for pre-training the model (21 GB of raw text, 8.5 million documents).
NASes is fine-tuned for the summarization task on 1,802,919 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).
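For a quick check, the model can be loaded with the standard `transformers` summarization pipeline. The snippet below is a minimal sketch: the input text and the generation parameters are illustrative assumptions, not values recommended by the authors.

```python
from transformers import pipeline

# Minimal sketch: load NASes through the generic summarization pipeline
# (the checkpoint is a BART-style encoder-decoder).
summarizer = pipeline("summarization", model="ELiRF/NASES")

# Illustrative Spanish news snippet; any article-like text works.
article = (
    "La Agencia Valenciana de la Innovación (AVI) financia el desarrollo de un "
    "software que integra diferentes modelos y tecnologías para la monitorización "
    "y análisis multilingüe de las redes sociales."
)

# Generation parameters here are illustrative, not tuned values.
result = summarizer(article, max_length=64, truncation=True)
print(result[0]["summary_text"])
```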
### BibTeX entry
```bibtex
@Article{app11219872,
AUTHOR = {Ahuir, Vicent and Hurtado, Lluís-F. and González, José Ángel and Segarra, Encarna},
TITLE = {NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish},
JOURNAL = {Applied Sciences},
VOLUME = {11},
YEAR = {2021},
NUMBER = {21},
ARTICLE-NUMBER = {9872},
URL = {https://www.mdpi.com/2076-3417/11/21/9872},
ISSN = {2076-3417},
DOI = {10.3390/app11219872}
}
``` | {"language": "es", "tags": ["summarization"], "widget": [{"text": "La Agencia Valenciana de la Innovaci\u00f3n (AVI) financia el desarrollo de un software que integra diferentes modelos y tecnolog\u00edas para la monitorizaci\u00f3n y an\u00e1lisis multiling\u00fce de las redes sociales. A trav\u00e9s de t\u00e9cnicas de 'deep learning' y procesamiento del lenguaje natural es capaz de interpretar la iron\u00eda y las emociones en los textos, incluso en aquellos escritos en idiomas menos extendidos, a menudo no contemplados por las herramientas comerciales. La iniciativa, bautizada como 'Guaita', est\u00e1 liderada por el Instituto Valenciano de Investigaci\u00f3n en Inteligencia Artificial (VRAIN), adscrito a la Universidad Polit\u00e9cnica de Valencia (UPV), que cuenta a su vez para su desarrollo con la colaboraci\u00f3n del Instituto Valenciano de Inform\u00e1tica (ITI) y la Corporaci\u00f3n Valenciana de Mitjans de Comunicaci\u00f3n (CVMC).De este modo, y a solicitud del usuario o usuaria, monitorizar\u00e1 las redes sociales para obtener la informaci\u00f3n asociada a los temas objeto de inter\u00e9s y ofrecer\u00e1 los resultados de forma gr\u00e1fica, bien a trav\u00e9s de una interfaz web, bien mediante la generaci\u00f3n de informes. El programa ser\u00e1, adem\u00e1s, capaz de determinar la reputaci\u00f3n de una empresa o instituci\u00f3n a partir de dichos an\u00e1lisis gracias a la combinaci\u00f3n de distintas tecnolog\u00edas de procesamiento e interpretaci\u00f3n, destaca la agencia en un comunicado."}]} | ELiRF/NASES | null | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"es",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #safetensors #bart #text2text-generation #summarization #es #autotrain_compatible #endpoints_compatible #has_space #region-us
| IMPORTANT: On the 5th of April 2022, we detected a mistake in the configuration file; thus, the model was not generating the summaries correctly, and it was underperforming in all scenarios. For this reason, if you had used the model until that day, we would be glad if you would re-evaluate the model if you are publishing some results with it. We apologize for the inconvenience and thank you for your understanding.
# NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish
Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not allow to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out an exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish.
# The NASes model
News Abstractive Summarization for Spanish (NASes) is a Transformer encoder-decoder model, with the same hyper-parameters than BART, to perform summarization of Spanish news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Spanish newspapers, and Wikipedia articles in Spanish were used for pre-training the model (21GB of raw text -8.5 millions of documents-).
NASes is finetuned for the summarization task on 1.802.919 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).
### BibTeX entry
| [
"# NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish\n\nMost of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not allow to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out an exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish.",
"# The NASes model\n\nNews Abstractive Summarization for Spanish (NASes) is a Transformer encoder-decoder model, with the same hyper-parameters than BART, to perform summarization of Spanish news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Spanish newspapers, and Wikipedia articles in Spanish were used for pre-training the model (21GB of raw text -8.5 millions of documents-).\n\nNASes is finetuned for the summarization task on 1.802.919 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).",
"### BibTeX entry"
] | [
"TAGS\n#transformers #pytorch #safetensors #bart #text2text-generation #summarization #es #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish\n\nMost of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not allow to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out an exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish.",
"# The NASes model\n\nNews Abstractive Summarization for Spanish (NASes) is a Transformer encoder-decoder model, with the same hyper-parameters than BART, to perform summarization of Spanish news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Spanish newspapers, and Wikipedia articles in Spanish were used for pre-training the model (21GB of raw text -8.5 millions of documents-).\n\nNASes is finetuned for the summarization task on 1.802.919 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).",
"### BibTeX entry"
] |
fill-mask | transformers | # CroSloEngual BERT
CroSloEngual BERT is a trilingual model, using the bert-base architecture, trained on Croatian, Slovenian, and English corpora. Focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer, which a monolingual model would not.
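For a quick check, the model can be queried through the standard fill-mask pipeline. The snippet below is a minimal sketch; the Croatian example sentence is our own illustration.

```python
from transformers import pipeline

# Minimal sketch: BERT-style models use [MASK] as the mask token.
fill_mask = pipeline("fill-mask", model="EMBEDDIA/crosloengual-bert")

# Illustrative Croatian sentence: "Zagreb is the capital of [MASK]."
for prediction in fill_mask("Zagreb je glavni grad [MASK]."):
    print(prediction["token_str"], prediction["score"])
```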
Evaluation is presented in our article:
```
@Inproceedings{ulcar-robnik2020finest,
author = "Ulčar, M. and Robnik-Šikonja, M.",
year = 2020,
title = "{FinEst BERT} and {CroSloEngual BERT}: less is more in multilingual models",
editor = "Sojka, P and Kopeček, I and Pala, K and Horák, A",
booktitle = "Text, Speech, and Dialogue {TSD 2020}",
series = "Lecture Notes in Computer Science",
volume = 12284,
publisher = "Springer",
url = "https://doi.org/10.1007/978-3-030-58323-1_11",
}
```
The preprint is available at [arxiv.org/abs/2006.07890](https://arxiv.org/abs/2006.07890). | {"language": ["hr", "sl", "en", "multilingual"], "license": "cc-by-4.0"} | EMBEDDIA/crosloengual-bert | null | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"hr",
"sl",
"en",
"multilingual",
"arxiv:2006.07890",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2006.07890"
] | [
"hr",
"sl",
"en",
"multilingual"
] | TAGS
#transformers #pytorch #jax #bert #fill-mask #hr #sl #en #multilingual #arxiv-2006.07890 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| # CroSloEngual BERT
CroSloEngual BERT is a trilingual model, using bert-base architecture, trained on Croatian, Slovenian, and English corpora. Focusing on three languages, the model performs better than multilingual BERT, while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't.
Evaluation is presented in our article:
The preprint is available at URL | [
"# CroSloEngual BERT\nCroSloEngual BERT is a trilingual model, using bert-base architecture, trained on Croatian, Slovenian, and English corpora. Focusing on three languages, the model performs better than multilingual BERT, while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't. \n\nEvaluation is presented in our article:\n\nThe preprint is available at URL"
] | [
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #hr #sl #en #multilingual #arxiv-2006.07890 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# CroSloEngual BERT\nCroSloEngual BERT is a trilingual model, using bert-base architecture, trained on Croatian, Slovenian, and English corpora. Focusing on three languages, the model performs better than multilingual BERT, while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't. \n\nEvaluation is presented in our article:\n\nThe preprint is available at URL"
] |
fill-mask | transformers | # Usage
Load in transformers library with:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/est-roberta")
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/est-roberta")
```
# Est-RoBERTa
Est-RoBERTa is a monolingual Estonian BERT-like model. It is closely related to the French CamemBERT model (https://camembert-model.fr/). The Estonian corpora used for training the model contain 2.51 billion tokens in total. The subword vocabulary contains 40,000 tokens.
Est-RoBERTa was trained for 40 epochs.
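For a quick check of the loaded model, it can also be queried through the fill-mask pipeline. The snippet below is a minimal sketch; CamemBERT-style models use `<mask>` as the mask token, and the Estonian example sentence is our own illustration.

```python
from transformers import pipeline

# Minimal sketch: CamemBERT-style models use <mask> as the mask token.
fill_mask = pipeline("fill-mask", model="EMBEDDIA/est-roberta")

# Illustrative Estonian sentence: "Tallinn is the <mask> of Estonia."
for prediction in fill_mask("Tallinn on Eesti <mask>."):
    print(prediction["token_str"], prediction["score"])
```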
| {"language": ["et"], "license": "cc-by-sa-4.0"} | EMBEDDIA/est-roberta | null | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"et",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"et"
] | TAGS
#transformers #pytorch #camembert #fill-mask #et #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
| # Usage
Load in transformers library with:
# Est-RoBERTa
Est-RoBERTa model is a monolingual Estonian BERT-like model. It is closely related to French Camembert model URL The Estonian corpora used for training the model have 2.51 billion tokens in total. The subword vocabulary contains 40,000 tokens.
Est-RoBERTa was trained for 40 epochs.
| [
"# Usage\nLoad in transformers library with:",
"# Est-RoBERTa\nEst-RoBERTa model is a monolingual Estonian BERT-like model. It is closely related to French Camembert model URL The Estonian corpora used for training the model have 2.51 billion tokens in total. The subword vocabulary contains 40,000 tokens.\n\nEst-RoBERTa was trained for 40 epochs."
] | [
"TAGS\n#transformers #pytorch #camembert #fill-mask #et #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Usage\nLoad in transformers library with:",
"# Est-RoBERTa\nEst-RoBERTa model is a monolingual Estonian BERT-like model. It is closely related to French Camembert model URL The Estonian corpora used for training the model have 2.51 billion tokens in total. The subword vocabulary contains 40,000 tokens.\n\nEst-RoBERTa was trained for 40 epochs."
] |
fill-mask | transformers | # FinEst BERT
FinEst BERT is a trilingual model, using the bert-base architecture, trained on Finnish, Estonian, and English corpora. Focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer, which a monolingual model would not.
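For a quick check, the model can be queried through the standard fill-mask pipeline. The snippet below is a minimal sketch; the Finnish example sentence is our own illustration.

```python
from transformers import pipeline

# Minimal sketch: BERT-style models use [MASK] as the mask token.
fill_mask = pipeline("fill-mask", model="EMBEDDIA/finest-bert")

# Illustrative Finnish sentence: "Helsinki is the [MASK] of Finland."
for prediction in fill_mask("Helsinki on Suomen [MASK]."):
    print(prediction["token_str"], prediction["score"])
```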
Evaluation is presented in our article:
```
@Inproceedings{ulcar-robnik2020finest,
author = "Ulčar, M. and Robnik-Šikonja, M.",
year = 2020,
title = "{FinEst BERT} and {CroSloEngual BERT}: less is more in multilingual models",
editor = "Sojka, P and Kopeček, I and Pala, K and Horák, A",
booktitle = "Text, Speech, and Dialogue {TSD 2020}",
series = "Lecture Notes in Computer Science",
volume = 12284,
publisher = "Springer",
url = "https://doi.org/10.1007/978-3-030-58323-1_11",
}
```
The preprint is available at [arxiv.org/abs/2006.07890](https://arxiv.org/abs/2006.07890). | {"language": ["fi", "et", "en", "multilingual"], "license": "cc-by-4.0"} | EMBEDDIA/finest-bert | null | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"fi",
"et",
"en",
"multilingual",
"arxiv:2006.07890",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2006.07890"
] | [
"fi",
"et",
"en",
"multilingual"
] | TAGS
#transformers #pytorch #jax #bert #fill-mask #fi #et #en #multilingual #arxiv-2006.07890 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
| # FinEst BERT
FinEst BERT is a trilingual model, using bert-base architecture, trained on Finnish, Estonian, and English corpora. Focusing on three languages, the model performs better than multilingual BERT, while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't.
Evaluation is presented in our article:
The preprint is available at URL | [
"# FinEst BERT\nFinEst BERT is a trilingual model, using bert-base architecture, trained on Finnish, Estonian, and English corpora. Focusing on three languages, the model performs better than multilingual BERT, while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't. \n\nEvaluation is presented in our article:\n\nThe preprint is available at URL"
] | [
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #fi #et #en #multilingual #arxiv-2006.07890 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# FinEst BERT\nFinEst BERT is a trilingual model, using bert-base architecture, trained on Finnish, Estonian, and English corpora. Focusing on three languages, the model performs better than multilingual BERT, while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't. \n\nEvaluation is presented in our article:\n\nThe preprint is available at URL"
] |
fill-mask | transformers |
# LitLat BERT
LitLat BERT is a trilingual model, using the xlm-roberta-base architecture, trained on Lithuanian, Latvian, and English corpora. Focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer, which a monolingual model would not.
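For a quick check, the model can be queried through the standard fill-mask pipeline. The snippet below is a minimal sketch; XLM-RoBERTa-style models use `<mask>` as the mask token, and the Lithuanian example sentence is our own illustration.

```python
from transformers import pipeline

# Minimal sketch: XLM-RoBERTa-style models use <mask> as the mask token.
fill_mask = pipeline("fill-mask", model="EMBEDDIA/litlat-bert")

# Illustrative Lithuanian sentence: "Vilnius is the <mask> of Lithuania."
for prediction in fill_mask("Vilnius yra Lietuvos <mask>."):
    print(prediction["token_str"], prediction["score"])
```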
### Named entity recognition evaluation
We compare LitLat BERT with multilingual BERT (mBERT), XLM-RoBERTa (XLM-R), and the monolingual Latvian BERT (LVBERT) (Znotins and Barzdins, 2020). We report the results as a macro F1 score over the three named entity classes shared by all three datasets: person, location, and organization.
Language | mBERT | XLM-R | LVBERT | LitLat
---|---|---|---|---
Latvian | 0.830 | 0.865 | 0.797 | **0.881**
Lithuanian | 0.797 | 0.817 | / | **0.850**
English | 0.939 | 0.937 | / | **0.943**
| {"language": ["lt", "lv", "en", "multilingual"], "license": "cc-by-sa-4.0"} | EMBEDDIA/litlat-bert | null | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"lt",
"lv",
"en",
"multilingual",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"lt",
"lv",
"en",
"multilingual"
] | TAGS
#transformers #pytorch #xlm-roberta #fill-mask #lt #lv #en #multilingual #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
| LitLat BERT
===========
LitLat BERT is a trilingual model, using xlm-roberta-base architecture, trained on Lithuanian, Latvian, and English corpora. Focusing on three languages, the model performs better than multilingual BERT, while still offering an option for cross-lingual knowledge transfer, which a monolingual model wouldn't.
### Named entity recognition evaluation
We compare LitLat BERT with multilingual BERT (mBERT), XLM-RoBERTa (XLM-R), and the monolingual Latvian BERT (LVBERT) (Znotins and Barzdins, 2020). We report the results as a macro F1 score over the three named entity classes shared by all three datasets: person, location, and organization.
| [
"### Named entity recognition evaluation\n\n\nWe compare LitLat BERT with multilingual BERT (mBERT), XLM-RoBERTa (XLM-R) and monolingual Latvian BERT (LVBERT) (Znotins and Barzdins, 2020). The report the results as a macro F1 score of 3 named entity classes shared in all three datasets: person, location, organization."
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #fill-mask #lt #lv #en #multilingual #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Named entity recognition evaluation\n\n\nWe compare LitLat BERT with multilingual BERT (mBERT), XLM-RoBERTa (XLM-R) and monolingual Latvian BERT (LVBERT) (Znotins and Barzdins, 2020). The report the results as a macro F1 score of 3 named entity classes shared in all three datasets: person, location, organization."
] |
fill-mask | transformers | # Usage
Load in transformers library with:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/sloberta")
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/sloberta")
```
# SloBERTa
SloBERTa is a monolingual Slovene BERT-like model. It is closely related to the French CamemBERT model (https://camembert-model.fr/). The corpora used for training the model contain 3.47 billion tokens in total. The subword vocabulary contains 32,000 tokens. The scripts and programs used for data preparation and training the model are available at https://github.com/clarinsi/Slovene-BERT-Tool.
SloBERTa was trained for 200,000 iterations or about 98 epochs.
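For a quick check of the loaded model, it can also be queried through the fill-mask pipeline. The snippet below is a minimal sketch; CamemBERT-style models use `<mask>` as the mask token, and the Slovene example sentence is our own illustration.

```python
from transformers import pipeline

# Minimal sketch: CamemBERT-style models use <mask> as the mask token.
fill_mask = pipeline("fill-mask", model="EMBEDDIA/sloberta")

# Illustrative Slovene sentence: "Ljubljana is the capital of <mask>."
for prediction in fill_mask("Ljubljana je glavno mesto <mask>."):
    print(prediction["token_str"], prediction["score"])
```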
## Corpora
The following corpora were used for training the model:
* Gigafida 2.0
* Kas 1.0
* Janes 1.0 (only Janes-news, Janes-forum, Janes-blog, Janes-wiki subcorpora)
* Slovenian parliamentary corpus siParl 2.0
* slWaC
| {"language": ["sl"], "license": "cc-by-sa-4.0"} | EMBEDDIA/sloberta | null | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"sl",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sl"
] | TAGS
#transformers #pytorch #camembert #fill-mask #sl #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| # Usage
Load in transformers library with:
# SloBERTa
SloBERTa model is a monolingual Slovene BERT-like model. It is closely related to French Camembert model URL The corpora used for training the model have 3.47 billion tokens in total. The subword vocabulary contains 32,000 tokens. The scripts and programs used for data preparation and training the model are available on URL
SloBERTa was trained for 200,000 iterations or about 98 epochs.
## Corpora
The following corpora were used for training the model:
* Gigafida 2.0
* Kas 1.0
* Janes 1.0 (only Janes-news, Janes-forum, Janes-blog, Janes-wiki subcorpora)
* Slovenian parliamentary corpus siParl 2.0
* slWaC
| [
"# Usage\nLoad in transformers library with:",
"# SloBERTa\nSloBERTa model is a monolingual Slovene BERT-like model. It is closely related to French Camembert model URL The corpora used for training the model have 3.47 billion tokens in total. The subword vocabulary contains 32,000 tokens. The scripts and programs used for data preparation and training the model are available on URL\n\nSloBERTa was trained for 200,000 iterations or about 98 epochs.",
"## Corpora\nThe following corpora were used for training the model:\n* Gigafida 2.0\n* Kas 1.0\n* Janes 1.0 (only Janes-news, Janes-forum, Janes-blog, Janes-wiki subcorpora)\n* Slovenian parliamentary corpus siParl 2.0\n* slWaC"
] | [
"TAGS\n#transformers #pytorch #camembert #fill-mask #sl #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Usage\nLoad in transformers library with:",
"# SloBERTa\nSloBERTa model is a monolingual Slovene BERT-like model. It is closely related to French Camembert model URL The corpora used for training the model have 3.47 billion tokens in total. The subword vocabulary contains 32,000 tokens. The scripts and programs used for data preparation and training the model are available on URL\n\nSloBERTa was trained for 200,000 iterations or about 98 epochs.",
"## Corpora\nThe following corpora were used for training the model:\n* Gigafida 2.0\n* Kas 1.0\n* Janes 1.0 (only Janes-news, Janes-forum, Janes-blog, Janes-wiki subcorpora)\n* Slovenian parliamentary corpus siParl 2.0\n* slWaC"
] |
fill-mask | transformers |
# bio-lm
## Model description
This model is a [RoBERTa base pre-trained model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang).
## Intended uses & limitations
#### How to use
The intended use of this model is to be fine-tuned for downstream tasks, token classification in particular.
To have a quick check of the model as-is in a fill-mask task:
```python
from transformers import pipeline, RobertaTokenizerFast

# The model must be used with the roberta-base tokenizer, so load it explicitly.
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
text = "Let us try this model to see if it <mask>."
fill_mask = pipeline(
"fill-mask",
model='EMBO/bio-lm',
tokenizer=tokenizer
)
fill_mask(text)
```
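As a sketch of the intended downstream use, the checkpoint can also be loaded with a freshly initialized token-classification head before fine-tuning. The number of labels below is an illustrative assumption (it matches the 15 features used by the sd-ner model described elsewhere), not a value fixed by this checkpoint.

```python
from transformers import RobertaTokenizerFast, RobertaForTokenClassification

# Hypothetical downstream setup: the classification head is freshly
# initialized and must be fine-tuned on labelled data.
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/bio-lm', num_labels=15)
```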
#### Limitations and bias
This model should be fine-tuned on a specific task such as token classification.
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained with a masked language modeling task on the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang), which includes 12 million examples from abstracts and figure legends extracted from papers published in the life sciences.
## Training procedure
The training was run on an NVIDIA DGX Station with 4× Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Command: `python -m lm.train /data/json/oapmc_abstracts_figs/ MLM`
- Tokenizer vocab size: 50265
- Training data: EMBO/biolang MLM
- Training with: 12005390 examples
- Evaluating on: 36713 examples
- Epochs: 3.0
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- tensorboard run: lm-MLM-2021-01-27T15-17-43.113766
End of training:
```
trainset: 'loss': 0.8653350830078125
validation set: 'eval_loss': 0.8192330598831177, 'eval_recall': 0.8154601116513597
```
## Eval results
Eval on test set:
```
recall: 0.814471959728645
```
| {"language": ["english"], "tags": ["language model"], "datasets": ["EMBO/biolang"], "metrics": []} | EMBO/bio-lm | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"language model",
"dataset:EMBO/biolang",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"english"
] | TAGS
#transformers #pytorch #jax #roberta #fill-mask #language model #dataset-EMBO/biolang #autotrain_compatible #endpoints_compatible #region-us
|
# bio-lm
## Model description
This model is a RoBERTa base pre-trained model that was further trained using a masked language modeling task on a compendium of english scientific textual examples from the life sciences using the BioLang dataset.
## Intended uses & limitations
#### How to use
The intended use of this model is to be fine-tuned for downstream tasks, token classification in particular.
To have a quick check of the model as-is in a fill-mask task:
#### Limitations and bias
This model should be fine-tuned on a specific task such as token classification.
The model must be used with the 'roberta-base' tokenizer.
## Training data
The model was trained with a masked language modeling task on the BioLang dataset, which includes 12 million examples from abstracts and figure legends extracted from papers published in the life sciences.
## Training procedure
The training was run on an NVIDIA DGX Station with 4× Tesla V100 GPUs.
Training code is available at URL
- Command: 'python -m URL /data/json/oapmc_abstracts_figs/ MLM'
- Tokenizer vocab size: 50265
- Training data: EMBO/biolang MLM
- Training with: 12005390 examples
- Evaluating on: 36713 examples
- Epochs: 3.0
- 'per_device_train_batch_size': 16
- 'per_device_eval_batch_size': 16
- 'learning_rate': 5e-05
- 'weight_decay': 0.0
- 'adam_beta1': 0.9
- 'adam_beta2': 0.999
- 'adam_epsilon': 1e-08
- 'max_grad_norm': 1.0
- tensorboard run: lm-MLM-2021-01-27T15-17-43.113766
End of training:
## Eval results
Eval on test set:
| [
"# bio-lm",
"## Model description\n\nThis model is a RoBERTa base pre-trained model that was further trained using a masked language modeling task on a compendium of english scientific textual examples from the life sciences using the BioLang dataset.",
"## Intended uses & limitations",
"#### How to use\n\nThe intended use of this model is to be fine-tuned for downstream tasks, token classification in particular.\n\nTo have a quick check of the model as-is in a fill-mask task:",
"#### Limitations and bias\n\nThis model should be fine-tuned on a specifi task like token classification.\nThe model must be used with the 'roberta-base' tokenizer.",
"## Training data\n\nThe model was trained with a masked language modeling taskon the BioLang dataset wich includes 12Mio examples from abstracts and figure legends extracted from papers published in life sciences.",
"## Training procedure\n\nThe training was run on a NVIDIA DGX Station with 4XTesla V100 GPUs.\n\nTraining code is available at URL\n\n- Command: 'python -m URL /data/json/oapmc_abstracts_figs/ MLM'\n- Tokenizer vocab size: 50265\n- Training data: EMBO/biolang MLM\n- Training with: 12005390 examples\n- Evaluating on: 36713 examples\n- Epochs: 3.0\n- 'per_device_train_batch_size': 16\n- 'per_device_eval_batch_size': 16\n- 'learning_rate': 5e-05\n- 'weight_decay': 0.0\n- 'adam_beta1': 0.9\n- 'adam_beta2': 0.999\n- 'adam_epsilon': 1e-08\n- 'max_grad_norm': 1.0\n- tensorboard run: lm-MLM-2021-01-27T15-17-43.113766\n\nEnd of training:",
"## Eval results\n\nEval on test set:"
] | [
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #language model #dataset-EMBO/biolang #autotrain_compatible #endpoints_compatible #region-us \n",
"# bio-lm",
"## Model description\n\nThis model is a RoBERTa base pre-trained model that was further trained using a masked language modeling task on a compendium of english scientific textual examples from the life sciences using the BioLang dataset.",
"## Intended uses & limitations",
"#### How to use\n\nThe intended use of this model is to be fine-tuned for downstream tasks, token classification in particular.\n\nTo have a quick check of the model as-is in a fill-mask task:",
"#### Limitations and bias\n\nThis model should be fine-tuned on a specifi task like token classification.\nThe model must be used with the 'roberta-base' tokenizer.",
"## Training data\n\nThe model was trained with a masked language modeling taskon the BioLang dataset wich includes 12Mio examples from abstracts and figure legends extracted from papers published in life sciences.",
"## Training procedure\n\nThe training was run on a NVIDIA DGX Station with 4XTesla V100 GPUs.\n\nTraining code is available at URL\n\n- Command: 'python -m URL /data/json/oapmc_abstracts_figs/ MLM'\n- Tokenizer vocab size: 50265\n- Training data: EMBO/biolang MLM\n- Training with: 12005390 examples\n- Evaluating on: 36713 examples\n- Epochs: 3.0\n- 'per_device_train_batch_size': 16\n- 'per_device_eval_batch_size': 16\n- 'learning_rate': 5e-05\n- 'weight_decay': 0.0\n- 'adam_beta1': 0.9\n- 'adam_beta2': 0.999\n- 'adam_epsilon': 1e-08\n- 'max_grad_norm': 1.0\n- tensorboard run: lm-MLM-2021-01-27T15-17-43.113766\n\nEnd of training:",
"## Eval results\n\nEval on test set:"
] |
token-classification | transformers |
# sd-ner
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `NER` configuration to perform Named Entity Recognition of bioentities.
## Intended uses & limitations
#### How to use
The intended use of this model is for Named Entity Recognition of biological entities used in SourceData annotations (https://sourcedata.embo.org), including small molecules, gene products (genes and proteins), subcellular components, cell line and cell types, organ and tissues, species as well as experimental methods.
To have a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """<s> F. Western blot of input and eluates of Upf1 domains purification in a Nmd4-HA strain. The band with the # might corresponds to a dimer of Upf1-CH, bands marked with a star correspond to residual signal with the anti-HA antibodies (Nmd4). Fragments in the eluate have a smaller size because the protein A part of the tag was removed by digestion with the TEV protease. G6PDH served as a loading control in the input samples </s>"""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-ner')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res:
print(r['word'], r['entity'])
```
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [EMBO/sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset, which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4× Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Model fine-tuned: EMBO/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: NER
- Training with 48771 examples.
- Evaluating on 13801 examples.
- Training on 15 features: O, I-SMALL_MOLECULE, B-SMALL_MOLECULE, I-GENEPROD, B-GENEPROD, I-SUBCELLULAR, B-SUBCELLULAR, I-CELL, B-CELL, I-TISSUE, B-TISSUE, I-ORGANISM, B-ORGANISM, I-EXP_ASSAY, B-EXP_ASSAY
- Epochs: 0.6
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
## Eval results
Testing on 7178 examples of test set with `sklearn.metrics`:
```
precision recall f1-score support
CELL 0.69 0.81 0.74 5245
EXP_ASSAY 0.56 0.57 0.56 10067
GENEPROD 0.77 0.89 0.82 23587
ORGANISM 0.72 0.82 0.77 3623
SMALL_MOLECULE 0.70 0.80 0.75 6187
SUBCELLULAR 0.65 0.72 0.69 3700
TISSUE 0.62 0.73 0.67 3207
micro avg 0.70 0.79 0.74 55616
macro avg 0.67 0.77 0.72 55616
weighted avg 0.70 0.79 0.74 55616
{'test_loss': 0.1830928772687912, 'test_accuracy_score': 0.9334821000160841, 'test_precision': 0.6987463009514112, 'test_recall': 0.789682825086306, 'test_f1': 0.7414366506288511, 'test_runtime': 61.0547, 'test_samples_per_second': 117.567, 'test_steps_per_second': 1.851}
```
| {"language": ["english"], "license": "agpl-3.0", "tags": ["token classification"], "datasets": ["EMBO/sd-nlp"], "metrics": []} | EMBO/sd-ner | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"token-classification",
"token classification",
"dataset:EMBO/sd-nlp",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"english"
] | TAGS
#transformers #pytorch #jax #roberta #token-classification #token classification #dataset-EMBO/sd-nlp #license-agpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# sd-ner
## Model description
This model is a RoBERTa base model that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the BioLang dataset. It was then fine-tuned for token classification on the SourceData sd-nlp dataset with the 'NER' configuration to perform Named Entity Recognition of bioentities.
## Intended uses & limitations
#### How to use
The intended use of this model is for Named Entity Recognition of biological entities used in SourceData annotations (URL), including small molecules, gene products (genes and proteins), subcellular components, cell line and cell types, organ and tissues, species as well as experimental methods.
To have a quick check of the model:
#### Limitations and bias
The model must be used with the 'roberta-base' tokenizer.
## Training data
The model was trained for token classification using the EMBO/sd-nlp dataset, which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs.
Training code is available at URL
- Model fine-tuned: EMBO/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: NER
- Training with 48771 examples.
- Evaluating on 13801 examples.
- Training on 15 features: O, I-SMALL_MOLECULE, B-SMALL_MOLECULE, I-GENEPROD, B-GENEPROD, I-SUBCELLULAR, B-SUBCELLULAR, I-CELL, B-CELL, I-TISSUE, B-TISSUE, I-ORGANISM, B-ORGANISM, I-EXP_ASSAY, B-EXP_ASSAY
- Epochs: 0.6
- 'per_device_train_batch_size': 16
- 'per_device_eval_batch_size': 16
- 'learning_rate': 0.0001
- 'weight_decay': 0.0
- 'adam_beta1': 0.9
- 'adam_beta2': 0.999
- 'adam_epsilon': 1e-08
- 'max_grad_norm': 1.0
## Eval results
Testing on 7178 examples of test set with 'sklearn.metrics':
| [
"# sd-ner",
"## Model description\n\nThis model is a RoBERTa base model that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the BioLang dataset. It was then fine-tuned for token classification on the SourceData sd-nlp dataset with the 'NER' configuration to perform Named Entity Recognition of bioentities.",
"## Intended uses & limitations",
"#### How to use\n\nThe intended use of this model is for Named Entity Recognition of biological entities used in SourceData annotations (URL), including small molecules, gene products (genes and proteins), subcellular components, cell line and cell types, organ and tissues, species as well as experimental methods.\n\nTo have a quick check of the model:",
"#### Limitations and bias\n\nThe model must be used with the 'roberta-base' tokenizer.",
"## Training data\n\nThe model was trained for token classification using the EMBO/sd-nlp dataset dataset which includes manually annotated examples.",
"## Training procedure\n\nThe training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs.\n\nTraining code is available at URL\n\n- Model fine-tuned: EMBO/bio-lm\n- Tokenizer vocab size: 50265\n- Training data: EMBO/sd-nlp\n- Dataset configuration: NER\n- Training with 48771 examples.\n- Evaluating on 13801 examples.\n- Training on 15 features: O, I-SMALL_MOLECULE, B-SMALL_MOLECULE, I-GENEPROD, B-GENEPROD, I-SUBCELLULAR, B-SUBCELLULAR, I-CELL, B-CELL, I-TISSUE, B-TISSUE, I-ORGANISM, B-ORGANISM, I-EXP_ASSAY, B-EXP_ASSAY\n- Epochs: 0.6\n- 'per_device_train_batch_size': 16\n- 'per_device_eval_batch_size': 16\n- 'learning_rate': 0.0001\n- 'weight_decay': 0.0\n- 'adam_beta1': 0.9\n- 'adam_beta2': 0.999\n- 'adam_epsilon': 1e-08\n- 'max_grad_norm': 1.0",
"## Eval results\n\nTesting on 7178 examples of test set with 'sklearn.metrics':"
] | [
"TAGS\n#transformers #pytorch #jax #roberta #token-classification #token classification #dataset-EMBO/sd-nlp #license-agpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# sd-ner",
"## Model description\n\nThis model is a RoBERTa base model that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the BioLang dataset. It was then fine-tuned for token classification on the SourceData sd-nlp dataset with the 'NER' configuration to perform Named Entity Recognition of bioentities.",
"## Intended uses & limitations",
"#### How to use\n\nThe intended use of this model is for Named Entity Recognition of biological entities used in SourceData annotations (URL), including small molecules, gene products (genes and proteins), subcellular components, cell line and cell types, organ and tissues, species as well as experimental methods.\n\nTo have a quick check of the model:",
"#### Limitations and bias\n\nThe model must be used with the 'roberta-base' tokenizer.",
"## Training data\n\nThe model was trained for token classification using the EMBO/sd-nlp dataset dataset which includes manually annotated examples.",
"## Training procedure\n\nThe training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs.\n\nTraining code is available at URL\n\n- Model fine-tuned: EMBO/bio-lm\n- Tokenizer vocab size: 50265\n- Training data: EMBO/sd-nlp\n- Dataset configuration: NER\n- Training with 48771 examples.\n- Evaluating on 13801 examples.\n- Training on 15 features: O, I-SMALL_MOLECULE, B-SMALL_MOLECULE, I-GENEPROD, B-GENEPROD, I-SUBCELLULAR, B-SUBCELLULAR, I-CELL, B-CELL, I-TISSUE, B-TISSUE, I-ORGANISM, B-ORGANISM, I-EXP_ASSAY, B-EXP_ASSAY\n- Epochs: 0.6\n- 'per_device_train_batch_size': 16\n- 'per_device_eval_batch_size': 16\n- 'learning_rate': 0.0001\n- 'weight_decay': 0.0\n- 'adam_beta1': 0.9\n- 'adam_beta2': 0.999\n- 'adam_epsilon': 1e-08\n- 'max_grad_norm': 1.0",
"## Eval results\n\nTesting on 7178 examples of test set with 'sklearn.metrics':"
] |
token-classification | transformers |
# sd-panelization
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `PANELIZATION` task to perform 'parsing' or 'segmentation' of figure legends into fragments corresponding to sub-panels.
Figures are usually composite representations of results obtained with heterogeneous experimental approaches and systems. Breaking figures into panels allows identifying more coherent descriptions of individual scientific experiments.
## Intended uses & limitations
#### How to use
The intended use of this model is for 'parsing' figure legends into sub-fragments corresponding to individual panels as used in SourceData annotations (https://sourcedata.embo.org).
To have a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """Fig 4. a, Volume density of early (Avi) and late (Avd) autophagic vacuoles.a, Volume density of early (Avi) and late (Avd) autophagic vacuoles from four independent cultures. Examples of Avi and Avd are shown in b and c, respectively. Bars represent 0.4����m. d, Labelling density of cathepsin-D as estimated in two independent experiments. e, Labelling density of LAMP-1."""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-panelization')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res: print(r['word'], r['entity'])
```
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [`EMBO/sd-nlp PANELIZATION`](https://huggingface.co/datasets/EMBO/sd-nlp) dataset, which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4× Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Model fine-tuned: EMBO/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: PANELIZATION
- Training with 2175 examples.
- Evaluating on 622 examples.
- Training on 2 features: `O`, `B-PANEL_START`
- Epochs: 1.3
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
## Eval results
Testing on 1802 examples from test set with `sklearn.metrics`:
```
precision recall f1-score support
PANEL_START 0.89 0.95 0.92 5427
micro avg 0.89 0.95 0.92 5427
macro avg 0.89 0.95 0.92 5427
weighted avg 0.89 0.95 0.92 5427
```
| {"language": ["english"], "license": "agpl-3.0", "tags": ["token classification"], "datasets": ["EMBO/sd-nlp"], "metrics": []} | EMBO/sd-panelization | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"token-classification",
"dataset:EMBO/sd-nlp",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"english"
] | TAGS
#transformers #pytorch #jax #roberta #token-classification #dataset-EMBO/sd-nlp #license-agpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# sd-panelization
## Model description
This model is a RoBERTa base model that was further trained using a masked language modeling task on a compendium of english scientific textual examples from the life sciences using the BioLang dataset. It was then fine-tuned for token classification on the SourceData sd-nlp dataset with the 'PANELIZATION' task to perform 'parsing' or 'segmentation' of figure legends into fragments corresponding to sub-panels.
Figures are usually composite representations of results obtained with heterogeneous experimental approaches and systems. Breaking figures into panels allows identifying more coherent descriptions of individual scientific experiments.
## Intended uses & limitations
#### How to use
The intended use of this model is for 'parsing' figure legends into sub-fragments corresponding to individual panels as used in SourceData annotations (URL).
To have a quick check of the model:
#### Limitations and bias
The model must be used with the 'roberta-base' tokenizer.
## Training data
The model was trained for token classification using the 'EMBO/sd-nlp PANELIZATION' dataset which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs.
Training code is available at URL
- Model fine-tuned: EMBO/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: PANELIZATION
- Training with 2175 examples.
- Evaluating on 622 examples.
- Training on 2 features: 'O', 'B-PANEL_START'
- Epochs: 1.3
- 'per_device_train_batch_size': 16
- 'per_device_eval_batch_size': 16
- 'learning_rate': 0.0001
- 'weight_decay': 0.0
- 'adam_beta1': 0.9
- 'adam_beta2': 0.999
- 'adam_epsilon': 1e-08
- 'max_grad_norm': 1.0
## Eval results
Testing on 1802 examples from test set with 'sklearn.metrics':
| [
"# sd-panelization",
"## Model description\n\nThis model is a RoBERTa base model that was further trained using a masked language modeling task on a compendium of english scientific textual examples from the life sciences using the BioLang dataset. It was then fine-tuned for token classification on the SourceData sd-nlp dataset with the 'PANELIZATION' task to perform 'parsing' or 'segmentation' of figure legends into fragments corresponding to sub-panels.\n\nFigures are usually composite representations of results obtained with heterogeneous experimental approaches and systems. Breaking figures into panels allows identifying more coherent descriptions of individual scientific experiments.",
"## Intended uses & limitations",
"#### How to use\n\nThe intended use of this model is for 'parsing' figure legends into sub-fragments corresponding to individual panels as used in SourceData annotations (URL). \n\nTo have a quick check of the model:",
"#### Limitations and bias\n\nThe model must be used with the 'roberta-base' tokenizer.",
"## Training data\n\nThe model was trained for token classification using the 'EMBO/sd-nlp PANELIZATION' dataset which includes manually annotated examples.",
"## Training procedure\n\nThe training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs.\n\nTraining code is available at URL\n\n- Model fine-tuned: EMBO/bio-lm\n- Tokenizer vocab size: 50265\n- Training data: EMBO/sd-nlp\n- Dataset configuration: PANELIZATION\n- TTraining with 2175 examples. \n- Evaluating on 622 examples. \n- Training on 2 features: 'O', 'B-PANEL_START'\n- Epochs: 1.3\n- 'per_device_train_batch_size': 16\n- 'per_device_eval_batch_size': 16\n- 'learning_rate': 0.0001\n- 'weight_decay': 0.0\n- 'adam_beta1': 0.9\n- 'adam_beta2': 0.999\n- 'adam_epsilon': 1e-08\n- 'max_grad_norm': 1.0",
"## Eval results\n\nTesting on 1802 examples from test set with 'sklearn.metrics':"
] | [
"TAGS\n#transformers #pytorch #jax #roberta #token-classification #dataset-EMBO/sd-nlp #license-agpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# sd-panelization",
"## Model description\n\nThis model is a RoBERTa base model that was further trained using a masked language modeling task on a compendium of english scientific textual examples from the life sciences using the BioLang dataset. It was then fine-tuned for token classification on the SourceData sd-nlp dataset with the 'PANELIZATION' task to perform 'parsing' or 'segmentation' of figure legends into fragments corresponding to sub-panels.\n\nFigures are usually composite representations of results obtained with heterogeneous experimental approaches and systems. Breaking figures into panels allows identifying more coherent descriptions of individual scientific experiments.",
"## Intended uses & limitations",
"#### How to use\n\nThe intended use of this model is for 'parsing' figure legends into sub-fragments corresponding to individual panels as used in SourceData annotations (URL). \n\nTo have a quick check of the model:",
"#### Limitations and bias\n\nThe model must be used with the 'roberta-base' tokenizer.",
"## Training data\n\nThe model was trained for token classification using the 'EMBO/sd-nlp PANELIZATION' dataset which includes manually annotated examples.",
"## Training procedure\n\nThe training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs.\n\nTraining code is available at URL\n\n- Model fine-tuned: EMBO/bio-lm\n- Tokenizer vocab size: 50265\n- Training data: EMBO/sd-nlp\n- Dataset configuration: PANELIZATION\n- TTraining with 2175 examples. \n- Evaluating on 622 examples. \n- Training on 2 features: 'O', 'B-PANEL_START'\n- Epochs: 1.3\n- 'per_device_train_batch_size': 16\n- 'per_device_eval_batch_size': 16\n- 'learning_rate': 0.0001\n- 'weight_decay': 0.0\n- 'adam_beta1': 0.9\n- 'adam_beta2': 0.999\n- 'adam_epsilon': 1e-08\n- 'max_grad_norm': 1.0",
"## Eval results\n\nTesting on 1802 examples from test set with 'sklearn.metrics':"
] |
text-generation | transformers |
# Game of Thrones DialoGPT Model | {"tags": ["conversational"]} | ESPersonnel/DialoGPT-small-got | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Game of Thrones DialoGPT Model | [
"# Game of Thrones DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Game of Thrones DialoGPT Model"
] |