## Model description

- The main Whisper Small Hugging Face page: [Hugging Face - Whisper Small](https://huggingface.co/openai/whisper-small)

## Intended uses & limitations

- For experimentation and curiosity; a minimal inference sketch follows this list.
- Based on the [Whisper paper](https://arxiv.org/abs/2212.04356) and [Benchmarking OpenAI Whisper for non-English ASR - Dan Shafer](https://blog.deepgram.com/benchmarking-openai-whisper-for-non-english-asr/), there is a performance bias towards certain languages and curated datasets.
- From the Whisper paper, am_et is a low-resource language (Table E), with WER results ranging from 120-229 depending on model size. Whisper Small reaches WER=120.2, indicating that more training time may improve the fine-tuning.

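A minimal inference sketch, assuming `transformers` is installed with ffmpeg available for audio decoding; the audio filename is a placeholder:

```python
# Minimal sketch: transcribe a local Amharic recording with the fine-tuned checkpoint.
# "amharic_sample.wav" is a placeholder path to a short 16 kHz mono recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="drmeeseeks/whisper-small-am_et",  # the repository this card belongs to
)

result = asr("amharic_sample.wav", chunk_length_s=30)
print(result["text"])
```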
## Training and evaluation data

- This model was trained/evaluated on data from google/fleurs: [google/fleurs - HuggingFace Datasets](https://huggingface.co/datasets/google/fleurs). A loading sketch follows this list.

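A minimal loading sketch, assuming `datasets` is installed with audio support (`soundfile`/`librosa`); column names follow the public FLEURS schema:

```python
# Stream the Amharic (am_et) configuration of google/fleurs, as used for this fine-tune.
from datasets import load_dataset

fleurs_train = load_dataset("google/fleurs", "am_et", split="train", streaming=True)
fleurs_test = load_dataset("google/fleurs", "am_et", split="test", streaming=True)

# Each example carries 16 kHz audio plus a "transcription" reference text.
sample = next(iter(fleurs_train))
print(sample["transcription"])
print(sample["audio"]["sampling_rate"])
```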
## Training procedure

- Training was done on Lambda Cloud A100/40GB GPUs, provided through the OpenAI community event [Whisper Fine Tuning Event - Dec 2022](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#fine-tune-whisper). Training used [HuggingFace Community Events - Whisper - run_speech_recognition_seq2seq_streaming.py](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_speech_recognition_seq2seq_streaming.py) together with the included [whisper_python_am_et.ipynb](https://huggingface.co/drmeeseeks/whisper-small-am_et/blob/main/am_et_fine_tune_whisper_streaming_colab_RUNNING-evalerrir.ipynb) to set up the Lambda Cloud GPU/Colab environment. For Colab, reduce the train batch size to the amount recommended in the [Whisper Fine Tuning Event - Dec 2022](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#fine-tune-whisper) instructions, since the T4 GPUs have 16GB of memory. The notebook sets up the environment, logs into your Hugging Face account, and generates a bash script, `run.sh`; that script is then run from the terminal with `bash run.sh` to train, as described on the Whisper community events GitHub page. A sketch of this generation step follows below.

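A hypothetical sketch of that generation step; the flag names follow the example command in the community-events fine-tuning README, and the values are placeholders rather than the exact settings used for this model:

```python
# Hypothetical sketch of generating run.sh; flag names follow the community-events
# example, and the values are illustrative placeholders, not this model's settings.
run_sh = """\
python run_speech_recognition_seq2seq_streaming.py \\
    --model_name_or_path="openai/whisper-small" \\
    --dataset_name="google/fleurs" \\
    --dataset_config_name="am_et" \\
    --language="amharic" \\
    --max_steps="2000" \\
    --per_device_train_batch_size="16" \\
    --output_dir="./whisper-small-am_et" \\
    --do_train \\
    --do_eval
"""

with open("run.sh", "w") as f:
    f.write(run_sh)

# Then, from a terminal: bash run.sh
```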
### Training hyperparameters

The following hyperparameters were used during training:

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer      |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0001        | 1900.0 | 1900 | 6.8758          | 103.0822 |
| 0.0001        | 2000.0 | 2000 | 6.8839          | 103.0822 |

### Recommendations

Limit training duration for smaller datasets to roughly 2000-3000 steps to avoid overfitting. 5000 steps with [HuggingFace - Whisper Small](https://huggingface.co/openai/whisper-small) takes ~5 hrs on A100 GPUs (about 1 hr per 1000 steps).

Training encountered `RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1`, which is related to [Trainer RuntimeError](https://discuss.huggingface.co/t/trainer-runtimeerror-the-size-of-tensor-a-462-must-match-the-size-of-tensor-b-448-at-non-singleton-dimension-1/26010), as some language datasets have inputs of non-standard lengths. That thread did not resolve the issue, which is also reported elsewhere: [Training languagemodel - RuntimeError: the expanded size of the tensor (100) must match the existing size (64) at non-singleton dimension 1](https://hungsblog.de/en/technology/troubleshooting/training-languagemodel-runtimeerror-the-expanded-size-of-the-tensor-100-must-match-the-existing-size-64-at-non-singleton-dimension-1/). To circumvent the issue, the `run.sh` parameters were adjusted. Then run `python run_eval_whisper_streaming.py --model_id="openai/whisper-small" --dataset="google/fleurs" --config="am_et" --batch_size=32 --max_eval_samples=64 --device=0 --language="am"` to compute the WER score manually; otherwise, erroring out during evaluation prevents the trained model from being pushed to Hugging Face. A short sketch of the WER metric itself is given at the end of this section.

Based on the [Whisper paper](https://arxiv.org/abs/2212.04356) and [Benchmarking OpenAI Whisper for non-English ASR - Dan Shafer](https://blog.deepgram.com/benchmarking-openai-whisper-for-non-english-asr/), there is a performance bias towards certain languages and curated datasets. The OpenAI fine-tuning community event provided ample _free_ GPU time to help develop the model further and improve WER scores.

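A small sketch of the metric behind those WER numbers, using the `evaluate` library; the two string lists are made-up stand-ins for model transcriptions and FLEURS references:

```python
# Word error rate as reported on this card: evaluate returns a fraction,
# and the card reports it multiplied by 100.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["selam how are you today"]        # made-up model output
references = ["selam how are you doing today"]   # made-up reference transcription

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```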
### Environmental Impact

Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). In total, roughly 100 hours of compute were used, primarily in US East/Asia Pacific (80%/20%), with AWS as the reference provider. Additional resources are available at [Our World in Data - CO2 Emissions](https://ourworldindata.org/co2-emissions). A rough sketch of the underlying arithmetic follows the list below.

- __Hardware Type__: AMD EPYC 7J13 64-Core Processor (30-core VM), 197GB RAM, with NVIDIA A100-SXM 40GB
- __Hours Used__: 100 hrs
- __Cloud Provider__: Lambda Cloud GPU
- __Compute Region__: US East/Asia Pacific
- __Carbon Emitted__: 12 kg (GPU) + 13 kg (CPU) = 25 kg (roughly the weight of 6.6 gallons of water)

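The calculator's estimate reduces to energy use times regional carbon intensity. A rough sketch of that arithmetic, with assumed device power draws and an assumed grid intensity (illustrative guesses, not the exact inputs behind the figures above):

```python
# ML CO2 impact arithmetic: emissions ~ power (kW) x time (h) x grid intensity (kg CO2eq/kWh).
# The power draws and grid intensity below are assumptions chosen for illustration only.
HOURS = 100                # total compute time reported above
GPU_POWER_KW = 0.40        # assumed A100-SXM board power
CPU_POWER_KW = 0.43        # assumed draw for the 30-core EPYC VM share plus RAM
GRID_KG_PER_KWH = 0.30     # assumed blended US East / Asia Pacific intensity

gpu_kg = GPU_POWER_KW * HOURS * GRID_KG_PER_KWH
cpu_kg = CPU_POWER_KW * HOURS * GRID_KG_PER_KWH

# With these assumptions: GPU ~ 12 kg, CPU ~ 13 kg, total ~ 25 kg CO2eq,
# in line with the figures reported above.
print(f"GPU: {gpu_kg:.0f} kg, CPU: {cpu_kg:.0f} kg, total: {gpu_kg + cpu_kg:.0f} kg CO2eq")
```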
### Framework versions

- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2

### Citation

- [Whisper - GITHUB](https://github.com/openai/whisper)
- [Whisper - OpenAI - BLOG](https://openai.com/blog/whisper/)
- [Model Card - HuggingFace Hub - GITHUB](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md)

```bibtex
@misc{https://doi.org/10.48550/arxiv.2212.04356,
  doi = {10.48550/ARXIV.2212.04356},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  keywords = {Audio and Speech Processing (eess.AS), Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), FOS: Electrical engineering, electronic engineering, information engineering, FOS: Computer and information sciences},
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}

@article{owidco2andothergreenhousegasemissions,
  author = {Hannah Ritchie and Max Roser and Pablo Rosado},
  title = {CO₂ and Greenhouse Gas Emissions},
  journal = {Our World in Data},
  year = {2020},
  note = {https://ourworldindata.org/co2-and-other-greenhouse-gas-emissions}
}
```