This model is a fine-tuned version of imvladikon/whisper-medium-he; the training dataset is not specified in the card. Its evaluation results are summarized in the training results table below.

Model description: More information needed.

Intended uses & limitations: More information needed.

Training and evaluation data: More information needed.
Training results:
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 0.0983 | 0.1  | 1000 | 0.3072 | 16.4362 |
| 0.1219 | 0.2  | 2000 | 0.2923 | 15.6642 |
| 0.134  | 0.3  | 3000 | 0.2345 | 13.7450 |
| 0.2113 | 0.39 | 4000 | 0.2061 | 13.4020 |
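The Wer column above is the word error rate in percent. In practice it is computed with a library such as jiwer or evaluate; as a minimal self-contained sketch, it is the word-level edit distance divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Standard dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("a b c d", "a x c")` counts one substitution and one deletion over four reference words, giving 50.0.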
```python
from transformers import pipeline

# device_map="auto" requires `pip install accelerate`
pipe = pipeline(
    "automatic-speech-recognition",
    model="imvladikon/whisper-medium-he",
    device_map="auto",
)

print(pipe("sample.mp3")["text"])
```
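Whisper models expect 16 kHz mono input; the transformers pipeline decodes and resamples files via ffmpeg, so passing an mp3 path works directly. If you feed raw sample arrays instead, you must resample them yourself first. As an illustration of what that step does (the function name `resample_linear` is hypothetical, not a library API; real code would use torchaudio or librosa):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naively resample a mono signal by linear interpolation."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate      # fractional index into the source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```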
A converted ggml model (for whisper.cpp) is already available: https://huggingface.co/imvladikon/whisper-medium-he/blob/main/ggml-hebrew.bin

If you need to convert it yourself:
```shell
git clone https://github.com/openai/whisper
git clone https://github.com/ggerganov/whisper.cpp
git clone https://huggingface.co/imvladikon/whisper-medium-he

python3 ./whisper.cpp/models/convert-h5-to-ggml.py ./whisper-medium-he/ ./whisper .
```
Then you can run a quick check (assuming the produced model is `ggml-model.bin`):

```shell
cd whisper.cpp && ./main -m ../ggml-model.bin -f ../sample.wav
```
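Note that whisper.cpp expects 16-bit, 16 kHz mono WAV input, so an mp3 usually needs converting first (e.g. `ffmpeg -i sample.mp3 -ar 16000 -ac 1 sample.wav`). To sanity-check the binary without real speech, a sketch that writes a compliant one-second test tone using only Python's standard library:

```python
import math
import struct
import wave

def write_test_wav(path, rate=16000, seconds=1.0, freq=440.0):
    """Write a 16-bit mono sine tone in the format whisper.cpp expects."""
    n = int(rate * seconds)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)        # mono
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(rate)     # 16 kHz
        frames = b"".join(
            struct.pack("<h", int(32767 * 0.3 * math.sin(2 * math.pi * freq * i / rate)))
            for i in range(n)
        )
        w.writeframes(frames)

write_test_wav("sample.wav")
```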