---
language: ary
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Moroccan Arabic dialect by Boumehdi
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test WER
type: wer
value: 0.09
---
# Wav2Vec2-Large-XLSR-53-Moroccan-Darija
**wav2vec2-large-xlsr-53** fine-tuned on 120 hours of labeled Darija audio
## Usage
The model can be used directly as follows:
```python
import librosa
import torch
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2ForCTC, Wav2Vec2Processor

# load the tokenizer (vocab.json ships with this repository), processor and model
tokenizer = Wav2Vec2CTCTokenizer("./vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
processor = Wav2Vec2Processor.from_pretrained('boumehdi/wav2vec2-large-xlsr-moroccan-darija', tokenizer=tokenizer)
model = Wav2Vec2ForCTC.from_pretrained('boumehdi/wav2vec2-large-xlsr-moroccan-darija')

# load the audio data at 16 kHz (use your own wav file here!)
input_audio, sr = librosa.load('file.wav', sr=16000)

# extract input features
input_values = processor(input_audio, sampling_rate=sr, return_tensors="pt", padding=True).input_values

# retrieve logits (no gradients needed for inference)
with torch.no_grad():
    logits = model(input_values).logits

# greedy (argmax) CTC decoding
tokens = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(tokens)

# print the output
print(transcription)
```
Here's the output: قالت ليا هاد السيد هادا ما كاينش بحالو
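The WER figures reported below are the word-level edit distance between the reference transcript and the model output, divided by the number of reference words. A minimal self-contained sketch of that metric (the example sentences are hypothetical):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # dynamic-programming table of edit distances between word prefixes
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # only deletions
    for j in range(len(h) + 1):
        d[0][j] = j  # only insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(r)][len(h)] / len(r)

# one deleted word out of six reference words -> WER = 1/6
print(round(wer("the cat sat on the mat", "the cat sat on mat"), 4))
```

In practice a library such as `jiwer` is commonly used for this, but the computation above is all there is to the number.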
## Evaluation & Previous works
====================================
-v5 (in progress): might add some common French loanwords (mais, alors, donc, ...)
Further improvement will be hard from here on my Nvidia GTX 1070 Ti, which has only 8 GB of VRAM :(
====================================
-v4
**WER**: 0.09
**Training Loss**: 13.59
**Validation Loss**: 0.07
Fixed a problem when handling the letters أ, ى, and إ.
====================================
-v3 (fine-tuned on 11 hours of audio + changed hyperparameters + discovered a huge mistake in handling the letter 'ا' that improved the WER dramatically)
**WER**: 22.86
**Training Loss**: 12.09
**Validation Loss**: 33.04
The validation loss goes down as we add more training data.
Training further to reduce the training loss makes the model overfit slightly.
====================================
-v2 (fine-tuned on 9 hours of audio + replaced أ, ى, and إ with ا, since they caused many problems + tried to standardize Moroccan Darija)
**WER**: 44.30
**Training Loss**: 12.99
**Validation Loss**: 36.93
The validation loss decreased in this version, which means the model generalizes better to unseen data than the previous version.
It is still high, partly because the validation data contains words that never appear in the training data; the fix is more data and more hours of training.
Training further to reduce the training loss makes the model overfit slightly.
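The character standardization described for v2 can be sketched as a simple character mapping (the function name is hypothetical; only the substitutions stated above are implemented):

```python
def normalize_alef(text: str) -> str:
    """Map the alef variants أ and إ, and the letter ى, to bare alef ا,
    as described for v2 of this model."""
    for ch in "أإى":
        text = text.replace(ch, "ا")
    return text

print(normalize_alef("أنا"))  # -> انا
```

Applying the same mapping to both training transcripts and references keeps the WER computation consistent with the model's vocabulary.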
====================================
-v1 (fine-tuned on 6 hours of audio)
**WER**: 49.68
**Training Loss**: 9.88
**Validation Loss**: 45.24
====================================
## Future Work
I am currently working on improving this model.
email: [email protected]