---
language: ary
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Moroccan Arabic dialect by Boumehdi
  results:
  - task: 
      name: Speech Recognition
      type: automatic-speech-recognition
    metrics:
       - name: Test WER
         type: wer
         value: 23.44
---
# Wav2Vec2-Large-XLSR-53-Moroccan-Darija

**wav2vec2-large-xlsr-53** fine-tuned on 10 hours (~70 speakers) of labeled Darija audio.

The vocabulary contains three additional phonetic units: ڭ, ڤ, and پ, as in ڭال, ڤيديو, and پودكاست.
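
As a quick sanity check (a sketch, assuming the tokenizer files can be loaded from the Hub), you can confirm that these letters are present in the model's vocabulary:

```python
from transformers import Wav2Vec2CTCTokenizer

# load the tokenizer shipped with this model repository
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained('boumehdi/wav2vec2-large-xlsr-moroccan-darija')
vocab = tokenizer.get_vocab()

for ch in ['ڭ', 'ڤ', 'پ']:
    print(ch, 'in vocab:', ch in vocab)
```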

## Usage

The model can be used directly as follows:

```python
import librosa
import torch
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2ForCTC, Wav2Vec2Processor

# vocab.json is available in this model's repository; place it next to this script
tokenizer = Wav2Vec2CTCTokenizer("./vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
processor = Wav2Vec2Processor.from_pretrained('boumehdi/wav2vec2-large-xlsr-moroccan-darija', tokenizer=tokenizer)
model = Wav2Vec2ForCTC.from_pretrained('boumehdi/wav2vec2-large-xlsr-moroccan-darija')

# load the audio at 16 kHz, the sampling rate the model expects (use your own wav file here!)
input_audio, sr = librosa.load('file.wav', sr=16000)

# extract input features
input_values = processor(input_audio, sampling_rate=sr, return_tensors="pt", padding=True).input_values

# retrieve logits
with torch.no_grad():
    logits = model(input_values).logits

# greedy CTC decoding: argmax over the vocabulary at each frame
tokens = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(tokens)

# print the output
print(transcription)
```

Here's the output: ڭالت ليا هاد السيد هادا ما كاينش بحالو 
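
For quick experiments, the higher-level `pipeline` API may also work (a sketch, assuming the checkpoint ships a complete processor and tokenizer configuration on the Hub):

```python
from transformers import pipeline

# one-call ASR helper; audio is resampled to 16 kHz internally
asr = pipeline("automatic-speech-recognition", model="boumehdi/wav2vec2-large-xlsr-moroccan-darija")
print(asr("file.wav"))  # {'text': '...'}
```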

## Evaluation & Previous Versions

### v3

Fine-tuned on 10 hours of audio with changed hyperparameters. Also fixed a significant mistake in the handling of the letter 'ا', which improved the WER dramatically.

**WER**: 23.44

**Training Loss**: 15.96

**Validation Loss**: 33.92

The validation loss goes down as more training data is added. Training further to reduce the training loss makes the model overfit slightly.

### v2

Fine-tuned on 9 hours of audio. Replaced أ, إ, and ى with ا, since these variants caused many problems, as part of an attempt to standardize Moroccan Darija spelling (a sketch of this normalization follows below).

**WER**: 44.30

**Training Loss**: 12.99

**Validation Loss**: 36.93

The validation loss decreased in this version, which means the model generalizes better to unseen data than the previous version. It is still high partly because the validation set contains words that were never seen during training; the remedy is more data and more hours of training. Training further to reduce the training loss makes the model overfit slightly.
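
A minimal sketch of the character normalization described above (the exact mapping used in training is an assumption based on this description):

```python
import re

# collapse hamza-carrying alif variants and alif maqsura to bare alif,
# as described for v2 (assumed mapping: أ, إ, ى -> ا)
def normalize_darija(text: str) -> str:
    return re.sub(r'[أإى]', 'ا', text)

print(normalize_darija('إلى أين'))  # -> 'الا اين'
```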

### v1

Fine-tuned on 6 hours of audio.

**WER**: 49.68

**Training Loss**: 9.88

**Validation Loss**: 45.24
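
The reported WER values can be reproduced with the `evaluate` library (a minimal sketch; the pairs below are placeholders, not the actual test set):

```python
import evaluate

# placeholder reference transcriptions and model predictions
references = ["ڭالت ليا هاد السيد هادا ما كاينش بحالو"]
predictions = ["ڭالت ليا هاد السيد هادا ما كاينش بحالو"]

wer = evaluate.load("wer")
print(wer.compute(references=references, predictions=predictions))  # 0.0 for identical strings
```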

## Future Work

I am currently working on improving this model.

email: [email protected]