nbaden committed
Commit 8945a25
1 Parent(s): 06689af

End of training

Files changed (1): README.md +29 -3
README.md CHANGED
@@ -5,9 +5,24 @@ tags:
- generated_from_trainer
datasets:
- common_voice_13_0
+ metrics:
+ - wer
model-index:
- name: wav2vec2-large-xlsr-53-demo-colab
-   results: []
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: common_voice_13_0
+       type: common_voice_13_0
+       config: sah
+       split: test
+       args: sah
+     metrics:
+     - name: Wer
+       type: wer
+       value: 0.8174979851347721
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,6 +31,9 @@ should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_13_0 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.9038
+ - Wer: 0.8175

## Model description

@@ -41,12 +59,20 @@ The following hyperparameters were used during training:
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: constant
- - lr_scheduler_warmup_steps: 500
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 50
- num_epochs: 20

### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Wer    |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|
+ | 3.0574        | 3.33  | 50   | 2.9873          | 1.0    |
+ | 2.8631        | 6.67  | 100  | 2.7208          | 1.0    |
+ | 2.0621        | 10.0  | 150  | 1.3181          | 1.0321 |
+ | 1.0109        | 13.33 | 200  | 0.9915          | 0.8863 |
+ | 0.6483        | 16.67 | 250  | 0.9177          | 0.8362 |
+ | 0.5217        | 20.0  | 300  | 0.9038          | 0.8175 |


### Framework versions
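
For context on the hyperparameters changed in this commit, here is a minimal sketch of how they could be expressed as Transformers `TrainingArguments`. The per-device batch size of 16 is inferred from the listed total batch size of 32 with gradient accumulation of 2 on a single GPU; the learning rate is not shown in this diff, so it is left at the library default. Treat the block as an illustration under those assumptions, not the actual training script.

```python
from transformers import TrainingArguments

# Sketch of the card's hyperparameters; values not listed in the diff
# (e.g. learning_rate) are left at library defaults.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-53-demo-colab",  # hypothetical output path
    per_device_train_batch_size=16,  # 16 x 2 accumulation steps = total batch 32 (single GPU assumed)
    gradient_accumulation_steps=2,
    num_train_epochs=20,
    lr_scheduler_type="linear",      # changed from "constant" in this commit
    warmup_steps=50,                 # changed from 500 in this commit
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",     # assumption: periodic eval, matching the results table
    eval_steps=50,                   # assumption based on the 50-step spacing in the table
)
```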
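The card describes a CTC fine-tune of wav2vec2-large-xlsr-53 for automatic speech recognition, so a short transcription sketch may be useful. The repo id below is an assumption pieced together from the committer name and model name, and the audio path is a placeholder.

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Assumed Hub repo id (committer "nbaden" + model name); adjust to the real path.
model_id = "nbaden/wav2vec2-large-xlsr-53-demo-colab"

processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Any 16 kHz mono clip; "clip.wav" is a placeholder path.
speech, _ = librosa.load("clip.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```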
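The `Wer` values reported above (e.g. 0.8175 on the common_voice_13_0 `sah` test split) are word error rates. A minimal sketch with the `evaluate` library, using toy strings rather than dataset text, shows how such a score is computed.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Toy example only; the card's 0.8175 was computed on the real test split.
references = ["the cat sat on the mat"]
predictions = ["the cat sat mat"]

# 2 errors (1 deletion + 1 substitution mismatch) over 6 reference words -> ~0.33
print(wer_metric.compute(predictions=predictions, references=references))
```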