xiaoming-leza committed
Commit cff9c11
1 Parent(s): 37ce775

update model card README.md

Files changed (1)
  1. README.md +22 -26
README.md CHANGED
@@ -1,11 +1,7 @@
 ---
-language:
-- tr
 license: apache-2.0
 base_model: facebook/wav2vec2-large-xlsr-53
 tags:
-- automatic-speech-recognition
-- common_voice
 - generated_from_trainer
 datasets:
 - common_voice
@@ -18,15 +14,15 @@ model-index:
       name: Automatic Speech Recognition
       type: automatic-speech-recognition
     dataset:
-      name: COMMON_VOICE - TR
+      name: common_voice
       type: common_voice
       config: tr
       split: test
-      args: 'Config: tr, Training split: train+validation, Eval split: test'
+      args: tr
     metrics:
     - name: Wer
       type: wer
-      value: 0.34950464712491064
+      value: 0.3454192625880911
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -34,10 +30,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # wav2vec2-common_voice-tr-demo
 
-This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
+This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3828
-- Wer: 0.3495
+- Loss: 0.3714
+- Wer: 0.3454
 
 ## Model description
 
@@ -71,22 +67,22 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| No log | 0.92 | 100 | 3.5826 | 1.0 |
-| No log | 1.83 | 200 | 3.0218 | 0.9999 |
-| No log | 2.75 | 300 | 0.8985 | 0.8036 |
-| No log | 3.67 | 400 | 0.5992 | 0.6197 |
-| 3.1629 | 4.59 | 500 | 0.4968 | 0.5340 |
-| 3.1629 | 5.5 | 600 | 0.4646 | 0.5045 |
-| 3.1629 | 6.42 | 700 | 0.4316 | 0.4425 |
-| 3.1629 | 7.34 | 800 | 0.4500 | 0.4735 |
-| 3.1629 | 8.26 | 900 | 0.4114 | 0.4123 |
-| 0.2226 | 9.17 | 1000 | 0.4162 | 0.4019 |
-| 0.2226 | 10.09 | 1100 | 0.3999 | 0.3824 |
-| 0.2226 | 11.01 | 1200 | 0.4048 | 0.3842 |
-| 0.2226 | 11.93 | 1300 | 0.3789 | 0.3602 |
-| 0.2226 | 12.84 | 1400 | 0.4024 | 0.3536 |
-| 0.1015 | 13.76 | 1500 | 0.3899 | 0.3575 |
-| 0.1015 | 14.68 | 1600 | 0.3802 | 0.3490 |
+| No log | 0.92 | 100 | 3.5988 | 1.0 |
+| No log | 1.83 | 200 | 3.0083 | 0.9999 |
+| No log | 2.75 | 300 | 0.8642 | 0.7579 |
+| No log | 3.67 | 400 | 0.5713 | 0.6203 |
+| 3.14 | 4.59 | 500 | 0.4795 | 0.5338 |
+| 3.14 | 5.5 | 600 | 0.4441 | 0.4912 |
+| 3.14 | 6.42 | 700 | 0.4241 | 0.4521 |
+| 3.14 | 7.34 | 800 | 0.4326 | 0.4611 |
+| 3.14 | 8.26 | 900 | 0.3913 | 0.4212 |
+| 0.2183 | 9.17 | 1000 | 0.4036 | 0.3973 |
+| 0.2183 | 10.09 | 1100 | 0.4035 | 0.3959 |
+| 0.2183 | 11.01 | 1200 | 0.3807 | 0.3790 |
+| 0.2183 | 11.93 | 1300 | 0.3750 | 0.3650 |
+| 0.2183 | 12.84 | 1400 | 0.3822 | 0.3573 |
+| 0.1011 | 13.76 | 1500 | 0.3747 | 0.3510 |
+| 0.1011 | 14.68 | 1600 | 0.3714 | 0.3454 |
 
 
 ### Framework versions
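The Wer column updated throughout this diff is the word error rate: the word-level edit distance between the reference transcript and the model's hypothesis, divided by the number of reference words. The card's values come from the Trainer's WER metric; purely as an illustration of what the number means (a hand-rolled sketch, not the metric implementation used here), WER can be computed with a standard Levenshtein dynamic program over words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            dp[i][j] = min(
                dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # match / substitution
                dp[i - 1][j] + 1,                                # deletion
                dp[i][j - 1] + 1,                                # insertion
            )
    return dp[-1][-1] / len(ref)

# Hypothetical Turkish example: one substituted word out of four -> WER 0.25
print(wer("bir iki üç dört", "bir iki üc dört"))
```

A Wer of 0.3454, as in the updated evaluation result, therefore means roughly one in three reference words would need to be edited to recover the model output.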