EzraWilliam committed
Commit e3633e4
1 Parent(s): f7447ad

End of training

README.md CHANGED
@@ -1,8 +1,8 @@
 ---
 license: apache-2.0
+base_model: facebook/wav2vec2-large-xlsr-53
 tags:
 - generated_from_trainer
-base_model: facebook/wav2vec2-large-xlsr-53
 datasets:
 - common_voice_13_0
 metrics:
@@ -11,8 +11,8 @@ model-index:
 - name: wav2vec2-xlsr-53-CV-demo-google-colab-Ezra_William_Prod19
   results:
   - task:
-      type: automatic-speech-recognition
       name: Automatic Speech Recognition
+      type: automatic-speech-recognition
     dataset:
       name: common_voice_13_0
       type: common_voice_13_0
@@ -20,9 +20,9 @@ model-index:
       split: test
       args: id
     metrics:
-    - type: wer
-      value: 0.45612094395280234
-      name: Wer
+    - name: Wer
+      type: wer
+      value: 0.4921183628318584
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_13_0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4521
-- Wer: 0.4561
+- Loss: 0.4839
+- Wer: 0.4921
 
 ## Model description
 
@@ -65,12 +65,12 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| 2.9484 | 1.0 | 278 | 2.9258 | 1.0 |
-| 2.8765 | 2.0 | 556 | 2.8077 | 1.0 |
-| 1.3613 | 3.0 | 834 | 0.7209 | 0.6519 |
-| 0.7881 | 4.0 | 1112 | 0.5165 | 0.5055 |
-| 0.7009 | 5.0 | 1390 | 0.4753 | 0.4754 |
-| 0.603 | 6.0 | 1668 | 0.4521 | 0.4561 |
+| 2.9477 | 1.0 | 278 | 2.9253 | 1.0 |
+| 2.877 | 2.0 | 556 | 2.8306 | 1.0 |
+| 1.6537 | 3.0 | 834 | 0.8538 | 0.7288 |
+| 0.8469 | 4.0 | 1112 | 0.5611 | 0.5508 |
+| 0.7422 | 5.0 | 1390 | 0.5133 | 0.5045 |
+| 0.6389 | 6.0 | 1668 | 0.4839 | 0.4921 |
 
 
 ### Framework versions
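
The card's headline change is the evaluation result: the reported metrics move from Loss 0.4521 / WER 0.4561 to Loss 0.4839 / WER 0.4921 on the Common Voice 13.0 test split (args: id). The sketch below is a minimal, hedged illustration of how a WER figure like this is computed for a single clip; it assumes the checkpoint is published under the repo id `EzraWilliam/wav2vec2-xlsr-53-CV-demo-google-colab-Ezra_William_Prod19` (inferred from the card's model name), and the audio path and reference transcript are placeholders, not data from this commit.

```python
# Minimal sketch (not part of the commit): transcribe one clip with the
# fine-tuned checkpoint and score it with WER, the metric reported in the card.
# Assumptions: the repo id below (inferred from the card's model name) and the
# placeholder audio file / reference transcript.
import evaluate
from transformers import pipeline

repo_id = "EzraWilliam/wav2vec2-xlsr-53-CV-demo-google-colab-Ezra_William_Prod19"  # assumed
asr = pipeline("automatic-speech-recognition", model=repo_id)

prediction = asr("clip_16khz.wav")["text"]   # placeholder 16 kHz mono WAV
reference = "teks transkrip acuan"           # placeholder ground-truth text

wer = evaluate.load("wer")
print(wer.compute(predictions=[prediction], references=[reference]))
```

The 0.4921 in the card is the same metric computed by the Trainer over the whole evaluation split rather than a single clip.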
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:90040f72160489af4bf5081ad58eaf312631503e9349c70907817a3f08ca92d8
+oid sha256:91bdf5480c6429230ffe9522503913107cff10a55febd83ace9bd9404da4f946
 size 1261991980
runs/Apr29_14-01-29_c33a467ec0ec/events.out.tfevents.1714399400.c33a467ec0ec.427.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1e24ffe37a6c6df8e6884d575942a3ac6c6dc897d01fba5d01bab9c5bfe236cc
-size 11215
+oid sha256:e9ee546c46590867899191a63a3c3bb6dc36562aa2fa5a159e63bbbe553c788b
+size 12098
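
The runs/Apr29_14-01-29_c33a467ec0ec event file updated above is a TensorBoard log refreshed at the end of training. A minimal sketch of inspecting it locally is shown below; it assumes the repository has been cloned with git-lfs so the pointer resolves to the real file, and the scalar tag name "eval/wer" is an assumption based on the Trainer's usual logging convention, not something read from this commit.

```python
# Minimal sketch (not part of the commit): read scalars from the updated
# TensorBoard event file. Assumes a local git-lfs checkout of the repo and
# that training scalars were logged under tags such as "eval/wer" (assumed).
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

run_dir = "runs/Apr29_14-01-29_c33a467ec0ec"  # run directory from this commit
ea = EventAccumulator(run_dir)
ea.Reload()

print(ea.Tags()["scalars"])             # list the scalar tags actually present
for event in ea.Scalars("eval/wer"):    # assumed tag name
    print(event.step, event.value)
```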