ndeclarke committed
Commit c07a01c
1 Parent(s): 5f7152a

End of training

Files changed (2)
  1. README.md +32 -32
  2. adapter.yor.safetensors +3 -0
README.md CHANGED
@@ -1,33 +1,33 @@
 ---
+library_name: transformers
+license: cc-by-nc-4.0
 base_model: facebook/mms-1b-all
+tags:
+- generated_from_trainer
 datasets:
 - common_voice_17_0
-library_name: transformers
-license: cc-by-nc-4.0
 metrics:
 - wer
 - bleu
-tags:
-- generated_from_trainer
 model-index:
 - name: wav2vec2-mms-1b-CV17.0
   results:
   - task:
-      type: automatic-speech-recognition
       name: Automatic Speech Recognition
+      type: automatic-speech-recognition
     dataset:
       name: common_voice_17_0
       type: common_voice_17_0
-      config: ta
-      split: test[:5%]+test[20%:25%]+test[60%:65%]+test[90%:]
-      args: ta
+      config: yo
+      split: test
+      args: yo
     metrics:
-    - type: wer
-      value: 0.39458150446496143
-      name: Wer
-    - type: bleu
-      value: 0.3894086227361947
-      name: Bleu
+    - name: Wer
+      type: wer
+      value: 0.6538388264431321
+    - name: Bleu
+      type: bleu
+      value: 0.14202013774436864
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -37,10 +37,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the common_voice_17_0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2560
-- Wer: 0.3946
-- Cer: 0.0667
-- Bleu: 0.3894
+- Loss: 0.6919
+- Wer: 0.6538
+- Cer: 0.2510
+- Bleu: 0.1420
 
 ## Model description
 
@@ -59,7 +59,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0001
+- learning_rate: 0.001
 - train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
@@ -68,23 +68,23 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.15
-- training_steps: 5000
+- training_steps: 2000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Bleu |
-|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
-| 5.2362 | 0.3509 | 500 | 0.3039 | 0.4233 | 0.0716 | 0.3579 |
-| 0.2003 | 0.7018 | 1000 | 0.2726 | 0.4072 | 0.0694 | 0.3806 |
-| 0.1821 | 1.0526 | 1500 | 0.2632 | 0.3993 | 0.0677 | 0.3890 |
-| 0.1792 | 1.4035 | 2000 | 0.2602 | 0.4011 | 0.0677 | 0.3850 |
-| 0.1764 | 1.7544 | 2500 | 0.2572 | 0.3984 | 0.0674 | 0.3850 |
-| 0.1771 | 2.1053 | 3000 | 0.2570 | 0.3955 | 0.0672 | 0.3918 |
-| 0.1777 | 2.4561 | 3500 | 0.2562 | 0.3970 | 0.0670 | 0.3866 |
-| 0.1721 | 2.8070 | 4000 | 0.2547 | 0.3975 | 0.0669 | 0.3841 |
-| 0.177 | 3.1579 | 4500 | 0.2560 | 0.3949 | 0.0668 | 0.3910 |
-| 0.1765 | 3.5088 | 5000 | 0.2560 | 0.3946 | 0.0667 | 0.3894 |
+| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Bleu |
+|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|:------:|
+| 6.2938 | 3.0769 | 200 | 3.8350 | 0.9981 | 0.9092 | 0.0 |
+| 2.0522 | 6.1538 | 400 | 0.7219 | 0.6997 | 0.2730 | 0.1116 |
+| 0.7043 | 9.2308 | 600 | 0.7137 | 0.7419 | 0.2682 | 0.0933 |
+| 0.6497 | 12.3077 | 800 | 0.6962 | 0.6664 | 0.2667 | 0.1318 |
+| 0.614 | 15.3846 | 1000 | 0.6680 | 0.6586 | 0.2596 | 0.1356 |
+| 0.5794 | 18.4615 | 1200 | 0.6798 | 0.6722 | 0.2599 | 0.1254 |
+| 0.5439 | 21.5385 | 1400 | 0.6724 | 0.6665 | 0.2541 | 0.1287 |
+| 0.5146 | 24.6154 | 1600 | 0.6906 | 0.6704 | 0.2513 | 0.1327 |
+| 0.489 | 27.6923 | 1800 | 0.6886 | 0.6599 | 0.2509 | 0.1390 |
+| 0.4668 | 30.7692 | 2000 | 0.6919 | 0.6538 | 0.2510 | 0.1420 |
 
 
 ### Framework versions
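For anyone wanting to sanity-check the new Yoruba numbers in the updated card, the sketch below re-runs the evaluation with `datasets`, `transformers`, and `evaluate`. It is a minimal sketch, not the author's script: the repo id `ndeclarke/wav2vec2-mms-1b-CV17.0`, the `target_lang="yor"` handle, and the absence of extra text normalization are assumptions, so the reported WER/CER/BLEU may not reproduce exactly. Common Voice 17.0 is gated, so accept its terms and log in to the Hub first.

```python
# Hedged sketch: re-running the Yoruba evaluation described in the card.
# Assumed (not stated in the diff): repo id, target_lang handle, no extra
# text normalization of references or predictions.
import torch
import evaluate
from datasets import load_dataset, Audio
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "ndeclarke/wav2vec2-mms-1b-CV17.0"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id, target_lang="yor")
model = Wav2Vec2ForCTC.from_pretrained(
    model_id, target_lang="yor", ignore_mismatched_sizes=True
).eval()

# Common Voice 17.0 Yoruba test split, resampled to the 16 kHz the model expects.
ds = load_dataset("mozilla-foundation/common_voice_17_0", "yo", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

predictions, references = [], []
for sample in ds:
    inputs = processor(
        sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt"
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    pred_ids = torch.argmax(logits, dim=-1)
    predictions.append(processor.batch_decode(pred_ids)[0])
    references.append(sample["sentence"])

wer = evaluate.load("wer").compute(predictions=predictions, references=references)
cer = evaluate.load("cer").compute(predictions=predictions, references=references)
bleu = evaluate.load("bleu").compute(
    predictions=predictions, references=[[r] for r in references]
)["bleu"]
print(f"WER={wer:.4f} CER={cer:.4f} BLEU={bleu:.4f}")
```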
adapter.yor.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c5295039797811034e2a43418c20047871eaf4e450bfc8c506413bab67a58c3e
+size 8875400
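The added `adapter.yor.safetensors` is a Git LFS pointer to the roughly 8.9 MB Yoruba adapter weights. Below is a minimal sketch of the standard MMS adapter-swapping pattern from the transformers docs (`set_target_lang` / `load_adapter`) applied to this checkpoint; the repo id is assumed from the card name, and the input is assumed to be 16 kHz mono audio.

```python
# Hedged sketch: switching to the newly added Yoruba adapter without
# reloading the whole 1B-parameter model. Repo id below is assumed.
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "ndeclarke/wav2vec2-mms-1b-CV17.0"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Point the tokenizer vocabulary and the adapter weights at Yoruba;
# load_adapter() fetches adapter.yor.safetensors from the repo.
processor.tokenizer.set_target_lang("yor")
model.load_adapter("yor")

def transcribe(waveform):
    # waveform: 1-D float array of 16 kHz mono audio (e.g. from soundfile).
    inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return processor.batch_decode(torch.argmax(logits, dim=-1))[0]
```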