aisuko committed
Commit b72b8dc
1 Parent(s): ded6c48

End of training

Files changed (3)
  1. README.md +13 -11
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -15,9 +15,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # ft-wav2vec2-with-minds-asr
 
-This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
+This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the PolyAI/minds14 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.9457
+- Loss: 22.6428
 - Wer: 1.0
 
 ## Model description
@@ -38,23 +38,25 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 2
+- gradient_accumulation_steps: 4
 - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 500
-- training_steps: 2000
+- lr_scheduler_warmup_steps: 40
+- training_steps: 80
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer |
-|:-------------:|:-----:|:----:|:---------------:|:---:|
-| 3.8743        | 400.0 | 1000 | 3.9457          | 1.0 |
-| 3.1116        | 800.0 | 2000 | 3.5981          | 1.0 |
+| Training Loss | Epoch | Step | Validation Loss | Wer    |
+|:-------------:|:-----:|:----:|:---------------:|:------:|
+| No log        | 1.6   | 20   | 48.7869         | 2.6249 |
+| 56.1386       | 3.2   | 40   | 31.6244         | 1.0221 |
+| 32.5496       | 4.8   | 60   | 23.6570         | 1.0    |
+| 32.5496       | 6.4   | 80   | 22.6428         | 1.0    |
 
 
 ### Framework versions
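
The hyperparameters added in this commit map one-to-one onto `transformers.TrainingArguments` fields. The snippet below is a minimal sketch, not code from the repository: `output_dir` and the evaluation cadence (`evaluation_strategy`, `eval_steps`) are assumptions, and only the values that appear in the diff (learning rate, batch sizes, gradient accumulation, warmup, steps, scheduler, AMP) come from the card itself.

```python
# Minimal sketch, assuming the card was produced by the transformers Trainer.
# Values taken from the diff are labeled; everything else is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ft-wav2vec2-with-minds-asr",  # assumed; matches the card title
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # 8 x 4 = 32, the reported total_train_batch_size
    warmup_steps=40,
    max_steps=80,
    lr_scheduler_type="linear",
    fp16=True,                       # "Native AMP" mixed-precision training
    evaluation_strategy="steps",     # assumed; the results table evaluates every 20 steps
    eval_steps=20,
)
```

These arguments would then be passed to a `Trainer` together with the wav2vec2 model, processor, dataset, and a CTC data collator, none of which appear in this commit.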
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:acff6a113cbc1222b8bcea8c130b2047380199181a647d66f45d6b1fe5dea0df
+oid sha256:23c58f3e72eb982bd852a7d71be4512a856d02a988c60b26ad6fc543d6f8d1af
 size 377611072
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6c1aaf4663ac43355721518fd5782a681fdd6c0f5c88160624fb442255724063
+oid sha256:aa7c8680275ce1f71c9247c57dcda31273d668f65fc92692c9509baf19c70922
 size 4155
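
Both binary files in this commit are stored as Git LFS pointers: `oid sha256:<hex>` is the SHA-256 digest of the file contents and `size` is its length in bytes. A small sketch (not part of the commit; the local file path is a placeholder) of how a downloaded artifact could be checked against the updated pointers:

```python
# Sketch: verify a downloaded LFS-tracked file against its pointer.
# The pointer's oid is the SHA-256 of the file contents; size is the byte length.
import hashlib
from pathlib import Path

def matches_lfs_pointer(path: str, expected_oid: str, expected_size: int) -> bool:
    data = Path(path).read_bytes()
    return hashlib.sha256(data).hexdigest() == expected_oid and len(data) == expected_size

# Example with the new model.safetensors pointer from this commit
# ("model.safetensors" here is a placeholder local path):
print(matches_lfs_pointer(
    "model.safetensors",
    "23c58f3e72eb982bd852a7d71be4512a856d02a988c60b26ad6fc543d6f8d1af",
    377611072,
))
```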