ThuyNT03 committed on
Commit ae1db2b
1 Parent(s): 43628b1

End of training

README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.0948
-- Accuracy: 0.6586
+- Loss: 1.0701
+- Accuracy: 0.6599
 
 ## Model description
 
@@ -37,7 +37,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0005
+- learning_rate: 5e-05
 - train_batch_size: 32
 - eval_batch_size: 32
 - seed: 42
@@ -45,38 +45,37 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 128
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_ratio: 0.1
 - num_epochs: 20
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 1.9295 | 0.97 | 28 | 1.4716 | 0.5330 |
-| 1.1643 | 1.98 | 57 | 1.2602 | 0.4924 |
-| 1.292 | 2.99 | 86 | 1.1352 | 0.5495 |
-| 1.0128 | 4.0 | 115 | 1.1409 | 0.5343 |
-| 1.1134 | 4.97 | 143 | 1.0386 | 0.5723 |
-| 1.002 | 5.98 | 172 | 1.0776 | 0.5812 |
-| 1.01 | 6.99 | 201 | 1.0220 | 0.6104 |
-| 0.9825 | 8.0 | 230 | 1.0120 | 0.6117 |
-| 0.981 | 8.97 | 258 | 0.9761 | 0.6117 |
-| 0.9431 | 9.98 | 287 | 0.9952 | 0.6003 |
-| 0.9459 | 10.99 | 316 | 0.9966 | 0.6091 |
-| 0.9011 | 12.0 | 345 | 1.0091 | 0.6218 |
-| 0.894 | 12.97 | 373 | 0.9507 | 0.6485 |
-| 0.7854 | 13.98 | 402 | 0.9384 | 0.6459 |
-| 0.791 | 14.99 | 431 | 0.9978 | 0.6320 |
-| 0.7095 | 16.0 | 460 | 1.0037 | 0.6409 |
-| 0.6826 | 16.97 | 488 | 1.0675 | 0.6447 |
-| 0.5798 | 17.98 | 517 | 1.0192 | 0.6688 |
-| 0.5759 | 18.99 | 546 | 1.0821 | 0.6536 |
-| 0.514 | 19.48 | 560 | 1.0948 | 0.6586 |
+| No log | 0.97 | 28 | 1.2971 | 0.5876 |
+| No log | 1.98 | 57 | 1.1261 | 0.6320 |
+| No log | 2.99 | 86 | 1.0746 | 0.6104 |
+| 1.156 | 4.0 | 115 | 1.0080 | 0.6396 |
+| 1.156 | 4.97 | 143 | 0.9706 | 0.6650 |
+| 1.156 | 5.98 | 172 | 0.9316 | 0.6815 |
+| 1.156 | 6.99 | 201 | 0.9493 | 0.6662 |
+| 0.834 | 8.0 | 230 | 0.9774 | 0.6612 |
+| 0.834 | 8.97 | 258 | 0.9498 | 0.6548 |
+| 0.834 | 9.98 | 287 | 1.0197 | 0.6472 |
+| 0.834 | 10.99 | 316 | 0.9520 | 0.6637 |
+| 0.641 | 12.0 | 345 | 0.9616 | 0.6650 |
+| 0.641 | 12.97 | 373 | 1.0307 | 0.6523 |
+| 0.641 | 13.98 | 402 | 1.0298 | 0.6675 |
+| 0.641 | 14.99 | 431 | 1.0342 | 0.6574 |
+| 0.4929 | 16.0 | 460 | 1.0520 | 0.6536 |
+| 0.4929 | 16.97 | 488 | 1.0752 | 0.6472 |
+| 0.4929 | 17.98 | 517 | 1.0643 | 0.6561 |
+| 0.4929 | 18.99 | 546 | 1.0709 | 0.6612 |
+| 0.4929 | 19.48 | 560 | 1.0701 | 0.6599 |
 
 
 ### Framework versions
 
-- Transformers 4.35.1
-- Pytorch 2.1.0+cu118
-- Datasets 2.14.6
-- Tokenizers 0.14.1
+- Transformers 4.35.2
+- Pytorch 2.1.0+cu121
+- Datasets 2.15.0
+- Tokenizers 0.15.0
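The hyperparameter list in the updated README maps fairly directly onto `transformers.TrainingArguments`. Below is a minimal sketch of that mapping, not the author's actual training script: the `output_dir` name is hypothetical, and `gradient_accumulation_steps=4` is an assumption made to reach the reported total train batch size of 128 from a per-device batch size of 32.

```python
# Sketch only: reproduces the README hyperparameters, not the original script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-finetuned",  # hypothetical output directory
    learning_rate=5e-05,                   # changed from 0.0005 in this commit
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,         # assumption: 32 * 4 = 128 total train batch size
    num_train_epochs=20,
    seed=42,
    lr_scheduler_type="linear",            # the warmup ratio was dropped in this run
    evaluation_strategy="epoch",           # assumption: one evaluation per epoch, as in the results table
)
```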
config.json CHANGED
@@ -131,7 +131,7 @@
     1
   ],
   "torch_dtype": "float32",
-  "transformers_version": "4.35.1",
+  "transformers_version": "4.35.2",
   "use_weighted_layer_sum": false,
   "vocab_size": 32,
   "xvector_output_dim": 512
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d1d070b40f53a58e2ac0172b289c3b9ae25d5656e7316026af6a5fbd446619ec
+oid sha256:67ffc2243bc04f566fec5f255e4f3b993c24369822250ac36ef6eda51f37a7fa
 size 378308536
runs/Dec16_07-09-19_6a0c0cfec2ac/events.out.tfevents.1702710560.6a0c0cfec2ac.905.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e25637f05c9bfa052a5d2d30f5223d847b2abdf07d5d66c7dd87d5b217dab612
+size 6606
runs/Dec16_07-12-13_6a0c0cfec2ac/events.out.tfevents.1702710734.6a0c0cfec2ac.905.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:32ae4ae40dced823f26604043ed73235b69c08c356ba0e061508b3c97ba6ff68
+size 13704
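The two added tfevents files are TensorBoard logs for the two training runs. A minimal sketch for reading the newer one locally with the `tensorboard` package is below; it assumes the repo has been cloned, and the exact scalar tag names depend on what the Trainer logged.

```python
# Sketch only: list the scalar tags recorded in the newer event file.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Dec16_07-12-13_6a0c0cfec2ac")
acc.Reload()                   # parse the events.out.tfevents.* file in the directory
print(acc.Tags()["scalars"])   # e.g. eval/loss, eval/accuracy (names depend on the Trainer)
```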
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:66d5f84a655400157faa2307f822cca07a3901c265e296aef4f6057553e42afd
+oid sha256:e908453d6df83dc8687d6df9024563ccb378a31eb082dbc2b635e478abe59c50
 size 4600
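The updated training_args.bin is a pickled `TrainingArguments` object saved by the Trainer, so the new hyperparameters can be checked directly once the file is downloaded. A small sketch, assuming a compatible transformers version is installed for unpickling:

```python
# Sketch only: inspect the pickled TrainingArguments from a downloaded training_args.bin.
import torch

args = torch.load("training_args.bin", weights_only=False)  # full unpickling needs transformers installed
print(args.learning_rate)     # 5e-05 for this run
print(args.num_train_epochs)  # 20
```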