CreeperStone72 committed
Commit 16efe73
Parent(s): 6ef4d5c

End of training
README.md CHANGED
@@ -3,26 +3,11 @@ license: apache-2.0
 base_model: facebook/wav2vec2-base
 tags:
 - generated_from_trainer
-datasets:
-- minds14
 metrics:
 - accuracy
 model-index:
 - name: mood_box
-  results:
-  - task:
-      name: Audio Classification
-      type: audio-classification
-    dataset:
-      name: minds14
-      type: minds14
-      config: en-US
-      split: train
-      args: en-US
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.061946902654867256
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -30,10 +15,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # mood_box
 
-This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
+This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.6496
-- Accuracy: 0.0619
+- Loss: 1.5115
+- Accuracy: 0.3802
 
 ## Model description
 
@@ -67,19 +52,21 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| No log | 0.8 | 3 | 2.6435 | 0.0619 |
-| No log | 1.87 | 7 | 2.6416 | 0.0619 |
-| 2.6371 | 2.93 | 11 | 2.6472 | 0.0619 |
-| 2.6371 | 4.0 | 15 | 2.6464 | 0.0619 |
-| 2.6371 | 4.8 | 18 | 2.6460 | 0.0619 |
-| 2.6244 | 5.87 | 22 | 2.6479 | 0.0619 |
-| 2.6244 | 6.93 | 26 | 2.6492 | 0.0619 |
-| 2.6252 | 8.0 | 30 | 2.6496 | 0.0619 |
+| No log | 1.0 | 4 | 1.6030 | 0.2231 |
+| No log | 2.0 | 8 | 1.5976 | 0.3223 |
+| 1.6018 | 3.0 | 12 | 1.5936 | 0.2893 |
+| 1.6018 | 4.0 | 16 | 1.5849 | 0.2810 |
+| 1.5765 | 5.0 | 20 | 1.5733 | 0.3636 |
+| 1.5765 | 6.0 | 24 | 1.5557 | 0.3884 |
+| 1.5765 | 7.0 | 28 | 1.5360 | 0.3719 |
+| 1.5323 | 8.0 | 32 | 1.5246 | 0.3554 |
+| 1.5323 | 9.0 | 36 | 1.5152 | 0.3719 |
+| 1.4909 | 10.0 | 40 | 1.5115 | 0.3802 |
 
 
 ### Framework versions
 
-- Transformers 4.37.2
-- Pytorch 2.2.0+cu118
-- Datasets 2.17.0
+- Transformers 4.38.1
+- Pytorch 2.1.0+cu121
+- Datasets 2.17.1
 - Tokenizers 0.15.2
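The updated README rows can be sanity-checked against the card's reported final metrics (Loss 1.5115, Accuracy 0.3802 at epoch 10). A minimal sketch, with the table rows copied verbatim from the `+` lines above; the `parse_rows` helper is illustrative and not part of the repository:

```python
# Parse the new training-log table from the README diff and check that the
# last row matches the reported evaluation metrics. Note the best validation
# accuracy (0.3884, epoch 6) is slightly higher than the final one.

TABLE = """\
| No log | 1.0 | 4 | 1.6030 | 0.2231 |
| No log | 2.0 | 8 | 1.5976 | 0.3223 |
| 1.6018 | 3.0 | 12 | 1.5936 | 0.2893 |
| 1.6018 | 4.0 | 16 | 1.5849 | 0.2810 |
| 1.5765 | 5.0 | 20 | 1.5733 | 0.3636 |
| 1.5765 | 6.0 | 24 | 1.5557 | 0.3884 |
| 1.5765 | 7.0 | 28 | 1.5360 | 0.3719 |
| 1.5323 | 8.0 | 32 | 1.5246 | 0.3554 |
| 1.5323 | 9.0 | 36 | 1.5152 | 0.3719 |
| 1.4909 | 10.0 | 40 | 1.5115 | 0.3802 |
"""

def parse_rows(table: str):
    """Yield (epoch, step, val_loss, accuracy) from markdown table rows."""
    for line in table.strip().splitlines():
        cells = [c.strip() for c in line.strip("|").split("|")]
        # cells[0] is the training loss ("No log" until logging starts)
        yield float(cells[1]), int(cells[2]), float(cells[3]), float(cells[4])

rows = list(parse_rows(TABLE))
final_epoch, final_step, final_loss, final_acc = rows[-1]
best_acc = max(r[3] for r in rows)
print(final_loss, final_acc, best_acc)  # 1.5115 0.3802 0.3884
```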
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d2812b14e37f9be4848cacd8de9d7705fa2890f28d7201772f0873fa2621ff37
+oid sha256:efb3fa7679491be722be1e87ad570ff23927798efd77fb879423009110f36a87
 size 378305452
runs/Feb22_09-04-01_f04cbc234aa5/events.out.tfevents.1708592649.f04cbc234aa5.3183.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:16af25135c4017c93b83e2cf144b6d5a12a975d4634bc1ea8b5ed58773f2a103
-size 9953
+oid sha256:26adc9c92c0b980e885110bd42a2662098317d38d9fcf16fb37adeca2f6d3c4b
+size 10825