---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
  - generated_from_trainer
datasets:
  - audiofolder
metrics:
  - accuracy
model-index:
  - name: my_awesome_emotions_model
    results:
      - task:
          name: Audio Classification
          type: audio-classification
        dataset:
          name: audiofolder
          type: audiofolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6086021505376344
---

# my_awesome_emotions_model

This model is a fine-tuned version of facebook/wav2vec2-base on the audiofolder dataset. It achieves the following results on the evaluation set:

- Loss: 1.3229
- Accuracy: 0.6086
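
As a quick illustration of how a fine-tuned audio-classification checkpoint like this one can be used for inference, here is a minimal sketch with the transformers pipeline; the repository id and the audio file path are placeholders, not values taken from this card.

```python
from transformers import pipeline

# "Krithika-p/my_awesome_emotions_model" is an assumed repository id; point this at
# wherever the fine-tuned checkpoint is actually stored.
classifier = pipeline("audio-classification", model="Krithika-p/my_awesome_emotions_model")

# "speech.wav" is a placeholder path to a short audio clip; the pipeline resamples it
# to the 16 kHz rate expected by wav2vec2.
predictions = classifier("speech.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```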

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
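
Under the assumption that the standard transformers Trainer was used, the list above maps onto a TrainingArguments object roughly as sketched below; the output directory and the evaluation/save strategies are assumptions rather than values stated on this card.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="my_awesome_emotions_model",   # placeholder output path
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,            # effective train batch size: 32 * 4 = 128
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",              # assumed; the card reports one eval per epoch
    save_strategy="epoch",                    # assumed
)
```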

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7077 | 0.95 | 14 | 2.7049 | 0.0645 |
| 2.7036 | 1.97 | 29 | 2.6957 | 0.0688 |
| 2.6761 | 2.98 | 44 | 2.6471 | 0.1075 |
| 2.6414 | 4.0 | 59 | 2.5442 | 0.1398 |
| 2.5034 | 4.95 | 73 | 2.3988 | 0.2774 |
| 2.4337 | 5.97 | 88 | 2.2798 | 0.3097 |
| 2.2757 | 6.98 | 103 | 2.2054 | 0.3290 |
| 2.2346 | 8.0 | 118 | 2.1204 | 0.3484 |
| 2.1167 | 8.95 | 132 | 2.0784 | 0.3484 |
| 2.0801 | 9.97 | 147 | 2.0267 | 0.3484 |
| 1.987 | 10.98 | 162 | 1.9672 | 0.3699 |
| 1.9278 | 12.0 | 177 | 1.9686 | 0.3505 |
| 1.839 | 12.95 | 191 | 1.8604 | 0.4215 |
| 1.785 | 13.97 | 206 | 1.8611 | 0.4129 |
| 1.6811 | 14.98 | 221 | 1.7791 | 0.4473 |
| 1.6858 | 16.0 | 236 | 1.7533 | 0.4323 |
| 1.562 | 16.95 | 250 | 1.7364 | 0.4473 |
| 1.5469 | 17.97 | 265 | 1.7407 | 0.4430 |
| 1.485 | 18.98 | 280 | 1.7055 | 0.4301 |
| 1.4489 | 20.0 | 295 | 1.6566 | 0.4839 |
| 1.426 | 20.95 | 309 | 1.5844 | 0.5054 |
| 1.3596 | 21.97 | 324 | 1.6252 | 0.4796 |
| 1.3212 | 22.98 | 339 | 1.5797 | 0.4860 |
| 1.248 | 24.0 | 354 | 1.5483 | 0.5097 |
| 1.1954 | 24.95 | 368 | 1.5301 | 0.5290 |
| 1.1629 | 25.97 | 383 | 1.4905 | 0.5398 |
| 1.1364 | 26.98 | 398 | 1.5040 | 0.5355 |
| 1.0897 | 28.0 | 413 | 1.5128 | 0.5484 |
| 1.0564 | 28.95 | 427 | 1.4761 | 0.5570 |
| 1.0149 | 29.97 | 442 | 1.4948 | 0.5247 |
| 0.975 | 30.98 | 457 | 1.4194 | 0.5742 |
| 0.9546 | 32.0 | 472 | 1.3986 | 0.5763 |
| 0.9235 | 32.95 | 486 | 1.4126 | 0.5634 |
| 0.8848 | 33.97 | 501 | 1.4284 | 0.5763 |
| 0.8579 | 34.98 | 516 | 1.3872 | 0.5677 |
| 0.8475 | 36.0 | 531 | 1.4108 | 0.5742 |
| 0.8018 | 36.95 | 545 | 1.3667 | 0.5849 |
| 0.7861 | 37.97 | 560 | 1.3614 | 0.5914 |
| 0.7756 | 38.98 | 575 | 1.3473 | 0.5914 |
| 0.7427 | 40.0 | 590 | 1.3346 | 0.5914 |
| 0.764 | 40.95 | 604 | 1.3229 | 0.6086 |
| 0.7283 | 41.97 | 619 | 1.3206 | 0.6 |
| 0.714 | 42.98 | 634 | 1.3266 | 0.6065 |
| 0.724 | 44.0 | 649 | 1.3377 | 0.5957 |
| 0.6928 | 44.95 | 663 | 1.3281 | 0.5978 |
| 0.7065 | 45.97 | 678 | 1.3214 | 0.6086 |
| 0.6781 | 46.98 | 693 | 1.3338 | 0.5849 |
| 0.7058 | 47.46 | 700 | 1.3330 | 0.5892 |
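
The accuracy column is the kind of value a compute_metrics hook passed to the Trainer would produce; a minimal sketch using the evaluate library is shown below, as an assumption about the setup rather than a confirmed detail of this run.

```python
import numpy as np
import evaluate

# Assumes the Hugging Face evaluate library's "accuracy" metric; not confirmed by this card.
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels); take the argmax over classes as the prediction.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)
```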

### Framework versions

- Transformers 4.36.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0