
whisper-small-Kurdish-Sorani-10

This model is a fine-tuned version of openai/whisper-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1166
  • Wer Ortho: 14.4007
  • Wer: 13.1989
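
For a quick test, the model can be loaded with the Transformers automatic-speech-recognition pipeline. The snippet below is a minimal sketch, not part of the original card; the audio file path is a placeholder.

```python
# Minimal inference sketch using the Transformers ASR pipeline.
# "audio.wav" is a placeholder path; replace it with your own recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="roshna-omer/whisper-small-Kurdish-Sorani-10",
)

# Whisper operates on 30-second windows; chunking lets the pipeline
# transcribe longer audio files in pieces.
result = asr("audio.wav", chunk_length_s=30)
print(result["text"])
```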

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 5
  • mixed_precision_training: Native AMP
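
The card does not include the training script, so the sketch below only shows how the listed values could map onto Seq2SeqTrainingArguments from Transformers. The output directory, evaluation cadence, and predict_with_generate setting are assumptions (the 1000-step evaluation interval matches the results table below), not values stated in the card.

```python
# Hedged reconstruction of the training configuration.
# Only the values listed above come from the card; everything marked
# "assumption" or "placeholder" is illustrative.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-Kurdish-Sorani-10",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=5,
    fp16=True,                   # "Native AMP" mixed precision
    eval_strategy="steps",       # assumption: matches the 1000-step eval cadence below
    eval_steps=1000,             # assumption
    predict_with_generate=True,  # assumption: WER is computed from generated transcripts
)
```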

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.2207 | 0.0992 | 1000 | 0.2857 | 48.3163 | 44.5546 |
| 0.1701 | 0.1984 | 2000 | 0.2396 | 41.9819 | 38.2653 |
| 0.1551 | 0.2976 | 3000 | 0.2099 | 37.3690 | 33.9086 |
| 0.1213 | 0.3968 | 4000 | 0.1918 | 34.8926 | 31.6996 |
| 0.1205 | 0.4960 | 5000 | 0.1757 | 32.6973 | 29.4823 |
| 0.1126 | 0.5952 | 6000 | 0.1654 | 31.8523 | 28.7945 |
| 0.1229 | 0.6944 | 7000 | 0.1520 | 29.2376 | 26.4927 |
| 0.0966 | 0.7937 | 8000 | 0.1459 | 28.1116 | 25.5538 |
| 0.0805 | 0.8929 | 9000 | 0.1345 | 26.0589 | 23.7183 |
| 0.0829 | 0.9921 | 10000 | 0.1290 | 25.4676 | 23.3069 |
| 0.0503 | 1.0913 | 11000 | 0.1261 | 24.1885 | 21.9946 |
| 0.0363 | 1.1905 | 12000 | 0.1212 | 23.0877 | 21.0642 |
| 0.0562 | 1.2897 | 13000 | 0.1177 | 22.5090 | 20.7266 |
| 0.0382 | 1.3889 | 14000 | 0.1152 | 21.6053 | 19.8785 |
| 0.0457 | 1.4881 | 15000 | 0.1143 | 21.0224 | 19.4502 |
| 0.0394 | 1.5873 | 16000 | 0.1072 | 20.3892 | 18.8130 |
| 0.0427 | 1.6865 | 17000 | 0.1066 | 19.8482 | 18.2814 |
| 0.03 | 1.7857 | 18000 | 0.1033 | 19.0619 | 17.5957 |
| 0.0311 | 1.8849 | 19000 | 0.1018 | 18.7390 | 17.2391 |
| 0.0308 | 1.9841 | 20000 | 0.1004 | 18.8753 | 17.3172 |
| 0.0297 | 2.0833 | 21000 | 0.1034 | 18.1309 | 16.7623 |
| 0.0158 | 2.1825 | 22000 | 0.1052 | 18.5042 | 17.1463 |
| 0.0157 | 2.2817 | 23000 | 0.1039 | 17.8290 | 16.4374 |
| 0.0367 | 2.3810 | 24000 | 0.1022 | 18.0953 | 16.8129 |
| 0.0144 | 2.4802 | 25000 | 0.1041 | 17.3551 | 16.0724 |
| 0.01 | 2.5794 | 26000 | 0.1051 | 17.3132 | 15.9880 |
| 0.0116 | 2.6786 | 27000 | 0.1046 | 16.8561 | 15.4711 |
| 0.0149 | 2.7778 | 28000 | 0.1011 | 16.9861 | 15.5914 |
| 0.02 | 2.8770 | 29000 | 0.1008 | 16.4367 | 15.1357 |
| 0.0122 | 2.9762 | 30000 | 0.1002 | 16.1914 | 14.9352 |
| 0.004 | 3.0754 | 31000 | 0.1057 | 15.6861 | 14.3403 |
| 0.0055 | 3.1746 | 32000 | 0.1067 | 15.7783 | 14.4795 |
| 0.0045 | 3.2738 | 33000 | 0.1089 | 15.7133 | 14.3761 |
| 0.0084 | 3.3730 | 34000 | 0.1072 | 15.7196 | 14.4500 |
| 0.0046 | 3.4722 | 35000 | 0.1087 | 15.7825 | 14.4837 |
| 0.0032 | 3.5714 | 36000 | 0.1094 | 15.3757 | 14.1567 |
| 0.0085 | 3.6706 | 37000 | 0.1071 | 15.4303 | 14.1989 |
| 0.0064 | 3.7698 | 38000 | 0.1106 | 15.2688 | 14.0280 |
| 0.0037 | 3.8690 | 39000 | 0.1086 | 14.9836 | 13.7263 |
| 0.0123 | 3.9683 | 40000 | 0.1109 | 14.7886 | 13.5639 |
| 0.0021 | 4.0675 | 41000 | 0.1135 | 14.7362 | 13.4900 |
| 0.0017 | 4.1667 | 42000 | 0.1142 | 14.5685 | 13.3402 |
| 0.0019 | 4.2659 | 43000 | 0.1144 | 14.6964 | 13.4141 |
| 0.0013 | 4.3651 | 44000 | 0.1156 | 14.6796 | 13.4225 |
| 0.0051 | 4.4643 | 45000 | 0.1155 | 14.5769 | 13.3381 |
| 0.001 | 4.5635 | 46000 | 0.1162 | 14.4846 | 13.2727 |
| 0.0008 | 4.6627 | 47000 | 0.1170 | 14.5119 | 13.3086 |
| 0.0045 | 4.7619 | 48000 | 0.1149 | 14.6083 | 13.4098 |
| 0.0012 | 4.8611 | 49000 | 0.1164 | 14.3609 | 13.1672 |
| 0.0007 | 4.9603 | 50000 | 0.1166 | 14.4007 | 13.1989 |
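
The two error-rate columns are the orthographic WER (computed on raw transcripts) and the WER after basic text normalization. The snippet below is a sketch of how such a pair of scores is commonly computed for Whisper fine-tunes with the evaluate library and Whisper's BasicTextNormalizer; the prediction and reference lists are placeholders, and this is not necessarily the exact evaluation code used for this model.

```python
# Sketch: compute orthographic WER on raw text and WER on normalized text.
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()  # language-agnostic basic normalization

predictions = ["..."]  # placeholder: model transcripts
references = ["..."]   # placeholder: ground-truth transcripts

wer_ortho = 100 * wer_metric.compute(predictions=predictions, references=references)
wer = 100 * wer_metric.compute(
    predictions=[normalizer(p) for p in predictions],
    references=[normalizer(r) for r in references],
)
print(f"WER (orthographic): {wer_ortho:.4f}  WER (normalized): {wer:.4f}")
```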

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1