
whisper-large-v3-Kannada-Version1

This model is a fine-tuned version of openai/whisper-large-v3 on the FLEURS dataset (Kannada). It achieves the following results on the evaluation set:

  • Loss: 0.1214
  • Wer: 41.3722
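The reported Wer is the word error rate in percent: the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal self-contained sketch of the metric (the actual evaluation likely used a library such as `evaluate`/`jiwer`, which is an assumption):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level Levenshtein distance
    between hypothesis and reference, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[j] holds the edit distance between the processed prefix of ref
    # and hyp[:j]; start from the distance to the empty reference.
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev = d[0]          # value of d[i-1][j-1] for the j loop below
        d[0] = i             # distance from ref[:i] to empty hypothesis
        for j in range(1, len(hyp) + 1):
            tmp = d[j]       # d[i-1][j], needed as next diagonal
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[j] = min(d[j] + 1,        # deletion
                       d[j - 1] + 1,    # insertion
                       prev + cost)     # substitution (or match)
            prev = tmp
    return 100.0 * d[-1] / len(ref)
```

For example, one substituted word out of three gives a WER of 33.33.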

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-06
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • training_steps: 20000
  • mixed_precision_training: Native AMP
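The `linear` scheduler with 1000 warmup steps over 20000 training steps ramps the learning rate from 0 up to 3e-06, then decays it linearly back to 0. A minimal sketch of that schedule (matching the behavior of transformers' `get_linear_schedule_with_warmup`):

```python
def linear_schedule_lr(step: int, base_lr: float = 3e-06,
                       warmup_steps: int = 1000,
                       total_steps: int = 20000) -> float:
    """Learning rate at a given optimizer step: linear warmup to base_lr,
    then linear decay to zero at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

The peak learning rate of 3e-06 is reached exactly at step 1000 and the rate hits zero at step 20000, the final training step.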

Training results

| Training Loss | Epoch   | Step  | Validation Loss | Wer     |
|---------------|---------|-------|-----------------|---------|
| 0.1966        | 6.0606  | 2000  | 0.1678          | 54.0036 |
| 0.1699        | 12.1212 | 4000  | 0.1455          | 48.0278 |
| 0.1607        | 18.1818 | 6000  | 0.1358          | 45.7829 |
| 0.1497        | 24.2424 | 8000  | 0.1304          | 43.5934 |
| 0.1413        | 30.3030 | 10000 | 0.1270          | 42.7713 |
| 0.146         | 36.3636 | 12000 | 0.1248          | 41.9730 |
| 0.1309        | 42.4242 | 14000 | 0.1233          | 41.6726 |
| 0.1339        | 48.4848 | 16000 | 0.1222          | 41.4987 |
| 0.1343        | 54.5455 | 18000 | 0.1218          | 41.5382 |
| 0.1267        | 60.6061 | 20000 | 0.1214          | 41.3722 |

Framework versions

  • PEFT 0.12.1.dev0
  • Transformers 4.45.0.dev0
  • Pytorch 2.4.1+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1