
Whisper Base Hindi

This model is a fine-tuned version of openai/whisper-base on the Hindi (hi) subset of the mozilla-foundation/common_voice_16_0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4679
  • WER: 28.6490 (word error rate, in percent)

Model description

Whisper Base is a 72.6M-parameter encoder-decoder Transformer for speech recognition released by OpenAI. This checkpoint adapts it to Hindi by fine-tuning on Common Voice 16.0.

Intended uses & limitations

The model is intended for automatic speech recognition (transcription) of Hindi audio. Only results on the Common Voice 16.0 Hindi evaluation set are reported, so performance on other domains, accents, and noise conditions is unverified.
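
For illustration, here is a minimal transcription sketch using the transformers pipeline API. The repo id arun100/whisper-base-hi-2 is taken from this card; sample.wav is a hypothetical local audio file:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint named on this card.
asr = pipeline("automatic-speech-recognition", model="arun100/whisper-base-hi-2")

# "sample.wav" is a hypothetical placeholder; pass any local audio file.
# generate_kwargs pins Whisper to Hindi transcription rather than auto-detection.
result = asr("sample.wav", generate_kwargs={"language": "hindi", "task": "transcribe"})
print(result["text"])
```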

Training and evaluation data

The model was fine-tuned and evaluated on the Hindi (hi) subset of mozilla-foundation/common_voice_16_0. Details of preprocessing and split sizes are not reported.
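
A sketch of loading that data with the datasets library. Common Voice is a gated dataset, so this assumes you have accepted its terms on the Hub and authenticated (e.g. via huggingface-cli login):

```python
from datasets import load_dataset

# Hindi ("hi") configuration of Common Voice 16.0; gated, requires Hub login.
common_voice_train = load_dataset("mozilla-foundation/common_voice_16_0", "hi", split="train")
common_voice_test = load_dataset("mozilla-foundation/common_voice_16_0", "hi", split="test")

print(common_voice_train)  # inspect columns such as "audio" and "sentence"
```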

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-06
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
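
As a hedged sketch, the list above maps onto transformers.Seq2SeqTrainingArguments roughly as follows; the output directory name is hypothetical, and the Adam betas/epsilon shown above are already the library defaults:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical output_dir; the remaining values mirror the hyperparameter list.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-hi",
    learning_rate=1e-6,
    per_device_train_batch_size=32,   # train_batch_size
    per_device_eval_batch_size=32,    # eval_batch_size
    gradient_accumulation_steps=2,    # total train batch size: 64
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    seed=42,
    fp16=True,                        # native AMP mixed-precision training
)
```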

Training results

| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|---------------|-------|------|-----------------|---------|
| 0.6425        | 6.01  | 500  | 0.7025          | 41.4477 |
| 0.3973        | 13.0  | 1000 | 0.5367          | 33.9692 |
| 0.3125        | 19.01 | 1500 | 0.4927          | 31.4458 |
| 0.2848        | 26.0  | 2000 | 0.4739          | 30.1037 |
| 0.2201        | 32.01 | 2500 | 0.4675          | 29.4859 |
| 0.2257        | 39.01 | 3000 | 0.4637          | 28.9933 |
| 0.1837        | 46.0  | 3500 | 0.4657          | 28.9140 |
| 0.1897        | 52.01 | 4000 | 0.4658          | 28.7450 |
| 0.1764        | 59.0  | 4500 | 0.4676          | 28.7178 |
| 0.1681        | 65.01 | 5000 | 0.4679          | 28.6490 |
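
The WER column is in percent. For reference, a minimal sketch of computing WER with the evaluate library; the strings below are hypothetical placeholders:

```python
import evaluate

wer_metric = evaluate.load("wer")  # requires the jiwer package

# Hypothetical predictions and references for illustration.
predictions = ["नमस्ते दुनिया"]
references = ["नमस्ते दुनिया"]

# evaluate returns a fraction; multiply by 100 to match the table's percent values.
print(100 * wer_metric.compute(predictions=predictions, references=references))
```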

Framework versions

  • Transformers 4.37.0.dev0
  • Pytorch 2.1.2+cu121
  • Datasets 2.16.2.dev0
  • Tokenizers 0.15.0