
This repository is publicly accessible, but you must accept its access conditions before downloading the files and content.

whisper-large-v3-Hindi-Version1

This model is a fine-tuned version of openai/whisper-large-v3, trained with PEFT adapters on the FLEURS dataset (Hindi). It achieves the following results on the evaluation set:

  • Loss: 0.1571
  • WER: 18.1667
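A minimal transcription sketch, assuming this repository hosts PEFT adapter weights on top of the base openai/whisper-large-v3 checkpoint (the dummy waveform is a placeholder; this is not an official usage snippet from the training authors):

```python
import numpy as np
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Load the base checkpoint, then attach the fine-tuned PEFT adapters.
# Assumes the repo stores adapter weights; adjust if it holds a merged model.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model = PeftModel.from_pretrained(base, "khushi1234455687/whisper-large-v3-Hindi-Version1")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")

# Replace with a real 16 kHz mono waveform (e.g. loaded via soundfile/librosa).
audio = np.zeros(16000, dtype=np.float32)

features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
with torch.no_grad():
    ids = model.generate(features, language="hi", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```

Note that accessing the adapter weights requires accepting the repository's gating conditions first.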

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-06
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • training_steps: 20000
  • mixed_precision_training: Native AMP
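The hyperparameters above map onto Hugging Face `Seq2SeqTrainingArguments` roughly as follows (a configuration sketch, not the exact training script; `output_dir` is a placeholder):

```python
from transformers import Seq2SeqTrainingArguments

# Values taken from the hyperparameter list above; output_dir is hypothetical.
args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-hindi",
    learning_rate=3e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=20000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```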

Training results

Training Loss | Epoch   | Step  | Validation Loss | WER
------------- | ------- | ----- | --------------- | -------
0.1799        | 6.7797  | 2000  | 0.1806          | 21.3881
0.1631        | 13.5593 | 4000  | 0.1678          | 20.0703
0.1436        | 20.3390 | 6000  | 0.1622          | 19.4748
0.1450        | 27.1186 | 8000  | 0.1593          | 18.8403
0.1316        | 33.8983 | 10000 | 0.1578          | 18.5670
0.1293        | 40.6780 | 12000 | 0.1574          | 18.5182
0.1281        | 47.4576 | 14000 | 0.1570          | 18.4010
0.1258        | 54.2373 | 16000 | 0.1569          | 18.0594
0.1192        | 61.0169 | 18000 | 0.1571          | 18.4108
0.1280        | 67.7966 | 20000 | 0.1571          | 18.1667
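The WER column above is the word error rate scaled to a percentage: the word-level edit distance (substitutions + insertions + deletions) between hypothesis and reference, divided by the number of reference words. A minimal self-contained implementation for checking transcriptions (a sketch, not the exact metric code used during evaluation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage, via word-level Levenshtein distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("a b c d", "a x c"))  # one substitution + one deletion over 4 words → 50.0
```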

Framework versions

  • PEFT 0.12.1.dev0
  • Transformers 4.45.0.dev0
  • Pytorch 2.4.1+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1
