gigant committed
Commit
685771b
1 Parent(s): e2b5a6b

Update README.md

Files changed (1):
  1. README.md +10 -15
README.md CHANGED
@@ -7,25 +7,20 @@ tags:
 - generated_from_trainer
 datasets:
 - mozilla-foundation/common_voice_11_0
+- gigant/romanian_speech_synthesis_0_8_1
 model-index:
-- name: Whisper Small Romanian
+- name: Whisper Medium Romanian
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+# Whisper Medium Romanian
 
-# Whisper Small Romanian
-
-This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
+This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset, and the Romanian speech synthesis corpus.
 It achieves the following results on the evaluation set:
-- eval_loss: 0.2068
-- eval_wer: 15.7031
-- eval_runtime: 1070.693
-- eval_samples_per_second: 3.604
-- eval_steps_per_second: 0.451
-- epoch: 5.03
-- step: 1000
+- eval_loss: 0.06453
+- eval_wer: 4.717
+- epoch: 7.03
+- step: 3500
 
 ## Model description
 
@@ -45,8 +40,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 64
-- eval_batch_size: 8
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
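The `eval_wer` values in the diff above are word error rates, reported as percentages (15.7031 before, 4.717 after). As a minimal sketch of what that metric measures (not the library the Trainer actually used, which is not named in the card), WER is the word-level edit distance divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# one substituted word out of three -> roughly 0.333, i.e. 33.3% WER
print(wer("ana are mere", "ana are pere"))
```

Production evaluations typically use a dedicated package such as `jiwer`, which also handles normalization (casing, punctuation) before scoring.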
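The `lr_scheduler_type: linear` entry above means the learning rate ramps up over a warmup period and then decays linearly toward zero. A minimal sketch, with `warmup_steps` and `total_steps` chosen purely for illustration (the card does not state them):

```python
def linear_lr(step, base_lr=1e-05, warmup_steps=500, total_steps=4000):
    """Linearly ramp up to base_lr during warmup, then decay linearly to zero."""
    # illustrative values: base_lr matches the card, the step counts do not
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    remaining = max(total_steps - step, 0)
    return base_lr * remaining / (total_steps - warmup_steps)

print(linear_lr(0))     # 0.0 (start of warmup)
print(linear_lr(500))   # peak, equal to base_lr
print(linear_lr(4000))  # 0.0 (fully decayed)
```

The HF `Trainer` builds this schedule internally from `lr_scheduler_type` and `warmup_steps` in its training arguments, so the model card only records the schedule's name.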