End of training

Files changed:

- README.md (+4 -13)
- adapter_model.safetensors (+1 -1)
- runs/Jan05_15-17-30_jupyter-abhinav-2eswami-40telusinternational-2ecom/events.out.tfevents.1704467852.jupyter-abhinav-2eswami-40telusinternational-2ecom.11062.1 (+3 -0)
- runs/Jan05_15-18-16_jupyter-abhinav-2eswami-40telusinternational-2ecom/events.out.tfevents.1704467896.jupyter-abhinav-2eswami-40telusinternational-2ecom.11529.0 (+3 -0)
- training_args.bin (+1 -1)
README.md CHANGED

```diff
@@ -21,7 +21,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Common Voice 11.0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
+- Loss: 0.4629
 
 ## Model description
 
@@ -46,24 +46,15 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps:
-- training_steps:
+- lr_scheduler_warmup_steps: 5
+- training_steps: 10
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-
-| No log | 1.54 | 20 | 0.4690 |
-| No log | 2.31 | 30 | 0.3444 |
-| No log | 3.08 | 40 | 0.2747 |
-| No log | 3.85 | 50 | 0.3133 |
-| No log | 4.62 | 60 | 0.3181 |
-| No log | 5.38 | 70 | 0.3010 |
-| No log | 6.15 | 80 | 0.2810 |
-| No log | 6.92 | 90 | 0.3127 |
-| 0.2416 | 7.69 | 100 | 0.3031 |
+| 0.6243 | 0.77 | 10 | 0.4629 |
 
 
 ### Framework versions
```
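For reference, the hyperparameters listed in the README correspond to a `Seq2SeqTrainingArguments` configuration along these lines. This is a minimal sketch reconstructed from the visible hunks only: the learning rate and batch sizes fall outside this diff, so library defaults stand in for them, and `output_dir` is a placeholder.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters visible in the README diff.
# Learning rate and batch sizes are not shown in the hunks above,
# so transformers defaults stand in for them here.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-finetuned",  # placeholder path
    seed=42,
    adam_beta1=0.9,        # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,    # ... and epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=5,        # lr_scheduler_warmup_steps: 5
    max_steps=10,          # training_steps: 10
    fp16=True,             # mixed_precision_training: Native AMP
)
```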
adapter_model.safetensors CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:26123a5683210dd41a152735004fe1ecbe604416aa3b05646b64a7fa4ce4208b
 size 62969640
```
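The `adapter_model.safetensors` name follows the PEFT convention, so this commit updates a parameter-efficient adapter (about 63 MB) rather than the full Whisper weights. A minimal loading sketch, assuming a LoRA-style adapter and a placeholder repo id:

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration

# Load the frozen base model, then attach the adapter from this repo.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model = PeftModel.from_pretrained(base, "username/whisper-large-v3-adapter")  # placeholder repo id

# For LoRA-style adapters, the weights can be folded into the base
# model for adapter-free inference.
model = model.merge_and_unload()
```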
runs/Jan05_15-17-30_jupyter-abhinav-2eswami-40telusinternational-2ecom/events.out.tfevents.1704467852.jupyter-abhinav-2eswami-40telusinternational-2ecom.11062.1 ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4eb50d4abcf598547eff5e776c68c58785f116fcaf64325bc0ba97f012dcbb20
+size 5609
```
runs/Jan05_15-18-16_jupyter-abhinav-2eswami-40telusinternational-2ecom/events.out.tfevents.1704467896.jupyter-abhinav-2eswami-40telusinternational-2ecom.11529.0 ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:57823738c9381e28853c127c003049baecf77ae693d350f14f56d34c1b13bdcc
+size 6377
```
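Both `events.out.tfevents.*` files are standard TensorBoard event logs written during training. After `git lfs pull`, they can be browsed with `tensorboard --logdir runs/` or read programmatically; a small sketch follows, where the scalar tag name is an assumption to be checked against the actual `Tags()` output:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point at one of the run directories added in this commit.
acc = EventAccumulator(
    "runs/Jan05_15-18-16_jupyter-abhinav-2eswami-40telusinternational-2ecom"
)
acc.Reload()  # parse the event file from disk

print(acc.Tags()["scalars"])            # list the scalar series that were logged
for event in acc.Scalars("eval/loss"):  # tag name assumed; verify via Tags()
    print(event.step, event.value)
```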
training_args.bin CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:405c3643c5bf4555b78416d1c36211bcad2ef24d2c5c32f3dccd0ccc902dfb57
 size 4475
```
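`training_args.bin` is the pickled `TrainingArguments` object that the `Trainer` saves alongside a checkpoint. It can be inspected directly to recover settings that are not visible in the README diff:

```python
import torch

# The file is a pickled TrainingArguments object, so weights_only
# must be disabled on recent PyTorch versions.
args = torch.load("training_args.bin", weights_only=False)
print(args.max_steps, args.warmup_steps, args.lr_scheduler_type)
```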