chrisjay committed
Commit 4d194d2
1 Parent(s): a14186b

added updates

Files changed (1)
  1. README.md +14 -11
README.md CHANGED
@@ -25,15 +25,7 @@ model-index:
 
 # afrospeech-wav2vec-run
 
- This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the [crowd-speech-africa](https://huggingface.co/datasets/chrisjay/crowd-speech-africa), which was a crowd-sourced dataset collected using the [afro-speech Space](https://huggingface.co/spaces/chrisjay/afro-speech). It achieves the following results on the [validation set](VALID_rundi_run_audio_data.csv):
-
- - F1: 0.8
- - Accuracy: 0.8
-
- The confusion matrix below helps to give a better look at the model's performance across the digits. Through it, we can see the precision and recall of the model as well as other important insights.
-
- ![confusion matrix](afrospeech-wav2vec-run_confusion_matrix_VALID.png)
-
+ This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the [crowd-speech-africa](https://huggingface.co/datasets/chrisjay/crowd-speech-africa), which was a crowd-sourced dataset collected using the [afro-speech Space](https://huggingface.co/spaces/chrisjay/afro-speech).
 
 ## Training and evaluation data
 
@@ -46,8 +38,19 @@ Below is a distribution of the dataset (training and valdation)
 
 ![digits-bar-plot-for-afrospeech](digits-bar-plot-for-afrospeech-wav2vec-run.png)
 
+ ## Evaluation performance
+ It achieves the following results on the [validation set](VALID_rundi_run_audio_data.csv):
+
+ - F1: 0.8
+ - Accuracy: 0.8
+
+ The confusion matrix below helps to give a better look at the model's performance across the digits. Through it, we can see the precision and recall of the model as well as other important insights.
+
+ ![confusion matrix](afrospeech-wav2vec-run_confusion_matrix_VALID.png)
+
+
 
- ### Training hyperparameters
+ ## Training hyperparameters
 
 The following hyperparameters were used during training:
 - learning_rate: 3e-05
@@ -56,7 +59,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - num_epochs: 150
 
- ### Training results
+ ## Training results
 
 | Training Loss | Epoch | Validation Accuracy |
 |:-------------:|:-----:|:--------:|
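As a note for readers of this commit, the sketch below shows one way the hyperparameters listed in the updated README could be expressed with `transformers.TrainingArguments`. The actual training script is not part of this commit, so the Trainer-based setup and the output directory name are assumptions for illustration only.

```python
# Minimal sketch, assuming a Trainer-based setup (not shown in this commit).
# It maps the hyperparameters listed in the README onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="afrospeech-wav2vec-run",  # hypothetical output directory
    learning_rate=3e-5,                   # learning_rate: 3e-05
    adam_beta1=0.9,                       # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                    # ...and epsilon=1e-08
    num_train_epochs=150,                 # num_epochs: 150
)
```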
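A minimal inference sketch follows, assuming the checkpoint described in this README is published as an audio-classification model under the repo id `chrisjay/afrospeech-wav2vec-run` (an assumption, since the id is not stated in the diff) and that recordings are 16 kHz mono, as wav2vec2-base expects.

```python
# Hedged inference sketch, not part of the commit. The repo id and the
# example file name are assumptions.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="chrisjay/afrospeech-wav2vec-run",  # hypothetical repo id
)

# wav2vec2-base expects 16 kHz mono audio; resample recordings before use.
predictions = classifier("digit_recording.wav")  # path to a local WAV file
print(predictions)  # list of {"label": ..., "score": ...} dicts, best first
```

The pipeline returns the top predicted digit labels with scores; the confusion matrix referenced in the README summarizes the same per-class behaviour over the whole validation set.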