---
license: mit
base_model: MT-Informal-Languages/Helsinki-NLP-opus-mt-ug
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Helsinki_lg_inf_en
results: []
---
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/asr-africa-research-team/huggingface/runs/0gisv7pm)
# Helsinki_lg_inf_en
This model is a fine-tuned version of [MT-Informal-Languages/Helsinki-NLP-opus-mt-ug](https://huggingface.co/MT-Informal-Languages/Helsinki-NLP-opus-mt-ug) on the Luganda Formal Data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0505
- Bleu: 57.3885
- Gen Len: 17.3595
## Model description
Judging by the checkpoint name (`lg` → `en`) and the base model, this is a MarianMT (Helsinki-NLP OPUS-MT) encoder-decoder model fine-tuned for Luganda-to-English machine translation. Architectural details (vocabulary, layer sizes) are inherited from the base checkpoint.
## Intended uses & limitations
The model is intended for translating Luganda text into English. As with any fine-tune on a single dataset, translation quality is likely to degrade on domains, registers, or orthography that differ from the training data, and the reported BLEU should not be assumed to transfer to other test sets.
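A minimal inference sketch with 🤗 Transformers is shown below. The repository id is an assumption based on the model name (substitute the actual checkpoint path), and the example sentence is illustrative only.

```python
# Minimal inference sketch (assumed repo id; replace with the real checkpoint).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "MT-Informal-Languages/Helsinki_lg_inf_en"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Oli otya?", return_tensors="pt")  # Luganda: "How are you?"
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```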
## Training and evaluation data
Beyond the dataset name (Luganda Formal Data), the size, splits, and preprocessing are not documented. From the training log below, each epoch covers 153 optimizer steps at batch size 16, which implies roughly 2,400 training sentence pairs, assuming a single device and no gradient accumulation.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
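For reproducibility, a hedged mapping of these settings onto `Seq2SeqTrainingArguments` is given below. The listed Adam betas and epsilon are the library defaults; `output_dir` and per-epoch evaluation are assumptions, since the card does not record them.

```python
# Sketch of training arguments matching the list above; not the exact script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="Helsinki_lg_inf_en",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,                         # Native AMP mixed precision
    eval_strategy="epoch",             # assumed: metrics are logged once per epoch
    predict_with_generate=True,        # required for BLEU / generation length
)
```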
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 153 | 0.4708 | 0.9369 | 19.2244 |
| No log | 2.0 | 306 | 0.4227 | 1.1005 | 20.8706 |
| No log | 3.0 | 459 | 0.3854 | 1.4207 | 19.8702 |
| 1.1563 | 4.0 | 612 | 0.3519 | 1.7877 | 19.5442 |
| 1.1563 | 5.0 | 765 | 0.3216 | 2.4366 | 18.7977 |
| 1.1563 | 6.0 | 918 | 0.2929 | 3.0827 | 18.6413 |
| 0.375 | 7.0 | 1071 | 0.2677 | 3.9367 | 19.2035 |
| 0.375 | 8.0 | 1224 | 0.2427 | 5.605 | 18.5111 |
| 0.375 | 9.0 | 1377 | 0.2192 | 7.0359 | 18.6204 |
| 0.2959 | 10.0 | 1530 | 0.1980 | 9.5819 | 17.8284 |
| 0.2959 | 11.0 | 1683 | 0.1794 | 11.9364 | 17.7428 |
| 0.2959 | 12.0 | 1836 | 0.1621 | 13.9353 | 18.0643 |
| 0.2959 | 13.0 | 1989 | 0.1464 | 16.9189 | 17.8714 |
| 0.2334 | 14.0 | 2142 | 0.1315 | 19.2848 | 18.0201 |
| 0.2334 | 15.0 | 2295 | 0.1189 | 22.6041 | 17.973 |
| 0.2334 | 16.0 | 2448 | 0.1085 | 25.554 | 18.0324 |
| 0.1848 | 17.0 | 2601 | 0.0992 | 28.6049 | 17.4644 |
| 0.1848 | 18.0 | 2754 | 0.0905 | 31.9759 | 17.8104 |
| 0.1848 | 19.0 | 2907 | 0.0828 | 35.5846 | 17.8108 |
| 0.1507 | 20.0 | 3060 | 0.0764 | 39.748 | 17.656 |
| 0.1507 | 21.0 | 3213 | 0.0712 | 42.3511 | 17.5602 |
| 0.1507 | 22.0 | 3366 | 0.0665 | 45.7843 | 17.5238 |
| 0.1285 | 23.0 | 3519 | 0.0628 | 48.4047 | 17.5233 |
| 0.1285 | 24.0 | 3672 | 0.0592 | 50.5559 | 17.3403 |
| 0.1285 | 25.0 | 3825 | 0.0564 | 52.0378 | 17.4443 |
| 0.1285 | 26.0 | 3978 | 0.0545 | 54.0726 | 17.579 |
| 0.1132 | 27.0 | 4131 | 0.0526 | 55.201 | 17.4017 |
| 0.1132 | 28.0 | 4284 | 0.0515 | 56.6048 | 17.4447 |
| 0.1132 | 29.0 | 4437 | 0.0508 | 57.2182 | 17.448 |
| 0.1047 | 30.0 | 4590 | 0.0505 | 57.3885 | 17.3595 |
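The BLEU values above are on the 0-100 corpus-level scale, consistent with sacrebleu as loaded through the 🤗 Evaluate library; a minimal sketch of that metric call follows (the strings are placeholders, not the actual evaluation set).

```python
# BLEU computation sketch via the `evaluate` package (sacrebleu backend assumed).
import evaluate

bleu = evaluate.load("sacrebleu")
predictions = ["How are you?"]    # model outputs (placeholder)
references = [["How are you?"]]   # one list of references per prediction
result = bleu.compute(predictions=predictions, references=references)
print(round(result["score"], 4))  # 0-100 scale, cf. the final BLEU of 57.3885
```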
### Framework versions
- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1