---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: ft-t5-small-on-info-lg-v1
results: []
---
[Visualize in Weights & Biases](https://wandb.ai/asr-africa-research-team/huggingface/runs/ben1m3wk)
# ft-t5-small-on-info-lg-v1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the Luganda Proverbs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5949
- Bleu: 0.2654
- Gen Len: 15.9299
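
For reference, here is a minimal inference sketch. The repo id, the plain-text input format, and the generation settings are assumptions (the card does not document a task prefix or the checkpoint's Hub path), so substitute the actual values for this model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical Hub path -- replace with this checkpoint's actual repo id.
MODEL_ID = "ft-t5-small-on-info-lg-v1"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

# Example Luganda proverb as input; whether the model expects a task prefix
# is not documented, so raw text is assumed here.
inputs = tokenizer("Akwata empola atuuka wala.", return_tensors="pt")

# max_new_tokens=32 roughly matches the reported Gen Len of ~16 tokens,
# with headroom; beam search is an assumption, not a documented setting.
outputs = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```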
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
This model was fine-tuned and evaluated on the Luganda Proverbs dataset referenced above. Split sizes and preprocessing are not documented; the per-epoch step count (177 steps at batch size 16) suggests roughly 2,800 training examples, assuming no gradient accumulation.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
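
These settings correspond to a `Seq2SeqTrainingArguments` configuration along the following lines. This is a sketch, not the original training script: the `output_dir`, evaluation strategy, and `predict_with_generate` flag are assumptions, with per-epoch evaluation inferred from the results table below.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the reported hyperparameters. The Adam betas and epsilon listed
# above match the Trainer's default optimizer settings, so they are not set
# explicitly here.
training_args = Seq2SeqTrainingArguments(
    output_dir="ft-t5-small-on-info-lg-v1",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    eval_strategy="epoch",       # assumed from the per-epoch results table
    predict_with_generate=True,  # assumed; needed to report BLEU / Gen Len
)
```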
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 177 | 0.5884 | 0.347 | 15.8567 |
| No log | 2.0 | 354 | 0.5872 | 0.3173 | 16.4809 |
| 0.5373 | 3.0 | 531 | 0.5877 | 0.3232 | 16.2389 |
| 0.5373 | 4.0 | 708 | 0.5878 | 0.3198 | 16.2803 |
| 0.5373 | 5.0 | 885 | 0.5886 | 0.2831 | 14.465 |
| 0.5202 | 6.0 | 1062 | 0.5889 | 0.3702 | 15.4076 |
| 0.5202 | 7.0 | 1239 | 0.5881 | 0.3095 | 16.2325 |
| 0.5202 | 8.0 | 1416 | 0.5880 | 0.2826 | 15.4045 |
| 0.5097 | 9.0 | 1593 | 0.5885 | 0.3264 | 15.9236 |
| 0.5097 | 10.0 | 1770 | 0.5886 | 0.3028 | 14.6592 |
| 0.5097 | 11.0 | 1947 | 0.5872 | 0.2958 | 15.0064 |
| 0.4995 | 12.0 | 2124 | 0.5882 | 0.22 | 14.8631 |
| 0.4995 | 13.0 | 2301 | 0.5876 | 0.2049 | 16.3726 |
| 0.4995 | 14.0 | 2478 | 0.5878 | 0.2635 | 15.8599 |
| 0.4921 | 15.0 | 2655 | 0.5892 | 0.3041 | 15.8248 |
| 0.4921 | 16.0 | 2832 | 0.5881 | 0.2746 | 15.8662 |
| 0.4838 | 17.0 | 3009 | 0.5909 | 0.2713 | 15.8121 |
| 0.4838 | 18.0 | 3186 | 0.5902 | 0.2587 | 15.4076 |
| 0.4838 | 19.0 | 3363 | 0.5917 | 0.3038 | 15.4331 |
| 0.4788 | 20.0 | 3540 | 0.5931 | 0.2837 | 15.3599 |
| 0.4788 | 21.0 | 3717 | 0.5930 | 0.306 | 15.0955 |
| 0.4788 | 22.0 | 3894 | 0.5940 | 0.2973 | 15.7229 |
| 0.4666 | 23.0 | 4071 | 0.5937 | 0.2458 | 16.0096 |
| 0.4666 | 24.0 | 4248 | 0.5936 | 0.2781 | 15.7389 |
| 0.4666 | 25.0 | 4425 | 0.5939 | 0.2998 | 15.5796 |
| 0.4643 | 26.0 | 4602 | 0.5943 | 0.2782 | 15.7293 |
| 0.4643 | 27.0 | 4779 | 0.5942 | 0.2685 | 15.5255 |
| 0.4643 | 28.0 | 4956 | 0.5948 | 0.2687 | 15.6815 |
| 0.4609 | 29.0 | 5133 | 0.5947 | 0.2658 | 15.9363 |
| 0.4609 | 30.0 | 5310 | 0.5949 | 0.2654 | 15.9299 |
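
The Bleu and Gen Len columns follow the usual `Seq2SeqTrainer` pattern of a `compute_metrics` callback. A hedged reconstruction with the `evaluate` library is sketched below; it reuses the `tokenizer` from the inference sketch above, and the choice of the `bleu` metric (0 to 1 scale) over `sacrebleu` (0 to 100 scale) is inferred from the magnitude of the reported scores, not confirmed by the training script.

```python
import numpy as np
import evaluate

bleu = evaluate.load("bleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    if isinstance(preds, tuple):
        preds = preds[0]
    # Replace label padding (-100) so the ids can be decoded.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = bleu.compute(
        predictions=decoded_preds,
        references=[[label] for label in decoded_labels],
    )
    # "Gen Len": mean number of non-padding tokens in the generated outputs.
    gen_len = np.mean(
        [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    )
    return {"bleu": result["bleu"], "gen_len": gen_len}
```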
### Framework versions
- Transformers 4.42.3
- PyTorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1