---
license: apache-2.0
base_model: t5-small
tags:
  - generated_from_trainer
metrics:
  - bleu
model-index:
  - name: ft-t5-small-on-info-lg
    results: []
---


# ft-t5-small-on-info-lg

This model is a fine-tuned version of t5-small on the opus100 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5870
  • Bleu: 0.3242
  • Gen Len: 15.9841
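
A minimal usage sketch follows, assuming the checkpoint is published under the hypothetical repo id `MubarakB/ft-t5-small-on-info-lg` and is used for translation in line with its opus100 training data; the task prefix and language pair are illustrative guesses, since the card does not document them:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical repo id; substitute the actual checkpoint path if it differs.
model_id = "MubarakB/ft-t5-small-on-info-lg"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 expects a task prefix; the exact prefix/language pair used in training
# is not documented in this card, so this is only an illustrative guess.
text = "translate English to French: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```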

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
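
As a rough sketch, these values might map onto Hugging Face `Seq2SeqTrainingArguments` as shown below; the output directory, evaluation cadence, and `predict_with_generate` flag are assumptions, not taken from this card:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: output_dir, eval_strategy, and predict_with_generate are
# assumptions; the remaining hyperparameters mirror the list above.
training_args = Seq2SeqTrainingArguments(
    output_dir="results",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    eval_strategy="epoch",
    predict_with_generate=True,
)
```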

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log        | 1.0   | 177  | 0.6138          | 0.2725 | 15.5828 |
| No log        | 2.0   | 354  | 0.6061          | 0.2603 | 16.3376 |
| 0.6269        | 3.0   | 531  | 0.6008          | 0.2719 | 15.2102 |
| 0.6269        | 4.0   | 708  | 0.5975          | 0.2875 | 16.6847 |
| 0.6269        | 5.0   | 885  | 0.5946          | 0.2719 | 15.4013 |
| 0.598         | 6.0   | 1062 | 0.5927          | 0.2497 | 15.9427 |
| 0.598         | 7.0   | 1239 | 0.5908          | 0.2555 | 16.2675 |
| 0.598         | 8.0   | 1416 | 0.5899          | 0.2953 | 16.9936 |
| 0.5825        | 9.0   | 1593 | 0.5889          | 0.3467 | 17.2134 |
| 0.5825        | 10.0  | 1770 | 0.5881          | 0.3013 | 16.1242 |
| 0.5825        | 11.0  | 1947 | 0.5873          | 0.3261 | 15.551  |
| 0.5695        | 12.0  | 2124 | 0.5871          | 0.2874 | 15.3854 |
| 0.5695        | 13.0  | 2301 | 0.5868          | 0.2987 | 15.5446 |
| 0.5695        | 14.0  | 2478 | 0.5869          | 0.3124 | 15.9013 |
| 0.5618        | 15.0  | 2655 | 0.5870          | 0.3242 | 15.9841 |
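
The Bleu and Gen Len columns are typical of a `compute_metrics` function passed to the trainer in a `generated_from_trainer` setup; the sketch below illustrates that pattern. The metric wiring (the `evaluate` BLEU metric on a 0-1 scale and the generation-length calculation) is an assumption, not something documented in this card:

```python
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
bleu_metric = evaluate.load("bleu")

def compute_metrics(eval_preds):
    """Return BLEU (0-1 scale, matching the table above) and mean generated length."""
    preds, labels = eval_preds
    # Labels use -100 for padding; swap it back before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    result = bleu_metric.compute(
        predictions=[p.strip() for p in decoded_preds],
        references=[[l.strip()] for l in decoded_labels],
    )
    # Mean number of non-padding tokens in the generated sequences.
    gen_len = np.mean(
        [np.count_nonzero(np.array(p) != tokenizer.pad_token_id) for p in preds]
    )
    return {"bleu": result["bleu"], "gen_len": round(float(gen_len), 4)}
```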

### Framework versions

  • Transformers 4.42.3
  • Pytorch 2.1.2
  • Datasets 2.20.0
  • Tokenizers 0.19.1