
araT5-Base

This model is a fine-tuned version of UBC-NLP/AraT5v2-base-1024 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3080
  • Bleu: 19.9507
  • Rouge: 0.6204
  • Gen Len: 14.3392

Model description

More information needed

Intended uses & limitations

More information needed
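
The card does not document the downstream task, but the base model (UBC-NLP/AraT5v2-base-1024) is an Arabic text-to-text (seq2seq) model, so the checkpoint can be loaded with the standard `transformers` seq2seq classes. A minimal inference sketch follows; the example input and generation settings are illustrative assumptions, not from the card.

```python
# A minimal inference sketch, assuming standard seq2seq usage of this
# checkpoint; the prompt and generation settings are illustrative only.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "yasmineee/araT5-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "نص عربي للتجربة"  # placeholder Arabic input for the fine-tuning task
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
# Eval "Gen Len" is ~14 tokens, so a modest max_new_tokens is reasonable.
outputs = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```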

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them to code follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
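
For reference, here is a hedged sketch of how these settings map onto `transformers` `Seq2SeqTrainingArguments`. The `output_dir` value and the `predict_with_generate` flag are assumptions, since the card does not include the actual training script.

```python
# A sketch of Seq2SeqTrainingArguments matching the hyperparameters above;
# output_dir is assumed, and predict_with_generate is inferred from the
# Bleu/Rouge/Gen Len columns reported during evaluation.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="araT5-Base",        # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    predict_with_generate=True,     # needed to compute generation metrics
)
```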

Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Rouge  | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|
| 2.7135        | 1.0   | 7500  | 1.6843          | 15.9171 | 0.5533 | 14.33   |
| 1.6024        | 2.0   | 15000 | 1.4055          | 18.3573 | 0.5965 | 14.27   |
| 1.1542        | 3.0   | 22500 | 1.3082          | 19.3343 | 0.6112 | 14.3792 |
| 0.8608        | 4.0   | 30000 | 1.3080          | 19.9507 | 0.6204 | 14.3392 |
| 0.6687        | 5.0   | 37500 | 1.3430          | 20.2683 | 0.6234 | 14.3436 |
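
The card does not say how Bleu and Rouge were computed; a common setup, and an assumption here, is the `evaluate` library with `sacrebleu` and `rouge`, as sketched below on placeholder data.

```python
# A hedged sketch of computing Bleu/Rouge scores like those in the table
# with the `evaluate` library; the exact metric implementations used for
# this card are not documented, so treat this as an assumption.
import evaluate

bleu = evaluate.load("sacrebleu")
rouge = evaluate.load("rouge")

predictions = ["..."]  # decoded model outputs (placeholder)
references = ["..."]   # gold target sentences (placeholder)

bleu_score = bleu.compute(predictions=predictions,
                          references=[[r] for r in references])
rouge_score = rouge.compute(predictions=predictions, references=references)
print(bleu_score["score"], rouge_score["rougeL"])
```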

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1