# mt5-summarize-full
This model is a fine-tuned version of lunarlist/mt5-summarize on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.8640
- Rouge1: 0.3352
- Rouge2: 0.1212
- RougeL: 0.2748
- RougeLsum: 0.4747
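
A minimal inference sketch using the `transformers` pipeline API. The repo id below is an assumption based on the model name, and the input text is a placeholder:

```python
from transformers import pipeline

# Hypothetical Hub repo id for this checkpoint; substitute the actual path.
summarizer = pipeline("summarization", model="lunarlist/mt5-summarize-full")

text = "Replace this with the document you want to summarize."
result = summarizer(text, max_length=128, min_length=16, do_sample=False)
print(result[0]["summary_text"])
```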
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 90
- num_epochs: 10
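
These settings map directly onto `transformers` training arguments. A hedged sketch; `output_dir` is a placeholder, and the evaluation/save strategy is not recorded on this card:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-summarize-full",  # placeholder output directory
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=16,  # effective train batch size: 2 * 16 = 32
    lr_scheduler_type="linear",
    warmup_steps=90,
    num_train_epochs=10,
)
# The default AdamW optimizer already uses betas=(0.9, 0.999) and eps=1e-8,
# matching the optimizer settings listed above.
```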
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 4.0732        | 1.0667 | 100  | 3.1187          | 0.3331 | 0.1146 | 0.2648 | 0.5137    |
| 3.6546        | 2.1333 | 200  | 2.9872          | 0.3410 | 0.1256 | 0.2894 | 0.4943    |
| 3.3308        | 3.2    | 300  | 2.9373          | 0.3430 | 0.1278 | 0.2881 | 0.4743    |
| 3.276         | 4.2667 | 400  | 2.8782          | 0.3355 | 0.1163 | 0.2793 | 0.4801    |
| 3.1345        | 5.3333 | 500  | 2.9083          | 0.3354 | 0.1216 | 0.2835 | 0.4758    |
| 3.0736        | 6.4    | 600  | 2.8588          | 0.3531 | 0.1353 | 0.2900 | 0.4991    |
| 3.0168        | 7.4667 | 700  | 2.8592          | 0.3436 | 0.1229 | 0.2893 | 0.4863    |
| 2.969         | 8.5333 | 800  | 2.8739          | 0.3528 | 0.1297 | 0.2863 | 0.4968    |
| 2.9677        | 9.6    | 900  | 2.8640          | 0.3352 | 0.1212 | 0.2748 | 0.4747    |
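
For reference, ROUGE figures like those in the table are typically computed with the `evaluate` library. A minimal sketch with placeholder texts:

```python
import evaluate

# Requires the rouge_score package (pip install rouge-score).
rouge = evaluate.load("rouge")

predictions = ["the cat sat on the mat"]       # placeholder model outputs
references = ["a cat was sitting on the mat"]  # placeholder reference summaries

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum
```

By default `rouge.compute` returns aggregated F-measure scores, matching the Rouge1/Rouge2/RougeL/RougeLsum columns above.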
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1