---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- snli
metrics:
- rouge
model-index:
- name: t5-small-finetuned-contradiction
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: snli
      type: snli
      args: plain_text
    metrics:
    - name: Rouge1
      type: rouge
      value: 34.4237
---
# t5-small-finetuned-contradiction
This model is a fine-tuned version of [domenicrosati/t5-small-finetuned-contradiction](https://huggingface.co/domenicrosati/t5-small-finetuned-contradiction) on the snli dataset. It achieves the following results on the evaluation set (a minimal loading sketch follows these results):
- Loss: 2.0458
- Rouge1: 34.4237
- Rouge2: 14.5442
- Rougel: 32.5483
- Rougelsum: 32.5785
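
As a quick reference, below is a minimal sketch of loading the checkpoint with the Transformers library. The prompt format (a plain SNLI-style premise) and the generation settings are assumptions, since the card does not yet document the intended input/output format.

```python
# Minimal loading/inference sketch. The prompt format and generation settings
# are assumptions; the card does not document the expected input format.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "domenicrosati/t5-small-finetuned-contradiction"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

premise = "A man inspects the uniform of a figure in some East Asian country."
inputs = tokenizer(premise, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```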
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5.6e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
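
Below is a minimal sketch of how these values could map onto `Seq2SeqTrainingArguments`; the `output_dir` and any arguments not listed above are assumptions, and the model/dataset wiring for the `Seq2SeqTrainer` is omitted.

```python
# Sketch only: maps the hyperparameters above onto Seq2SeqTrainingArguments.
# output_dir and any arguments not listed in the card are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-contradiction",  # assumed
    learning_rate=5.6e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    fp16=True,  # mixed_precision_training: Native AMP
)
```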
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.8605        | 1.0   | 2863  | 2.0813          | 34.4597 | 14.5186 | 32.6909 | 32.7097   |
| 1.9209        | 2.0   | 5726  | 2.0721          | 34.3859 | 14.5733 | 32.5188 | 32.5524   |
| 1.9367        | 3.0   | 8589  | 2.0623          | 34.4192 | 14.455  | 32.581  | 32.5962   |
| 1.9539        | 4.0   | 11452 | 2.0565          | 34.5148 | 14.6131 | 32.6786 | 32.7174   |
| 1.9655        | 5.0   | 14315 | 2.0538          | 34.4393 | 14.6439 | 32.6344 | 32.6587   |
| 1.9683        | 6.0   | 17178 | 2.0493          | 34.7199 | 14.7763 | 32.8625 | 32.8782   |
| 1.9735        | 7.0   | 20041 | 2.0476          | 34.5366 | 14.6362 | 32.6939 | 32.7177   |
| 1.98          | 8.0   | 22904 | 2.0458          | 34.5    | 14.5695 | 32.6219 | 32.6478   |
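
The ROUGE values above appear to be on the 0-100 F-measure scale. Below is a sketch of computing scores in the same format with the metric bundled in Datasets 2.1.0 (it requires the `rouge_score` package); the predictions and references are placeholders, not SNLI data.

```python
# Sketch only: computes ROUGE in the same 0-100 F-measure format as the table.
# The strings below are placeholders for model outputs and gold references.
from datasets import load_metric

rouge = load_metric("rouge")
predictions = ["the man is awake"]              # placeholder model outputs
references = ["the man is wide awake outside"]  # placeholder references
result = rouge.compute(predictions=predictions, references=references)
# Each entry is an aggregate; report the mid F-measure scaled to 0-100.
print({key: round(score.mid.fmeasure * 100, 4) for key, score in result.items()})
```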
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
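
To check that a local environment roughly matches these versions, a small sketch (the import names are the standard PyPI distributions, which is an assumption about the setup used here):

```python
# Sketch: print installed versions to compare against those listed above.
import datasets
import tokenizers
import torch
import transformers

for label, module in [
    ("Transformers", transformers),
    ("Pytorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{label}: {module.__version__}")
```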