ptt5-wikilingua-gptextsum

This model is a fine-tuned version of arthurmluz/ptt5-wikilingua-30epochs on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):

  • Loss: 2.1738
  • Rouge1: 0.1741
  • Rouge2: 0.0918
  • RougeL: 0.1451
  • RougeLsum: 0.1624
  • Gen Len: 19.0
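
Since the base model is T5-derived (PTT5, a Portuguese T5), the checkpoint can be loaded with the standard seq2seq classes. This is a minimal sketch, not a confirmed recipe from the card: the example text, the beam settings, and the generation cap (chosen because the reported Gen Len of 19.0 suggests generation near the default 20-token limit) are all assumptions.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "arthurmluz/ptt5-wikilingua-gptextsum"

# Load the fine-tuned checkpoint from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical Portuguese input text.
text = (
    "O Brasil é o maior país da América do Sul, conhecido "
    "por sua diversidade cultural e natural."
)
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

# max_new_tokens=20 and num_beams=4 are illustrative assumptions;
# the card only reports Gen Len: 19.0.
summary_ids = model.generate(**inputs, max_new_tokens=20, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```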

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
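
A minimal sketch of how these settings map onto `Seq2SeqTrainingArguments`. The `output_dir` name, the per-epoch evaluation strategy, and `predict_with_generate` are assumptions inferred from the per-epoch results table below, not stated in the card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="ptt5-wikilingua-gptextsum",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumed: the table shows one eval per epoch
    predict_with_generate=True,   # assumed: needed to compute ROUGE during eval
)
```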

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 70 | 2.3792 | 0.1569 | 0.0643 | 0.1218 | 0.1443 | 19.0 |
| No log | 2.0 | 140 | 2.2867 | 0.1786 | 0.0801 | 0.1329 | 0.162 | 19.0 |
| 2.4042 | 3.0 | 210 | 2.2380 | 0.1653 | 0.0682 | 0.1244 | 0.152 | 19.0 |
| 2.4042 | 4.0 | 280 | 2.2055 | 0.1708 | 0.0744 | 0.1302 | 0.1575 | 19.0 |
| 2.4042 | 5.0 | 350 | 2.1882 | 0.173 | 0.0804 | 0.139 | 0.1609 | 19.0 |
| 2.0557 | 6.0 | 420 | 2.1724 | 0.1779 | 0.0846 | 0.1385 | 0.1636 | 19.0 |
| 2.0557 | 7.0 | 490 | 2.1614 | 0.175 | 0.0841 | 0.1359 | 0.1634 | 19.0 |
| 2.0557 | 8.0 | 560 | 2.1537 | 0.1729 | 0.0823 | 0.1348 | 0.1596 | 19.0 |
| 1.866 | 9.0 | 630 | 2.1512 | 0.1743 | 0.0863 | 0.1387 | 0.1637 | 19.0 |
| 1.866 | 10.0 | 700 | 2.1464 | 0.1735 | 0.0866 | 0.1382 | 0.1608 | 19.0 |
| 1.866 | 11.0 | 770 | 2.1398 | 0.1748 | 0.0876 | 0.1383 | 0.1616 | 19.0 |
| 1.7225 | 12.0 | 840 | 2.1424 | 0.1752 | 0.0881 | 0.1426 | 0.1611 | 19.0 |
| 1.7225 | 13.0 | 910 | 2.1431 | 0.173 | 0.0865 | 0.1414 | 0.1592 | 19.0 |
| 1.7225 | 14.0 | 980 | 2.1481 | 0.1713 | 0.0865 | 0.1413 | 0.1585 | 19.0 |
| 1.606 | 15.0 | 1050 | 2.1430 | 0.1694 | 0.0834 | 0.1384 | 0.1557 | 19.0 |
| 1.606 | 16.0 | 1120 | 2.1501 | 0.1676 | 0.0841 | 0.1365 | 0.1538 | 19.0 |
| 1.606 | 17.0 | 1190 | 2.1526 | 0.1698 | 0.0866 | 0.1398 | 0.1561 | 19.0 |
| 1.5257 | 18.0 | 1260 | 2.1536 | 0.1749 | 0.0925 | 0.1441 | 0.1607 | 19.0 |
| 1.5257 | 19.0 | 1330 | 2.1564 | 0.1756 | 0.0945 | 0.1451 | 0.162 | 19.0 |
| 1.4673 | 20.0 | 1400 | 2.1594 | 0.1758 | 0.0949 | 0.1451 | 0.1616 | 19.0 |
| 1.4673 | 21.0 | 1470 | 2.1623 | 0.1762 | 0.0943 | 0.1466 | 0.1624 | 19.0 |
| 1.4673 | 22.0 | 1540 | 2.1644 | 0.1762 | 0.0943 | 0.147 | 0.1627 | 19.0 |
| 1.4108 | 23.0 | 1610 | 2.1672 | 0.1768 | 0.0955 | 0.1473 | 0.1635 | 19.0 |
| 1.4108 | 24.0 | 1680 | 2.1689 | 0.1751 | 0.0933 | 0.1459 | 0.1633 | 19.0 |
| 1.4108 | 25.0 | 1750 | 2.1694 | 0.1751 | 0.0933 | 0.1459 | 0.1633 | 19.0 |
| 1.384 | 26.0 | 1820 | 2.1701 | 0.1751 | 0.0933 | 0.1459 | 0.1633 | 19.0 |
| 1.384 | 27.0 | 1890 | 2.1723 | 0.1747 | 0.0937 | 0.1449 | 0.1626 | 19.0 |
| 1.384 | 28.0 | 1960 | 2.1737 | 0.1743 | 0.0925 | 0.1445 | 0.1623 | 19.0 |
| 1.3507 | 29.0 | 2030 | 2.1738 | 0.1741 | 0.0918 | 0.1451 | 0.1624 | 19.0 |
| 1.3507 | 30.0 | 2100 | 2.1738 | 0.1741 | 0.0918 | 0.1451 | 0.1624 | 19.0 |
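
The ROUGE columns above can be recomputed with the `evaluate` library. This is a minimal sketch assuming plain-text predictions and references; it is not the exact evaluation script behind this card, and the example strings are hypothetical.

```python
import evaluate  # the "rouge" metric also requires the rouge_score package

rouge = evaluate.load("rouge")

# Hypothetical decoded model outputs and gold summaries.
predictions = ["o brasil é o maior país da américa do sul"]
references = ["o brasil é o maior país da américa do sul e o quinto do mundo"]

# Returns rouge1 / rouge2 / rougeL / rougeLsum as floats in [0, 1],
# the same scale as the metrics reported in the table above.
scores = rouge.compute(predictions=predictions, references=references)
print(scores)
```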

Framework versions

  • Transformers 4.34.0
  • PyTorch 2.0.1+cu117
  • Datasets 2.14.5
  • Tokenizers 0.14.1