eglkan1 committed on
Commit bc0172e
1 Parent(s): 0b52481

End of training

Files changed (1)
  1. README.md +17 -17
README.md CHANGED
@@ -18,12 +18,12 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.0962
- - Rouge1: 0.76
- - Rouge2: 0.6246
- - Rougel: 0.7508
- - Sacrebleu: 53.9078
- - Gen Len: 32.9976
+ - Loss: 0.0810
+ - Rouge1: 0.8026
+ - Rouge2: 0.6711
+ - Rougel: 0.796
+ - Sacrebleu: 55.7975
+ - Gen Len: 33.3938
 
 ## Model description
 
@@ -43,11 +43,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
- - train_batch_size: 2
- - eval_batch_size: 2
+ - train_batch_size: 4
+ - eval_batch_size: 4
 - seed: 42
 - gradient_accumulation_steps: 2
- - total_train_batch_size: 4
+ - total_train_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
@@ -57,14 +57,14 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Sacrebleu | Gen Len |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
- | 0.0639 | 1.0 | 418 | 0.0779 | 0.7012 | 0.5432 | 0.6904 | 43.0798 | 32.9976 |
- | 0.0653 | 2.0 | 836 | 0.0732 | 0.7197 | 0.5593 | 0.7091 | 44.8483 | 32.9976 |
- | 0.0327 | 3.0 | 1254 | 0.0726 | 0.7319 | 0.5787 | 0.7206 | 47.842 | 32.9976 |
- | 0.0168 | 4.0 | 1672 | 0.0782 | 0.7466 | 0.6031 | 0.7371 | 50.9225 | 32.9976 |
- | 0.013 | 5.0 | 2090 | 0.0804 | 0.7507 | 0.6077 | 0.7409 | 51.8293 | 32.9976 |
- | 0.0032 | 6.0 | 2508 | 0.0846 | 0.7606 | 0.6237 | 0.7507 | 53.5224 | 32.9976 |
- | 0.0012 | 7.0 | 2926 | 0.0911 | 0.7597 | 0.6263 | 0.751 | 54.0182 | 32.9976 |
- | 0.0012 | 8.0 | 3344 | 0.0962 | 0.76 | 0.6246 | 0.7508 | 53.9078 | 32.9976 |
+ | 0.0978 | 1.0 | 209 | 0.0948 | 0.6734 | 0.5117 | 0.664 | 43.1877 | 33.3938 |
+ | 0.0617 | 2.0 | 418 | 0.0598 | 0.7702 | 0.6222 | 0.7609 | 49.5794 | 33.3938 |
+ | 1.1021 | 3.0 | 627 | 0.8158 | 0.0161 | 0.0 | 0.0162 | 0.0009 | 34.3938 |
+ | 0.0471 | 4.0 | 836 | 0.0822 | 0.6874 | 0.5335 | 0.6779 | 44.9573 | 33.3938 |
+ | 0.0276 | 5.0 | 1045 | 0.0664 | 0.7767 | 0.6339 | 0.7686 | 52.2135 | 33.3938 |
+ | 0.0162 | 6.0 | 1254 | 0.0756 | 0.7856 | 0.6452 | 0.7796 | 50.4352 | 33.3938 |
+ | 0.0069 | 7.0 | 1463 | 0.0796 | 0.7939 | 0.6586 | 0.7877 | 52.9489 | 33.3938 |
+ | 0.0051 | 8.0 | 1672 | 0.0810 | 0.8026 | 0.6711 | 0.796 | 55.7975 | 33.3938 |
 
 
 ### Framework versions
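For anyone reproducing the run, a minimal sketch of the corresponding `Seq2SeqTrainingArguments` is given below. Only the hyperparameters listed in the card are grounded; the output directory, evaluation strategy, and `predict_with_generate` flag are assumptions added for completeness.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the updated hyperparameters in the card.
# output_dir, evaluation_strategy, and predict_with_generate are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-large-50-finetuned",  # hypothetical directory name
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,          # effective train batch size of 8
    num_train_epochs=8,                     # the results table reports 8 epochs
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    evaluation_strategy="epoch",            # assumed; validation is logged once per epoch
    predict_with_generate=True,             # assumed; needed for ROUGE/SacreBLEU at eval time
)
```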
 
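The ROUGE, SacreBLEU, and generation-length figures in the table are standard text-generation metrics; the sketch below shows one way to compute them with the `evaluate` library from decoded prediction and reference strings. How the card actually computed them is not stated, and the gen-len value here counts whitespace tokens rather than generated token ids.

```python
import evaluate
import numpy as np

rouge = evaluate.load("rouge")
sacrebleu = evaluate.load("sacrebleu")

def compute_text_metrics(predictions, references):
    """ROUGE, SacreBLEU, and mean generation length for decoded strings (sketch)."""
    rouge_scores = rouge.compute(predictions=predictions, references=references)
    bleu = sacrebleu.compute(
        predictions=predictions,
        references=[[ref] for ref in references],  # SacreBLEU expects a list of reference lists
    )
    gen_len = float(np.mean([len(pred.split()) for pred in predictions]))  # rough proxy for Gen Len
    return {
        "rouge1": rouge_scores["rouge1"],
        "rouge2": rouge_scores["rouge2"],
        "rougeL": rouge_scores["rougeL"],
        "sacrebleu": bleu["score"],
        "gen_len": gen_len,
    }
```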
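As a usage note, the checkpoint can be loaded with the standard mBART-50 classes from `transformers`. The repository id below is a placeholder because the card does not name it, and, as with the base facebook/mbart-large-50 model, `tokenizer.src_lang` and `forced_bos_token_id` may need to be set depending on the task.

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

# Placeholder repository id; substitute the actual id of this fine-tuned checkpoint.
model_id = "eglkan1/<model-name>"

tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("Example source sentence.", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```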