---
library_name: transformers
license: apache-2.0
base_model: MubarakB/Helsinki_lg_inf_en
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Helsinki_sg_inf_en
  results: []
---

# Helsinki_sg_inf_en

This model is a fine-tuned version of [MubarakB/Helsinki_lg_inf_en](https://huggingface.co/MubarakB/Helsinki_lg_inf_en) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2858
- Bleu: 5.572
- Gen Len: 9.2338

## Model description

More information needed

## Intended uses & limitations

More information needed
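
As a minimal sketch, the model can be loaded like any Transformers seq2seq checkpoint. The repository id `MubarakB/Helsinki_sg_inf_en` is assumed from the card title, and the input sentence is a placeholder; the source language of this checkpoint is not documented here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed repo id, taken from this card's title
checkpoint = "MubarakB/Helsinki_sg_inf_en"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Placeholder source-language input
inputs = tokenizer("...", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```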

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
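
The hyperparameters above can be expressed as a `Seq2SeqTrainingArguments` config fragment. This is a hedged reconstruction, not the exact training script: `output_dir` is assumed from the model name, and any arguments not listed above (e.g. evaluation or logging strategy) are left at their defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameter list in this card; Adam with
# betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default.
args = Seq2SeqTrainingArguments(
    output_dir="Helsinki_sg_inf_en",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```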

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log        | 1.0   | 173  | 0.6201          | 0.0669 | 6.4272  |
| No log        | 2.0   | 346  | 0.5838          | 0.0805 | 5.9299  |
| 0.6387        | 3.0   | 519  | 0.5608          | 0.0808 | 6.1499  |
| 0.6387        | 4.0   | 692  | 0.5425          | 0.1017 | 8.1706  |
| 0.6387        | 5.0   | 865  | 0.5253          | 0.1307 | 8.0857  |
| 0.5575        | 6.0   | 1038 | 0.5073          | 0.1321 | 9.5706  |
| 0.5575        | 7.0   | 1211 | 0.4894          | 0.1799 | 12.0178 |
| 0.5575        | 8.0   | 1384 | 0.4735          | 0.4182 | 14.6461 |
| 0.5174        | 9.0   | 1557 | 0.4570          | 0.6454 | 8.5702  |
| 0.5174        | 10.0  | 1730 | 0.4399          | 0.7703 | 10.7481 |
| 0.5174        | 11.0  | 1903 | 0.4245          | 1.0412 | 10.367  |
| 0.4806        | 12.0  | 2076 | 0.4091          | 0.9907 | 11.4105 |
| 0.4806        | 13.0  | 2249 | 0.3942          | 1.3776 | 10.6073 |
| 0.4806        | 14.0  | 2422 | 0.3828          | 1.682  | 9.6396  |
| 0.4459        | 15.0  | 2595 | 0.3686          | 1.9324 | 10.269  |
| 0.4459        | 16.0  | 2768 | 0.3588          | 2.1982 | 10.6911 |
| 0.4459        | 17.0  | 2941 | 0.3477          | 2.7115 | 10.718  |
| 0.4168        | 18.0  | 3114 | 0.3381          | 3.0707 | 9.9858  |
| 0.4168        | 19.0  | 3287 | 0.3296          | 3.327  | 9.5866  |
| 0.4168        | 20.0  | 3460 | 0.3212          | 3.6662 | 9.3844  |
| 0.3955        | 21.0  | 3633 | 0.3153          | 4.0484 | 9.2152  |
| 0.3955        | 22.0  | 3806 | 0.3087          | 4.5079 | 9.3263  |
| 0.3955        | 23.0  | 3979 | 0.3026          | 4.7791 | 9.8225  |
| 0.3756        | 24.0  | 4152 | 0.2987          | 5.0946 | 9.6704  |
| 0.3756        | 25.0  | 4325 | 0.2950          | 5.1408 | 9.2835  |
| 0.3756        | 26.0  | 4498 | 0.2917          | 5.3052 | 9.277   |
| 0.3633        | 27.0  | 4671 | 0.2891          | 5.4256 | 9.7289  |
| 0.3633        | 28.0  | 4844 | 0.2873          | 5.5373 | 9.5742  |
| 0.3563        | 29.0  | 5017 | 0.2861          | 5.5454 | 10.1187 |
| 0.3563        | 30.0  | 5190 | 0.2858          | 5.572  | 9.2338  |


### Framework versions

- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0