---
license: llama3
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: Llama3_ALL_BCE_translations_19_shuffled_special_tokens
  results: []
---

# Llama3_ALL_BCE_translations_19_shuffled_special_tokens

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4776
- F1 Macro 0.1: 0.0818
- F1 Macro 0.15: 0.0922
- F1 Macro 0.2: 0.1027
- F1 Macro 0.25: 0.1130
- F1 Macro 0.3: 0.1230
- F1 Macro 0.35: 0.1336
- F1 Macro 0.4: 0.1440
- F1 Macro 0.45: 0.1551
- F1 Macro 0.5: 0.1663
- F1 Macro 0.55: 0.1778
- F1 Macro 0.6: 0.1879
- F1 Macro 0.65: 0.1987
- F1 Macro 0.7: 0.2090
- F1 Macro 0.75: 0.2178
- F1 Macro 0.8: 0.2211
- F1 Macro 0.85: 0.2205
- F1 Macro 0.9: 0.2010
- F1 Macro 0.95: 0.1457
- Threshold 0: 0.65
- Threshold 1: 0.75
- Threshold 2: 0.7
- Threshold 3: 0.85
- Threshold 4: 0.8
- Threshold 5: 0.85
- Threshold 6: 0.8
- Threshold 7: 0.8
- Threshold 8: 0.85
- Threshold 9: 0.75
- Threshold 10: 0.85
- Threshold 11: 0.8
- Threshold 12: 0.85
- Threshold 13: 0.95
- Threshold 14: 0.85
- Threshold 15: 0.75
- Threshold 16: 0.85
- Threshold 17: 0.8
- Threshold 18: 0.9
- 0: 0.0619
- 1: 0.1388
- 2: 0.1978
- 3: 0.1328
- 4: 0.2961
- 5: 0.3489
- 6: 0.3179
- 7: 0.1268
- 8: 0.2043
- 9: 0.3668
- 10: 0.3216
- 11: 0.3669
- 12: 0.1276
- 13: 0.1205
- 14: 0.2264
- 15: 0.1576
- 16: 0.3078
- 17: 0.3722
- 18: 0.125
- Max F1: 0.2211
- Mean F1: 0.2273

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Macro 0.1 | F1 Macro 0.15 | F1 Macro 0.2 | F1 Macro 0.25 | F1 Macro 0.3 | F1 Macro 0.35 | F1 Macro 0.4 | F1 Macro 0.45 | F1 Macro 0.5 | F1 Macro 0.55 | F1 Macro 0.6 | F1 Macro 0.65 | F1 Macro 0.7 | F1 Macro 0.75 | F1 Macro 0.8 | F1 Macro 0.85 | F1 Macro 0.9 | F1 Macro 0.95 | Threshold 0 | Threshold 1 | Threshold 2 | Threshold 3 | Threshold 4 | Threshold 5 | Threshold 6 | Threshold 7 | Threshold 8 | Threshold 9 | Threshold 10 | Threshold 11 | Threshold 12 | Threshold 13 | Threshold 14 | Threshold 15 | Threshold 16 | Threshold 17 | Threshold 18 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | Max F1 | Mean F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:-------:|
| 3.3824 | 1.0 | 5595 | 4.3847 | 0.0700 | 0.0761 | 0.0818 | 0.0877 | 0.0936 | 0.1000 | 0.1064 | 0.1134 | 0.1196 | 0.1265 | 0.1327 | 0.1381 | 0.1432 | 0.1483 | 0.1465 | 0.1417 | 0.1291 | 0.0836 | 0.65 | 0.9 | 0.85 | 0.9 | 0.75 | 0.6 | 0.8 | 0.75 | 0.9 | 0.9 | 0.9 | 0.85 | 0.9 | 0.0 | 0.85 | 0.75 | 0.6 | 0.6 | 0.9 | 0.0649 | 0.0879 | 0.1603 | 0.0899 | 0.2589 | 0.2876 | 0.2683 | 0.1036 | 0.1245 | 0.2856 | 0.2387 | 0.3033 | 0.0726 | 0.0 | 0.1779 | 0.1109 | 0.2192 | 0.2743 | 0.0641 | 0.1483 | 0.1680 |
| 2.4859 | 2.0 | 11190 | 1.7537 | 0.0881 | 0.0994 | 0.1111 | 0.1210 | 0.1310 | 0.1401 | 0.1472 | 0.1541 | 0.1607 | 0.1676 | 0.1697 | 0.1731 | 0.1768 | 0.1761 | 0.1713 | 0.1575 | 0.1365 | 0.0927 | 0.55 | 0.7 | 0.85 | 0.8 | 0.4 | 0.35 | 0.95 | 0.75 | 0.7 | 0.85 | 0.8 | 0.65 | 0.8 | 0.95 | 0.8 | 0.7 | 0.85 | 0.6 | 0.75 | 0.0534 | 0.1241 | 0.1924 | 0.1020 | 0.2738 | 0.3163 | 0.3072 | 0.1109 | 0.1793 | 0.3414 | 0.2889 | 0.3332 | 0.0831 | 0.0870 | 0.2137 | 0.1305 | 0.2881 | 0.3396 | 0.1254 | 0.1768 | 0.2048 |
| 1.7561 | 3.0 | 16785 | 1.4633 | 0.0840 | 0.0954 | 0.1062 | 0.1164 | 0.1271 | 0.1382 | 0.1485 | 0.1597 | 0.1713 | 0.1809 | 0.1895 | 0.1976 | 0.2056 | 0.2113 | 0.2115 | 0.1995 | 0.1805 | 0.1184 | 0.6 | 0.75 | 0.75 | 0.95 | 0.8 | 0.7 | 0.9 | 0.8 | 0.8 | 0.7 | 0.8 | 0.8 | 0.9 | 0.95 | 0.75 | 0.8 | 0.7 | 0.7 | 0.8 | 0.0581 | 0.1395 | 0.1946 | 0.1235 | 0.2818 | 0.3391 | 0.3151 | 0.1202 | 0.1997 | 0.3656 | 0.3056 | 0.3630 | 0.1340 | 0.1087 | 0.2272 | 0.1482 | 0.2953 | 0.3589 | 0.1233 | 0.2115 | 0.2211 |
| 1.2709 | 4.0 | 22380 | 1.4776 | 0.0818 | 0.0922 | 0.1027 | 0.1130 | 0.1230 | 0.1336 | 0.1440 | 0.1551 | 0.1663 | 0.1778 | 0.1879 | 0.1987 | 0.2090 | 0.2178 | 0.2211 | 0.2205 | 0.2010 | 0.1457 | 0.65 | 0.75 | 0.7 | 0.85 | 0.8 | 0.85 | 0.8 | 0.8 | 0.85 | 0.75 | 0.85 | 0.8 | 0.85 | 0.95 | 0.85 | 0.75 | 0.85 | 0.8 | 0.9 | 0.0619 | 0.1388 | 0.1978 | 0.1328 | 0.2961 | 0.3489 | 0.3179 | 0.1268 | 0.2043 | 0.3668 | 0.3216 | 0.3669 | 0.1276 | 0.1205 | 0.2264 | 0.1576 | 0.3078 | 0.3722 | 0.125 | 0.2211 | 0.2273 |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
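The metrics above tune a separate decision threshold per label (Threshold 0–18) for a 19-label, BCE-trained classifier. A minimal sketch of how such per-label thresholds would be applied at inference time — the threshold values are copied from the final epoch above, but the logits and the `predict_labels` helper are illustrative, not part of this model's released code:

```python
import math

# Per-label decision thresholds, copied from the "Threshold 0"–"Threshold 18"
# entries reported for the final epoch above.
THRESHOLDS = [0.65, 0.75, 0.7, 0.85, 0.8, 0.85, 0.8, 0.8, 0.85, 0.75,
              0.85, 0.8, 0.85, 0.95, 0.85, 0.75, 0.85, 0.8, 0.9]

def sigmoid(x: float) -> float:
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits: list[float]) -> list[int]:
    """Standard multi-label decision rule for a BCE-trained head:
    emit 1 for each label whose sigmoid probability meets its tuned
    threshold, else 0."""
    assert len(logits) == len(THRESHOLDS)
    return [int(sigmoid(z) >= t) for z, t in zip(logits, THRESHOLDS)]

# Hypothetical logits for one example (not actual model output):
# only label 0 fires, since sigmoid(2.0) ≈ 0.88 >= 0.65.
example_logits = [2.0] + [-1.0] * 18
print(predict_labels(example_logits))
```

Using one tuned threshold per label (rather than a single global cutoff) is what lets the per-label F1 scores listed above be optimized independently.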