---
license: mit
base_model: seddiktrk/xlm-roberta-base-finetuned-panx-39-langs
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-39-langs
  results: []
---

# xlm-roberta-base-finetuned-panx-39-langs

This model is a fine-tuned version of [seddiktrk/xlm-roberta-base-finetuned-panx-39-langs](https://huggingface.co/seddiktrk/xlm-roberta-base-finetuned-panx-39-langs) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1650
- F1: 0.8856

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1912        | 0.12  | 6303  | 0.2009          | 0.8458 |
| 0.1901        | 0.24  | 12606 | 0.1971          | 0.8573 |
| 0.1821        | 0.36  | 18909 | 0.1946          | 0.8612 |
| 0.175         | 0.48  | 25212 | 0.1813          | 0.8671 |
| 0.1688        | 0.6   | 31515 | 0.1770          | 0.8729 |
| 0.1637        | 0.72  | 37818 | 0.1674          | 0.8781 |
| 0.1536        | 0.84  | 44121 | 0.1665          | 0.8822 |
| 0.146         | 0.96  | 50424 | 0.1650          | 0.8856 |

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
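The F1 values reported above are, in the usual PAN-X token-classification setup, entity-level micro-averaged F1 scores (as computed by `seqeval`), not per-token accuracy. A minimal pure-Python sketch of that metric, assuming strict exact-span matching over BIO tags; the helper names `bio_spans` and `entity_f1` are illustrative and not part of this repository:

```python
def bio_spans(tags):
    """Extract (entity_type, start, end) spans from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # trailing "O" flushes the last open span
        inside = start is not None and tag.startswith("I-") and tag[2:] == etype
        if start is not None and not inside:
            spans.append((etype, start, i))
            start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    # Note: stray "I-" tags with no preceding "B-" are ignored here;
    # seqeval's default mode is more lenient about such sequences.
    return spans


def entity_f1(true_seqs, pred_seqs):
    """Micro-averaged F1 over exact entity-span matches (seqeval-style)."""
    tp = fp = fn = 0
    for true_tags, pred_tags in zip(true_seqs, pred_seqs):
        ts, ps = set(bio_spans(true_tags)), set(bio_spans(pred_tags))
        tp += len(ts & ps)
        fp += len(ps - ts)
        fn += len(ts - ps)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


# One PER span matched, one LOC span missed: precision 1.0, recall 0.5, F1 ~ 0.667
print(entity_f1([["B-PER", "I-PER", "O", "B-LOC"]],
                [["B-PER", "I-PER", "O", "O"]]))
```

A prediction only counts as a true positive when both the entity type and the exact span boundaries match, which is why entity-level F1 is stricter than token-level accuracy.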