---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: final_V1-distilbert-text-classification-model
  results: []
---

# final_V1-distilbert-text-classification-model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1232
- Accuracy: 0.9743
- F1: 0.8372
- Precision: 0.8341
- Recall: 0.8408

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.7299        | 0.11  | 50   | 1.8504          | 0.2773   | 0.0874 | 0.1995    | 0.1296 |
| 0.7996        | 0.22  | 100  | 0.7195          | 0.8242   | 0.5075 | 0.5040    | 0.5175 |
| 0.3009        | 0.33  | 150  | 0.4573          | 0.8991   | 0.6736 | 0.6652    | 0.6833 |
| 0.2559        | 0.44  | 200  | 0.5667          | 0.8701   | 0.6478 | 0.6293    | 0.6709 |
| 0.1636        | 0.55  | 250  | 0.4599          | 0.9040   | 0.6752 | 0.6662    | 0.6864 |
| 0.1813        | 0.66  | 300  | 0.3651          | 0.9103   | 0.6823 | 0.8187    | 0.6881 |
| 0.1695        | 0.76  | 350  | 0.3603          | 0.9114   | 0.6969 | 0.8090    | 0.7002 |
| 0.128         | 0.87  | 400  | 0.3779          | 0.9191   | 0.7197 | 0.7975    | 0.7165 |
| 0.0976        | 0.98  | 450  | 0.3244          | 0.9095   | 0.7308 | 0.7394    | 0.7275 |
| 0.0867        | 1.09  | 500  | 0.1825          | 0.9617   | 0.8283 | 0.8278    | 0.8294 |
| 0.059         | 1.2   | 550  | 0.1847          | 0.9614   | 0.8258 | 0.8234    | 0.8291 |
| 0.0719        | 1.31  | 600  | 0.1783          | 0.9590   | 0.8273 | 0.8263    | 0.8287 |
| 0.0504        | 1.42  | 650  | 0.1311          | 0.9702   | 0.8325 | 0.8337    | 0.8315 |
| 0.0525        | 1.53  | 700  | 0.1531          | 0.9694   | 0.8325 | 0.8355    | 0.8295 |
| 0.0193        | 1.64  | 750  | 0.1675          | 0.9666   | 0.8311 | 0.8291    | 0.8334 |
| 0.081         | 1.75  | 800  | 0.1531          | 0.9705   | 0.8329 | 0.8306    | 0.8354 |
| 0.0453        | 1.86  | 850  | 0.2261          | 0.9295   | 0.7451 | 0.8081    | 0.7321 |
| 0.0401        | 1.97  | 900  | 0.2015          | 0.9601   | 0.8283 | 0.8263    | 0.8315 |
| 0.0686        | 2.07  | 950  | 0.1674          | 0.9642   | 0.8291 | 0.8255    | 0.8337 |
| 0.0353        | 2.18  | 1000 | 0.1639          | 0.9664   | 0.8303 | 0.8264    | 0.8350 |
| 0.0345        | 2.29  | 1050 | 0.1830          | 0.9639   | 0.8295 | 0.8264    | 0.8335 |
| 0.0212        | 2.4   | 1100 | 0.1978          | 0.9634   | 0.8291 | 0.8258    | 0.8336 |
| 0.0028        | 2.51  | 1150 | 0.1864          | 0.9653   | 0.8305 | 0.8273    | 0.8344 |
| 0.0023        | 2.62  | 1200 | 0.1906          | 0.9661   | 0.8309 | 0.8277    | 0.8348 |
| 0.0076        | 2.73  | 1250 | 0.1826          | 0.9669   | 0.8307 | 0.8278    | 0.8341 |
| 0.0272        | 2.84  | 1300 | 0.1830          | 0.9666   | 0.8306 | 0.8283    | 0.8335 |
| 0.0065        | 2.95  | 1350 | 0.1908          | 0.9661   | 0.8303 | 0.8278    | 0.8333 |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
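### Learning-rate schedule sketch

The hyperparameters above describe a linear scheduler with 100 warmup steps peaking at 5e-05. As a rough sketch, the learning rate at any optimizer step can be computed with a small function. Note that `total_steps` is an assumption inferred from the training log (which ends at step 1350 / epoch 2.95, i.e. roughly 458 steps per epoch over 3 epochs); it is not stated in the card.

```python
def lr_at_step(step, peak_lr=5e-05, warmup_steps=100, total_steps=1374):
    """Learning rate under linear warmup followed by linear decay.

    peak_lr and warmup_steps come from the hyperparameter list above;
    total_steps is an assumption (~458 steps/epoch x 3 epochs, inferred
    from the training log).
    """
    if step < warmup_steps:
        # Warmup phase: ramp linearly from 0 up to peak_lr.
        return peak_lr * step / warmup_steps
    # Decay phase: ramp linearly from peak_lr back down to 0.
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

For example, `lr_at_step(50)` is half the peak (2.5e-05) and `lr_at_step(100)` is the full 5e-05, after which the rate decays toward zero by the end of training.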