---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: mpnet-base-News_About_Gold
  results: []
language:
- en
pipeline_tag: text-classification
---

# mpnet-base-News_About_Gold

This model is a fine-tuned version of [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on a dataset of gold-related commodity news (see Training and evaluation data below). It achieves the following results on the evaluation set (a sketch of how these averaged metrics can be computed appears at the end of this card):
- Loss: 0.3098
- Accuracy: 0.9068
- Weighted f1: 0.9068
- Micro f1: 0.9068
- Macro f1: 0.8351
- Weighted recall: 0.9068
- Micro recall: 0.9068
- Macro recall: 0.8406
- Weighted precision: 0.9071
- Micro precision: 0.9068
- Macro precision: 0.8309

## Model description

For more information on how this model was created, see the project notebook: https://github.com/DunnBC22/NLP_Projects/blob/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)/News%20About%20Gold%20-%20Sentiment%20Analysis%20-%20MPNet-Base%20with%20W%26B.ipynb

This project is part of a comparison of seven transformers. The README for the comparison is available here: https://github.com/DunnBC22/NLP_Projects/tree/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)

## Intended uses & limitations

This model is intended to demonstrate my ability to solve a complex problem using technology.

## Training and evaluation data

Dataset source: https://www.kaggle.com/datasets/ankurzing/sentiment-analysis-in-commodity-market-gold

_Input Word Length:_

![Length of Input Text (in Words)](https://github.com/DunnBC22/NLP_Projects/raw/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)/Images/Input%20Word%20Length.png)

_Class Distribution:_

![Class Distribution](https://github.com/DunnBC22/NLP_Projects/raw/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)/Images/Class%20Distribution.png)

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch of these settings appears at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.8316 | 1.0 | 133 | 0.5146 | 0.8742 | 0.8604 | 0.8742 | 0.6541 | 0.8742 | 0.8742 | 0.6583 | 0.8487 | 0.8742 | 0.6515 |
| 0.4675 | 2.0 | 266 | 0.3833 | 0.8898 | 0.8857 | 0.8898 | 0.7813 | 0.8898 | 0.8898 | 0.7542 | 0.8862 | 0.8898 | 0.8298 |
| 0.3276 | 3.0 | 399 | 0.3464 | 0.8997 | 0.8985 | 0.8997 | 0.8302 | 0.8997 | 0.8997 | 0.8212 | 0.8984 | 0.8997 | 0.8408 |
| 0.2767 | 4.0 | 532 | 0.3098 | 0.9101 | 0.9103 | 0.9101 | 0.8412 | 0.9101 | 0.9101 | 0.8462 | 0.9106 | 0.9101 | 0.8367 |
| 0.2429 | 5.0 | 665 | 0.3098 | 0.9068 | 0.9068 | 0.9068 | 0.8351 | 0.9068 | 0.9068 | 0.8406 | 0.9071 | 0.9068 | 0.8309 |

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
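
### Computing the evaluation metrics

The weighted, micro, and macro variants of F1, recall, and precision reported in this card correspond to scikit-learn's `average` options. Below is a minimal `compute_metrics` sketch in the form the `Trainer` expects; it is an illustration, not necessarily the exact function used in the notebook linked above.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score


def compute_metrics(eval_pred):
    """Return accuracy plus weighted/micro/macro F1, recall, and precision."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    metrics = {"accuracy": accuracy_score(labels, preds)}
    # "weighted" averages per-class scores by class support, "micro" pools
    # all samples into one score, and "macro" averages per-class scores equally.
    for avg in ("weighted", "micro", "macro"):
        metrics[f"{avg}_f1"] = f1_score(labels, preds, average=avg)
        metrics[f"{avg}_recall"] = recall_score(labels, preds, average=avg)
        metrics[f"{avg}_precision"] = precision_score(labels, preds, average=avg)
    return metrics
```

Note that for single-label multiclass classification, micro-averaged F1, recall, and precision all reduce to accuracy, which is why the micro columns in the results table match the accuracy column exactly.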
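
### Hyperparameters as code

The hyperparameter list above maps directly onto `transformers.TrainingArguments` (argument names as of Transformers 4.28). The sketch below is a reconstruction under stated assumptions, not the original training script: `train_ds` and `eval_ds` stand in for tokenized splits of the Kaggle dataset, and `num_labels=3` is a placeholder for the actual number of sentiment classes.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Base checkpoint named in this card; num_labels is an assumption and should
# be set to the number of classes in the prepared dataset.
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/mpnet-base", num_labels=3
)

# Mirrors the hyperparameter list above; the Adam betas and epsilon are the
# Trainer defaults, which match the values reported in this card.
training_args = TrainingArguments(
    output_dir="mpnet-base-News_About_Gold",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",  # assumed, to match the per-epoch results table
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,           # assumed: tokenized training split
    eval_dataset=eval_ds,             # assumed: tokenized evaluation split
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,  # from the sketch above
)
trainer.train()
```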
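
### Example usage

For inference, the fine-tuned model can be loaded with the `pipeline` API. A minimal sketch, assuming the checkpoint is published on the Hugging Face Hub under the author's namespace with the name shown in this card (substitute a local path otherwise); the example headline and the label in the sample output are illustrative.

```python
from transformers import pipeline

# Assumed Hub repo id; point this at a local directory if the
# checkpoint is stored elsewhere.
classifier = pipeline(
    "text-classification",
    model="DunnBC22/mpnet-base-News_About_Gold",
)

print(classifier("Gold futures edge higher as the dollar weakens"))
# e.g. [{'label': 'positive', 'score': 0.97}] - the actual label names
# depend on how the dataset's classes were encoded at training time.
```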