# deberta-v3-large-sentiment
This model is a fine-tuned version of microsoft/deberta-v3-large on the tweet_eval dataset.
## Model description
Test set results:
| Model | Emotion | Hate | Irony | Offensive | Sentiment |
|---|---|---|---|---|---|
| deberta-v3-large | 86.3 | 61.3 | 87.1 | 86.4 | 73.9 |
| BERTweet | 79.3 | - | 82.1 | 79.5 | 73.4 |
| RoB-RT | 79.5 | 52.3 | 61.7 | 80.5 | 69.3 |
## Intended uses & limitations
Classifying attributes of interest in Twitter-like data.
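The snippet below is a minimal inference sketch using the `transformers` pipeline API; the model id `deberta-v3-large-sentiment` is a placeholder for wherever this checkpoint is actually published on the Hub.

```python
from transformers import pipeline

# Minimal inference sketch; the model id is a placeholder for the
# actual Hub repo id of this checkpoint.
classifier = pipeline(
    "text-classification",
    model="deberta-v3-large-sentiment",  # placeholder repo id
)

print(classifier("I love the new update, it works great!"))
# For the 3-class tweet_eval sentiment head, labels map to
# negative / neutral / positive.
```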
## Training and evaluation data
The tweet_eval dataset.
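For reference, the dataset can be loaded directly with the `datasets` library; the `sentiment` config is the one this checkpoint was fine-tuned on, and the other configs cover the remaining tasks in the results table above.

```python
from datasets import load_dataset

# The "sentiment" config of tweet_eval provides train/validation/test splits.
sentiment = load_dataset("tweet_eval", "sentiment")
print(sentiment)
print(sentiment["train"].features["label"].names)
# ['negative', 'neutral', 'positive']
```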
## Training procedure
Fine-tuned and evaluated with the Hugging Face `run_glue.py` example script; a rough `TrainingArguments` equivalent is sketched after the hyperparameter list below.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10.0
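As a rough sketch (not the exact command that was run), the hyperparameters above correspond to `TrainingArguments` along these lines:

```python
from transformers import TrainingArguments

# Approximate equivalent of the run_glue.py flags listed above;
# output_dir is a placeholder, and the Adam betas/epsilon are spelled
# out explicitly for clarity.
training_args = TrainingArguments(
    output_dir="deberta-v3-large-sentiment",  # placeholder
    learning_rate=7e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=10.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```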
### Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy |
---|---|---|---|---|
0.6362 | 0.18 | 100 | 0.5481 | 0.7197 |
0.4264 | 0.36 | 200 | 0.4550 | 0.8008 |
0.4174 | 0.53 | 300 | 0.4524 | 0.7868 |
0.4197 | 0.71 | 400 | 0.4586 | 0.7918 |
0.3819 | 0.89 | 500 | 0.4368 | 0.8078 |
0.3558 | 1.07 | 600 | 0.4525 | 0.8068 |
0.2982 | 1.24 | 700 | 0.4999 | 0.7928 |
0.2885 | 1.42 | 800 | 0.5129 | 0.8108 |
0.253 | 1.6 | 900 | 0.5873 | 0.8208 |
0.3354 | 1.78 | 1000 | 0.4244 | 0.8178 |
0.3083 | 1.95 | 1100 | 0.4853 | 0.8058 |
0.2301 | 2.13 | 1200 | 0.7209 | 0.8018 |
0.2167 | 2.31 | 1300 | 0.8090 | 0.7778 |
0.1863 | 2.49 | 1400 | 0.6812 | 0.8038 |
0.2181 | 2.66 | 1500 | 0.6958 | 0.8138 |
0.2159 | 2.84 | 1600 | 0.6315 | 0.8118 |
0.1828 | 3.02 | 1700 | 0.7173 | 0.8138 |
0.1287 | 3.2 | 1800 | 0.9081 | 0.8018 |
0.1711 | 3.37 | 1900 | 0.8858 | 0.8068 |
0.1598 | 3.55 | 2000 | 0.7878 | 0.8028 |
0.1467 | 3.73 | 2100 | 0.9003 | 0.7948 |
0.127 | 3.91 | 2200 | 0.9066 | 0.8048 |
0.1134 | 4.09 | 2300 | 0.9646 | 0.8118 |
0.1017 | 4.26 | 2400 | 0.9778 | 0.8048 |
0.085 | 4.44 | 2500 | 1.0529 | 0.8088 |
0.0996 | 4.62 | 2600 | 1.0082 | 0.8058 |
0.1054 | 4.8 | 2700 | 0.9698 | 0.8108 |
0.1375 | 4.97 | 2800 | 0.9334 | 0.8048 |
0.0487 | 5.15 | 2900 | 1.1273 | 0.8108 |
0.0611 | 5.33 | 3000 | 1.1528 | 0.8058 |
0.0668 | 5.51 | 3100 | 1.0148 | 0.8118 |
0.0582 | 5.68 | 3200 | 1.1333 | 0.8108 |
0.0869 | 5.86 | 3300 | 1.0607 | 0.8088 |
0.0623 | 6.04 | 3400 | 1.1880 | 0.8068 |
0.0317 | 6.22 | 3500 | 1.2836 | 0.8008 |
0.0546 | 6.39 | 3600 | 1.2148 | 0.8058 |
0.0486 | 6.57 | 3700 | 1.3348 | 0.8008 |
0.0332 | 6.75 | 3800 | 1.3734 | 0.8018 |
0.051 | 6.93 | 3900 | 1.2966 | 0.7978 |
0.0217 | 7.1 | 4000 | 1.3853 | 0.8048 |
0.0109 | 7.28 | 4100 | 1.4803 | 0.8068 |
0.0345 | 7.46 | 4200 | 1.4906 | 0.7998 |
0.0365 | 7.64 | 4300 | 1.4347 | 0.8028 |
0.0265 | 7.82 | 4400 | 1.3977 | 0.8128 |
0.0257 | 7.99 | 4500 | 1.3705 | 0.8108 |
0.0036 | 8.17 | 4600 | 1.4353 | 0.8168 |
0.0269 | 8.35 | 4700 | 1.4826 | 0.8068 |
0.0231 | 8.53 | 4800 | 1.4811 | 0.8118 |
0.0204 | 8.7 | 4900 | 1.5245 | 0.8028 |
0.0263 | 8.88 | 5000 | 1.5123 | 0.8018 |
0.0138 | 9.06 | 5100 | 1.5113 | 0.8028 |
0.0089 | 9.24 | 5200 | 1.5846 | 0.7978 |
0.029 | 9.41 | 5300 | 1.5362 | 0.8008 |
0.0058 | 9.59 | 5400 | 1.5759 | 0.8018 |
0.0084 | 9.77 | 5500 | 1.5679 | 0.8018 |
0.0065 | 9.95 | 5600 | 1.5683 | 0.8028 |
### Framework versions
- Transformers 4.20.0.dev0
- PyTorch 1.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6