---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- AdamCodd/emotion-balanced
metrics:
- accuracy
- f1
- recall
- precision
widget:
- text: Your actions were very caring.
  example_title: Test sentence
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-finetuned-emotion-balanced
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - type: accuracy
      value: 0.9521
      name: Accuracy
    - type: loss
      value: 0.1216
      name: Loss
    - type: f1
      value: 0.9520944952964783
      name: F1
---

# distilbert-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the AdamCodd/emotion-balanced dataset. It achieves the following results on the evaluation set:
- Loss: 0.1216
- Accuracy: 0.9521
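
For quick experimentation, here is a minimal inference sketch with the `transformers` pipeline API; the repo id is assumed from the model-index name above, and the example sentence is the widget text:

```python
from transformers import pipeline

# Repo id assumed from the model-index name; adjust if the hub path differs.
classifier = pipeline(
    "text-classification",
    model="AdamCodd/distilbert-base-uncased-finetuned-emotion-balanced",
)

# Returns a list with one {"label": ..., "score": ...} dict per input sentence.
print(classifier("Your actions were very caring."))
```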
## Model description
This emotion classifier was trained on 89,754 examples split into train, validation, and test sets, with each of the six emotion labels perfectly balanced in every split.
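
A small sketch for spot-checking that balance with the `datasets` library (the split names and the integer `label` column are assumptions):

```python
from collections import Counter

from datasets import load_dataset

# Dataset id taken from the card metadata; split and column names are assumed.
ds = load_dataset("AdamCodd/emotion-balanced")

for split in ("train", "validation", "test"):
    counts = Counter(ds[split]["label"])
    print(split, len(ds[split]), dict(counts))  # each label should appear equally often
```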
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an illustrative configuration sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 1270
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 1
- weight_decay: 0.01
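
As an illustration only, these values would map onto the Hugging Face `TrainingArguments` API roughly as below; the framework versions further down mention PyTorch Lightning, so this is a hedged sketch rather than the author's actual training script:

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters onto the HF Trainer API.
# output_dir is arbitrary; the original run used PyTorch Lightning.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion-balanced",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=1270,
    lr_scheduler_type="linear",
    warmup_steps=150,
    num_train_epochs=1,
    weight_decay=0.01,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```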
### Training results
|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| sadness      | 0.9882    | 0.9485 | 0.9679   | 1496    |
| joy          | 0.9956    | 0.9057 | 0.9485   | 1496    |
| love         | 0.9256    | 0.9980 | 0.9604   | 1496    |
| anger        | 0.9628    | 0.9519 | 0.9573   | 1496    |
| fear         | 0.9348    | 0.9098 | 0.9221   | 1496    |
| surprise     | 0.9160    | 0.9987 | 0.9555   | 1496    |
| accuracy     |           |        | 0.9521   | 8976    |
| macro avg    | 0.9538    | 0.9521 | 0.9520   | 8976    |
| weighted avg | 0.9538    | 0.9521 | 0.9520   | 8976    |
- test_acc: 0.9520944952964783
- test_loss: 0.121663898229599
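
The per-label table above has the shape of a scikit-learn classification report; a hedged sketch of reproducing it on the test split (hub paths, column names, and label order are assumptions):

```python
from datasets import load_dataset
from sklearn.metrics import classification_report
from transformers import pipeline

# Assumed hub paths and column names; adjust to the actual repos.
test = load_dataset("AdamCodd/emotion-balanced", split="test")
clf = pipeline(
    "text-classification",
    model="AdamCodd/distilbert-base-uncased-finetuned-emotion-balanced",
)

label_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]  # order assumed
preds = [out["label"] for out in clf(test["text"], batch_size=64, truncation=True)]
gold = [label_names[i] for i in test["label"]]  # assumes integer labels in this order

print(classification_report(gold, preds, digits=4))
```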
### Framework versions
- Transformers 4.33.1
- PyTorch Lightning 2.0.8
- Tokenizers 0.13.3