
sentiment_pc_oversampler

This model is a fine-tuned version of ahmedrachid/FinancialBERT-Sentiment-Analysis on an unspecified dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the results):

  • Loss: 0.3909
  • Accuracy: 0.9291
  • F1: 0.9288
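
Since the card gives no usage example, here is a minimal inference sketch. It assumes the checkpoint is hosted on the Hub as kiatkock/sentiment_pc_oversampler and keeps the three-class sentiment labels (negative/neutral/positive) of the FinancialBERT base model; the example sentence and output are illustrative only.

```python
# A minimal inference sketch, assuming the checkpoint is available on the Hub
# as "kiatkock/sentiment_pc_oversampler" and keeps the base model's label scheme.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kiatkock/sentiment_pc_oversampler",
)

print(classifier("Quarterly revenue grew 12% year over year, beating guidance."))
# Illustrative output only, e.g. [{'label': 'positive', 'score': 0.98}]
```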

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch reproducing them with TrainingArguments follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
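
For reference, the settings above can be expressed as Hugging Face TrainingArguments. The sketch below is an approximation, not the original training script: dataset loading, oversampling, tokenization, and the Trainer/compute_metrics wiring are omitted, and the output directory, label count, and evaluation interval are assumptions inferred from this card.

```python
# A hedged sketch of the hyperparameters above expressed as TrainingArguments.
# The output directory is a placeholder; num_labels=3 assumes the three-class
# sentiment scheme of the FinancialBERT base model; evaluation every 50 steps
# is inferred from the training-results table below.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TrainingArguments,
)

base_model = "ahmedrachid/FinancialBERT-Sentiment-Analysis"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=3)

training_args = TrainingArguments(
    output_dir="sentiment_pc_oversampler",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,          # effective train batch size of 32
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
    evaluation_strategy="steps",
    eval_steps=50,
)
```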

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.1134 | 50 | 0.5293 | 0.8154 | 0.8173 |
| No log | 0.2268 | 100 | 0.4512 | 0.8222 | 0.8224 |
| No log | 0.3401 | 150 | 0.4212 | 0.8356 | 0.8364 |
| No log | 0.4535 | 200 | 0.3978 | 0.8395 | 0.8400 |
| No log | 0.5669 | 250 | 0.3745 | 0.8631 | 0.8642 |
| No log | 0.6803 | 300 | 0.3593 | 0.8667 | 0.8675 |
| No log | 0.7937 | 350 | 0.3203 | 0.8821 | 0.8826 |
| No log | 0.9070 | 400 | 0.3130 | 0.8880 | 0.8889 |
| No log | 1.0204 | 450 | 0.3052 | 0.8903 | 0.8904 |
| 0.3514 | 1.1338 | 500 | 0.3216 | 0.8948 | 0.8954 |
| 0.3514 | 1.2472 | 550 | 0.3178 | 0.8979 | 0.8981 |
| 0.3514 | 1.3605 | 600 | 0.3366 | 0.8874 | 0.8877 |
| 0.3514 | 1.4739 | 650 | 0.3108 | 0.8951 | 0.8950 |
| 0.3514 | 1.5873 | 700 | 0.2551 | 0.9198 | 0.9200 |
| 0.3514 | 1.7007 | 750 | 0.3358 | 0.8911 | 0.8907 |
| 0.3514 | 1.8141 | 800 | 0.2812 | 0.9127 | 0.9125 |
| 0.3514 | 1.9274 | 850 | 0.2443 | 0.9240 | 0.9239 |
| 0.3514 | 2.0408 | 900 | 0.3059 | 0.9183 | 0.9182 |
| 0.3514 | 2.1542 | 950 | 0.3161 | 0.9155 | 0.9152 |
| 0.1587 | 2.2676 | 1000 | 0.2733 | 0.9237 | 0.9235 |
| 0.1587 | 2.3810 | 1050 | 0.3252 | 0.9141 | 0.9137 |
| 0.1587 | 2.4943 | 1100 | 0.3257 | 0.9141 | 0.9140 |
| 0.1587 | 2.6077 | 1150 | 0.2836 | 0.9254 | 0.9253 |
| 0.1587 | 2.7211 | 1200 | 0.3176 | 0.9166 | 0.9163 |
| 0.1587 | 2.8345 | 1250 | 0.3335 | 0.9232 | 0.9228 |
| 0.1587 | 2.9478 | 1300 | 0.3076 | 0.9257 | 0.9254 |
| 0.1587 | 3.0612 | 1350 | 0.3169 | 0.9269 | 0.9264 |
| 0.1587 | 3.1746 | 1400 | 0.3627 | 0.9240 | 0.9238 |
| 0.1587 | 3.2880 | 1450 | 0.4074 | 0.9127 | 0.9118 |
| 0.0731 | 3.4014 | 1500 | 0.3580 | 0.9251 | 0.9247 |
| 0.0731 | 3.5147 | 1550 | 0.3802 | 0.9240 | 0.9235 |
| 0.0731 | 3.6281 | 1600 | 0.3705 | 0.9257 | 0.9253 |
| 0.0731 | 3.7415 | 1650 | 0.3177 | 0.9362 | 0.9361 |
| 0.0731 | 3.8549 | 1700 | 0.3563 | 0.9314 | 0.9310 |
| 0.0731 | 3.9683 | 1750 | 0.4248 | 0.9158 | 0.9154 |
| 0.0731 | 4.0816 | 1800 | 0.3535 | 0.9314 | 0.9310 |
| 0.0731 | 4.1950 | 1850 | 0.3568 | 0.9308 | 0.9305 |
| 0.0731 | 4.3084 | 1900 | 0.4044 | 0.9266 | 0.9264 |
| 0.0731 | 4.4218 | 1950 | 0.3598 | 0.9331 | 0.9327 |
| 0.0358 | 4.5351 | 2000 | 0.3909 | 0.9291 | 0.9288 |
| 0.0358 | 4.6485 | 2050 | 0.3725 | 0.9325 | 0.9322 |
| 0.0358 | 4.7619 | 2100 | 0.3953 | 0.9305 | 0.9303 |
| 0.0358 | 4.8753 | 2150 | 0.3902 | 0.9305 | 0.9302 |
| 0.0358 | 4.9887 | 2200 | 0.3960 | 0.9286 | 0.9282 |
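
The Accuracy and F1 columns in the table could be produced by a compute_metrics callback along the following lines. This is a sketch only: the weighted F1 averaging is an assumption, since the card does not state which averaging mode was used.

```python
# A sketch of a compute_metrics callback that would produce the Accuracy and F1
# columns above. The "weighted" F1 average is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1_score(labels, predictions, average="weighted"),
    }
```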

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.2
  • Tokenizers 0.19.1