framing_classification_longformer_50

This model is a fine-tuned version of allenai/longformer-base-4096 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3739
  • Accuracy: 0.9332
  • F1: 0.9608
  • Precision: 0.9394
  • Recall: 0.9832
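
The snippet below is a minimal inference sketch, assuming the checkpoint is published on the Hub as AriyanH22/framing_classification_longformer_50 and exposes a sequence-classification head; the label names and the exact task definition are not documented in this card.

```python
# Minimal inference sketch. Assumes the hub id below and a binary
# sequence-classification head; label semantics are not documented here.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "AriyanH22/framing_classification_longformer_50"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Example article text to classify."
# Longformer accepts inputs up to 4096 tokens.
inputs = tokenizer(text, truncation=True, max_length=4096, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```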

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative Trainer configuration follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
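
The sketch below shows one way to reproduce these settings with the Trainer API; the dataset, preprocessing, and metric computation are placeholders, since the training data is not documented in this card.

```python
# Sketch of a Trainer setup matching the reported hyperparameters.
# train_dataset / eval_dataset are placeholders for the undocumented data.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "allenai/longformer-base-4096"
tokenizer = AutoTokenizer.from_pretrained(base)
# num_labels defaults to 2; the actual label set is not documented.
model = AutoModelForSequenceClassification.from_pretrained(base)

training_args = TrainingArguments(
    output_dir="framing_classification_longformer_50",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    adam_beta1=0.9,               # betas/epsilon as reported above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumed: the results table reports per-epoch eval
)

train_dataset = None  # placeholder: substitute a tokenized dataset with labels
eval_dataset = None   # placeholder

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)
# trainer.train()
```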

Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8078        | 1.0   | 5152   | 0.8413          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.7998        | 2.0   | 10304  | 0.8279          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.9031        | 3.0   | 15456  | 0.9204          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.7805        | 4.0   | 20608  | 0.8259          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.8775        | 5.0   | 25760  | 0.8078          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.7248        | 6.0   | 30912  | 0.7587          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.8282        | 7.0   | 36064  | 0.7737          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.774         | 8.0   | 41216  | 0.8283          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.802         | 9.0   | 46368  | 0.7968          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.8458        | 10.0  | 51520  | 0.8591          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.7923        | 11.0  | 56672  | 0.8526          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.8435        | 12.0  | 61824  | 0.8076          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.8239        | 13.0  | 66976  | 0.8152          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.7751        | 14.0  | 72128  | 0.8280          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.7984        | 15.0  | 77280  | 0.8358          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.8359        | 16.0  | 82432  | 0.8471          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.9831        | 17.0  | 87584  | 0.8089          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.9051        | 18.0  | 92736  | 0.8094          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.9337        | 19.0  | 97888  | 0.8296          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.9565        | 20.0  | 103040 | 0.8021          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.8494        | 21.0  | 108192 | 0.8405          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.822         | 22.0  | 113344 | 0.8481          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.856         | 23.0  | 118496 | 0.8194          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.8892        | 24.0  | 123648 | 0.8394          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.7816        | 25.0  | 128800 | 0.7035          | 0.8649   | 0.9245 | 0.8639    | 0.9944 |
| 0.6349        | 26.0  | 133952 | 0.6452          | 0.8773   | 0.9309 | 0.8764    | 0.9925 |
| 0.6872        | 27.0  | 139104 | 0.6440          | 0.8820   | 0.9331 | 0.8833    | 0.9888 |
| 0.7452        | 28.0  | 144256 | 0.5578          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.6425        | 29.0  | 149408 | 0.4712          | 0.8323   | 0.9085 | 0.8323    | 1.0    |
| 0.6705        | 30.0  | 154560 | 0.6447          | 0.8866   | 0.9357 | 0.8865    | 0.9907 |
| 0.5748        | 31.0  | 159712 | 0.4063          | 0.9239   | 0.9553 | 0.9340    | 0.9776 |
| 0.6543        | 32.0  | 164864 | 0.4753          | 0.9099   | 0.9482 | 0.9092    | 0.9907 |
| 0.5376        | 33.0  | 170016 | 0.4782          | 0.9099   | 0.9482 | 0.9092    | 0.9907 |
| 0.6895        | 34.0  | 175168 | 0.4383          | 0.9177   | 0.9524 | 0.9185    | 0.9888 |
| 0.5867        | 35.0  | 180320 | 0.4970          | 0.9130   | 0.9497 | 0.9152    | 0.9869 |
| 0.7092        | 36.0  | 185472 | 0.4719          | 0.9177   | 0.9521 | 0.9229    | 0.9832 |
| 0.6561        | 37.0  | 190624 | 0.4763          | 0.9146   | 0.9508 | 0.9139    | 0.9907 |
| 0.5693        | 38.0  | 195776 | 0.3947          | 0.9301   | 0.9591 | 0.9345    | 0.9851 |
| 0.4321        | 39.0  | 200928 | 0.4632          | 0.9161   | 0.9503 | 0.9382    | 0.9627 |
| 0.5156        | 40.0  | 206080 | 0.4012          | 0.9301   | 0.9593 | 0.9299    | 0.9907 |
| 0.5279        | 41.0  | 211232 | 0.4558          | 0.9224   | 0.9550 | 0.9219    | 0.9907 |
| 0.5489        | 42.0  | 216384 | 0.4438          | 0.9193   | 0.9532 | 0.9201    | 0.9888 |
| 0.5586        | 43.0  | 221536 | 0.4469          | 0.9177   | 0.9526 | 0.9157    | 0.9925 |
| 0.575         | 44.0  | 226688 | 0.4310          | 0.9270   | 0.9569 | 0.9405    | 0.9739 |
| 0.4589        | 45.0  | 231840 | 0.4117          | 0.9301   | 0.9591 | 0.9345    | 0.9851 |
| 0.4012        | 46.0  | 236992 | 0.4501          | 0.9239   | 0.9553 | 0.9356    | 0.9757 |
| 0.5395        | 47.0  | 242144 | 0.3989          | 0.9317   | 0.96   | 0.9362    | 0.9851 |
| 0.5009        | 48.0  | 247296 | 0.3739          | 0.9332   | 0.9608 | 0.9394    | 0.9832 |
| 0.5356        | 49.0  | 252448 | 0.3805          | 0.9348   | 0.9617 | 0.9395    | 0.9851 |
| 0.5729        | 50.0  | 257600 | 0.3833          | 0.9348   | 0.9617 | 0.9395    | 0.9851 |
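
The precision, recall, and F1 columns are consistent with binary classification where the positive class is the majority: in the early epochs recall is 1.0 and precision equals accuracy, i.e. every example is predicted positive. A compute_metrics function of the following form would produce these columns; this is a sketch under that assumption, not the documented evaluation code.

```python
# Sketch of a compute_metrics function producing the columns above,
# assuming binary classification; the actual evaluation code is not documented.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```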

Framework versions

  • Transformers 4.32.0.dev0
  • PyTorch 2.0.1
  • Datasets 2.14.4
  • Tokenizers 0.13.3