# scideberta-cs-finetuned-ner
This model is a fine-tuned version of [KISTI-AI/scideberta-cs](https://huggingface.co/KISTI-AI/scideberta-cs) on the generator dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the metrics list):
- Loss: 0.8848
- Overall Precision: 0.5492
- Overall Recall: 0.6240
- Overall F1: 0.5842
- Overall Accuracy: 0.9552
- Datasetname F1: 0.4590
- Hyperparametername F1: 0.7273
- Hyperparametervalue F1: 0.7937
- Methodname F1: 0.6227
- Metricname F1: 0.7597
- Metricvalue F1: 0.6250
- Taskname F1: 0.4348
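The model is a token-classification (NER) head over scideberta-cs, tagging the entity types listed above. Below is a minimal usage sketch; the repository id is assumed to match the title above, so substitute the actual id if it differs:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Assumption: the model is published under this repository id.
model_id = "scideberta-cs-finetuned-ner"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(ner("We fine-tune BERT on SQuAD with a learning rate of 3e-5."))
```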
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
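A sketch of how these hyperparameters map onto `TrainingArguments`; the `output_dir` and any arguments not listed above are assumptions, and the dataset loading, tokenization, and `Trainer` wiring of the original script are omitted:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scideberta-cs-finetuned-ner",  # assumption, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    # Adam betas/epsilon are the Trainer defaults, matching the values listed.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```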
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Datasetname F1 | Hyperparametername F1 | Hyperparametervalue F1 | Methodname F1 | Metricname F1 | Metricvalue F1 | Taskname F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 132 | 0.4127 | 0.3852 | 0.6646 | 0.4877 | 0.9411 | 0.3875 | 0.4690 | 0.6 | 0.6338 | 0.6438 | 0.5806 | 0.3670 |
| No log | 2.0 | 264 | 0.3424 | 0.3447 | 0.6972 | 0.4613 | 0.9353 | 0.3204 | 0.4103 | 0.5600 | 0.5691 | 0.5848 | 0.7027 | 0.3594 |
| No log | 3.0 | 396 | 0.3942 | 0.4767 | 0.6850 | 0.5621 | 0.9534 | 0.5385 | 0.6500 | 0.7429 | 0.6583 | 0.6437 | 0.6111 | 0.3830 |
| 0.4541 | 4.0 | 528 | 0.3542 | 0.4516 | 0.7012 | 0.5494 | 0.9503 | 0.4127 | 0.6552 | 0.5417 | 0.6068 | 0.6243 | 0.4762 | 0.4895 |
| 0.4541 | 5.0 | 660 | 0.4092 | 0.5076 | 0.6829 | 0.5823 | 0.9560 | 0.3857 | 0.5827 | 0.6933 | 0.6866 | 0.7465 | 0.6875 | 0.4865 |
| 0.4541 | 6.0 | 792 | 0.4450 | 0.4465 | 0.6870 | 0.5412 | 0.9491 | 0.3613 | 0.5985 | 0.6506 | 0.6278 | 0.6667 | 0.6857 | 0.4332 |
| 0.4541 | 7.0 | 924 | 0.4487 | 0.4985 | 0.6707 | 0.5719 | 0.9552 | 0.4407 | 0.6400 | 0.5789 | 0.6590 | 0.6980 | 0.7429 | 0.4667 |
| 0.1083 | 8.0 | 1056 | 0.4361 | 0.5068 | 0.6850 | 0.5825 | 0.9569 | 0.4553 | 0.6457 | 0.7429 | 0.6667 | 0.6887 | 0.6875 | 0.4536 |
| 0.1083 | 9.0 | 1188 | 0.5592 | 0.4954 | 0.6504 | 0.5624 | 0.9549 | 0.4538 | 0.6552 | 0.6753 | 0.6397 | 0.6581 | 0.7647 | 0.4118 |
| 0.1083 | 10.0 | 1320 | 0.5272 | 0.4686 | 0.6667 | 0.5503 | 0.9497 | 0.3816 | 0.6074 | 0.7 | 0.6340 | 0.7347 | 0.7429 | 0.3917 |
| 0.1083 | 11.0 | 1452 | 0.6108 | 0.5412 | 0.6809 | 0.6031 | 0.9562 | 0.4727 | 0.6724 | 0.7222 | 0.6615 | 0.7097 | 0.6857 | 0.5027 |
| 0.0491 | 12.0 | 1584 | 0.7836 | 0.5481 | 0.6138 | 0.5791 | 0.9546 | 0.5043 | 0.6446 | 0.7246 | 0.6286 | 0.7347 | 0.7273 | 0.4217 |
| 0.0491 | 13.0 | 1716 | 0.5258 | 0.4838 | 0.6667 | 0.5607 | 0.9527 | 0.4580 | 0.6299 | 0.6944 | 0.6234 | 0.7089 | 0.6667 | 0.4060 |
| 0.0491 | 14.0 | 1848 | 0.6477 | 0.5487 | 0.6301 | 0.5866 | 0.9576 | 0.4685 | 0.6909 | 0.7692 | 0.6312 | 0.6528 | 0.7273 | 0.4773 |
| 0.0491 | 15.0 | 1980 | 0.5891 | 0.5359 | 0.6972 | 0.6060 | 0.9577 | 0.4865 | 0.6777 | 0.7123 | 0.6667 | 0.7114 | 0.6875 | 0.4986 |
| 0.0288 | 16.0 | 2112 | 0.6913 | 0.5510 | 0.6809 | 0.6091 | 0.9575 | 0.5053 | 0.6783 | 0.7463 | 0.7063 | 0.6853 | 0.6842 | 0.4602 |
| 0.0288 | 17.0 | 2244 | 0.7530 | 0.5425 | 0.6484 | 0.5907 | 0.9572 | 0.5149 | 0.6446 | 0.8065 | 0.6796 | 0.6993 | 0.75 | 0.3974 |
| 0.0288 | 18.0 | 2376 | 0.7542 | 0.5815 | 0.6524 | 0.6149 | 0.9594 | 0.5306 | 0.6667 | 0.7353 | 0.6918 | 0.7077 | 0.7273 | 0.4706 |
| 0.0137 | 19.0 | 2508 | 0.7550 | 0.5529 | 0.6585 | 0.6011 | 0.9561 | 0.5333 | 0.6957 | 0.6765 | 0.6508 | 0.7746 | 0.7059 | 0.4389 |
| 0.0137 | 20.0 | 2640 | 0.6984 | 0.5335 | 0.6789 | 0.5975 | 0.9538 | 0.4828 | 0.6721 | 0.7353 | 0.6382 | 0.7518 | 0.6667 | 0.4731 |
| 0.0137 | 21.0 | 2772 | 0.6706 | 0.5221 | 0.7215 | 0.6058 | 0.9511 | 0.4640 | 0.6780 | 0.72 | 0.6389 | 0.7355 | 0.6667 | 0.5215 |
| 0.0137 | 22.0 | 2904 | 0.7129 | 0.5533 | 0.6646 | 0.6039 | 0.9561 | 0.5091 | 0.7 | 0.6667 | 0.6553 | 0.7761 | 0.6111 | 0.4673 |
| 0.0096 | 23.0 | 3036 | 0.7137 | 0.5601 | 0.6728 | 0.6113 | 0.9583 | 0.5185 | 0.6780 | 0.7879 | 0.6621 | 0.7328 | 0.6 | 0.4926 |
| 0.0096 | 24.0 | 3168 | 0.6871 | 0.5235 | 0.6789 | 0.5912 | 0.9534 | 0.4828 | 0.6891 | 0.6667 | 0.6414 | 0.7310 | 0.7273 | 0.4676 |
| 0.0096 | 25.0 | 3300 | 0.7823 | 0.5641 | 0.6524 | 0.6051 | 0.9567 | 0.4628 | 0.7009 | 0.7576 | 0.6716 | 0.7183 | 0.6875 | 0.4762 |
| 0.0096 | 26.0 | 3432 | 0.7905 | 0.5512 | 0.6565 | 0.5993 | 0.9556 | 0.5143 | 0.7368 | 0.7463 | 0.6332 | 0.7121 | 0.6875 | 0.4531 |
| 0.0061 | 27.0 | 3564 | 0.8666 | 0.5557 | 0.6585 | 0.6028 | 0.9553 | 0.4779 | 0.7130 | 0.7692 | 0.6689 | 0.7391 | 0.6667 | 0.4465 |
| 0.0061 | 28.0 | 3696 | 0.8848 | 0.5492 | 0.6240 | 0.5842 | 0.9552 | 0.4590 | 0.7273 | 0.7937 | 0.6227 | 0.7597 | 0.6250 | 0.4348 |
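The overall and per-entity scores in this table are entity-level precision, recall, and F1. A minimal sketch of computing them with seqeval, the library the Transformers token-classification examples typically use (its use here is an assumption, and the toy IOB2 tags below are illustrative only):

```python
from seqeval.metrics import classification_report, f1_score

# Toy gold and predicted tag sequences; entity types mirror the table columns.
y_true = [["B-Taskname", "I-Taskname", "O", "B-Datasetname"]]
y_pred = [["B-Taskname", "I-Taskname", "O", "O"]]

print(f1_score(y_true, y_pred))               # overall entity-level F1
print(classification_report(y_true, y_pred))  # per-entity precision/recall/F1
```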
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1