This model is a fine-tuned version of [muhtasham/RoBERTa-tg](https://huggingface.co/muhtasham/RoBERTa-tg) on the wikiann dataset for named-entity recognition. Per-epoch results on the evaluation set are reported in the training results table below.
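The card does not name the fine-tuned checkpoint, so the model id below is a placeholder; substitute the actual repository id. A minimal sketch of running the model for Tajik NER with the transformers token-classification pipeline:

```python
from transformers import pipeline

# Placeholder id -- replace with the actual fine-tuned checkpoint on the Hub.
model_id = "your-username/RoBERTa-tg-finetuned-wikiann"

# aggregation_strategy="simple" merges word pieces back into whole entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

print(ner("Душанбе пойтахти Тоҷикистон аст."))
# e.g. [{'entity_group': 'LOC', 'word': 'Душанбе', ...}, ...]
```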
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
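Assuming the Tajik (`tg`) configuration of wikiann is the one the model was trained on, the data can be loaded with the datasets library; the splits and fields shown are the standard wikiann ones:

```python
from datasets import load_dataset

# wikiann provides per-language configs; "tg" is the ISO 639-1 code for Tajik
# (assumption: this is the configuration used for fine-tuning).
dataset = load_dataset("wikiann", "tg")

print(dataset)                          # DatasetDict with train/validation/test splits
print(dataset["train"][0]["tokens"])    # whitespace-tokenized words
print(dataset["train"][0]["ner_tags"])  # label ids for the B-/I- PER, ORG, LOC scheme
```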
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
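As a rough illustration of how such a run is typically configured with the Hugging Face `Trainer`, the sketch below uses placeholder values throughout; none of them are the hyperparameters actually used for this model.

```python
from transformers import TrainingArguments

# All values below are illustrative placeholders, not the settings used for this model.
training_args = TrainingArguments(
    output_dir="roberta-tg-wikiann-ner",   # hypothetical output directory
    learning_rate=2e-5,                    # assumption
    per_device_train_batch_size=16,        # assumption
    per_device_eval_batch_size=16,         # assumption
    num_train_epochs=40,                   # assumption
    weight_decay=0.01,                     # assumption
    evaluation_strategy="epoch",           # assumption
    save_strategy="epoch",                 # assumption
    logging_steps=500,                     # assumption
)
```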
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 2.0   | 50   | 0.7710          | 0.0532    | 0.1827 | 0.0824 | 0.6933   |
| No log        | 4.0   | 100  | 0.5901          | 0.0847    | 0.25   | 0.1265 | 0.7825   |
| No log        | 6.0   | 150  | 0.5226          | 0.2087    | 0.4615 | 0.2874 | 0.8186   |
| No log        | 8.0   | 200  | 0.5041          | 0.2585    | 0.5096 | 0.3430 | 0.8449   |
| No log        | 10.0  | 250  | 0.5592          | 0.2819    | 0.5096 | 0.3630 | 0.8499   |
| No log        | 12.0  | 300  | 0.5725          | 0.3032    | 0.5481 | 0.3904 | 0.8558   |
| No log        | 14.0  | 350  | 0.6433          | 0.3122    | 0.5673 | 0.4027 | 0.8508   |
| No log        | 16.0  | 400  | 0.6744          | 0.3543    | 0.5962 | 0.4444 | 0.8553   |
| No log        | 18.0  | 450  | 0.7617          | 0.3353    | 0.5577 | 0.4188 | 0.8335   |
| 0.2508        | 20.0  | 500  | 0.7608          | 0.3262    | 0.5865 | 0.4192 | 0.8419   |
| 0.2508        | 22.0  | 550  | 0.8483          | 0.3224    | 0.5673 | 0.4111 | 0.8494   |
| 0.2508        | 24.0  | 600  | 0.8370          | 0.3275    | 0.5385 | 0.4073 | 0.8439   |
| 0.2508        | 26.0  | 650  | 0.8652          | 0.3410    | 0.5673 | 0.4260 | 0.8394   |
| 0.2508        | 28.0  | 700  | 0.9441          | 0.3409    | 0.5769 | 0.4286 | 0.8216   |
| 0.2508        | 30.0  | 750  | 0.9228          | 0.3333    | 0.5577 | 0.4173 | 0.8439   |
| 0.2508        | 32.0  | 800  | 0.9175          | 0.3430    | 0.5673 | 0.4275 | 0.8355   |
| 0.2508        | 34.0  | 850  | 0.9603          | 0.3073    | 0.5288 | 0.3887 | 0.8340   |
| 0.2508        | 36.0  | 900  | 0.9417          | 0.3240    | 0.5577 | 0.4099 | 0.8370   |
| 0.2508        | 38.0  | 950  | 0.9408          | 0.3155    | 0.5673 | 0.4055 | 0.8360   |
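The precision, recall, and F1 values above are entity-level scores of the kind produced by seqeval, while accuracy is typically token-level. A sketch of how such metrics are commonly computed during evaluation; the label list shown is the standard wikiann tag set and is an assumption here:

```python
import numpy as np
from seqeval.metrics import precision_score, recall_score, f1_score, accuracy_score

# Standard wikiann label set (assumed for this model).
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

def compute_metrics(predictions, labels):
    """Convert label ids to tag strings, dropping the -100 positions used for
    padding/subword tokens, then score at the entity level with seqeval."""
    preds = np.argmax(predictions, axis=-1)
    true_tags, pred_tags = [], []
    for pred_row, label_row in zip(preds, labels):
        true_tags.append([label_list[l] for l in label_row if l != -100])
        pred_tags.append([label_list[p] for p, l in zip(pred_row, label_row) if l != -100])
    return {
        "precision": precision_score(true_tags, pred_tags),
        "recall": recall_score(true_tags, pred_tags),
        "f1": f1_score(true_tags, pred_tags),
        "accuracy": accuracy_score(true_tags, pred_tags),
    }
```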