rubert-tiny2-srl

This model is a fine-tuned version of cointegrated/rubert-tiny2 for semantic role labeling (SRL) in Russian, trained on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1428

Per-role precision, recall, F1, and support (Number):

| Role           | Precision | Recall | F1     | Number |
|:---------------|----------:|-------:|-------:|-------:|
| Addressee      | 0.6364    | 0.875  | 0.7368 | 8      |
| Benefactive    | 0.0       | 0.0    | 0.0    | 2      |
| Causator       | 0.9286    | 0.8125 | 0.8667 | 16     |
| Cause          | 0.6       | 0.25   | 0.3529 | 12     |
| Contrsubject   | 0.6364    | 0.4118 | 0.5    | 17     |
| Deliberative   | 1.0       | 0.6667 | 0.8    | 6      |
| Destinative    | 1.0       | 0.5    | 0.6667 | 4      |
| Directivefinal | 1.0       | 1.0    | 1.0    | 2      |
| Experiencer    | 0.8018    | 0.9368 | 0.8641 | 95     |
| Instrument     | 0.0       | 0.0    | 0.0    | 3      |
| Limitative     | 0.0       | 0.0    | 0.0    | 1      |
| Object         | 0.7589    | 0.8    | 0.7789 | 240    |

Overall metrics:

  • Overall Precision: 0.7724
  • Overall Recall: 0.7857
  • Overall F1: 0.7790
  • Overall Accuracy: 0.9589
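
The per-role metrics above are typical of a token-classification head with BIO-style role tags, so the model can presumably be used through the standard Transformers pipeline. A minimal sketch, assuming that setup (the example sentence is arbitrary, and aggregation_strategy="simple" is one reasonable choice rather than something the card specifies):

```python
from transformers import pipeline

# Load the fine-tuned SRL tagger. aggregation_strategy="simple" merges
# subword pieces back into word-level spans with one role label each.
srl = pipeline(
    "token-classification",
    model="dl-ru/rubert-tiny2-srl",
    aggregation_strategy="simple",
)

# Arbitrary Russian sentence: "Mom gave her son a book."
for span in srl("Мама дала сыну книгу."):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```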

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 8.017672397578385e-05
  • train_batch_size: 4
  • eval_batch_size: 1
  • seed: 678943
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.04
  • num_epochs: 4
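
For reference, these settings map onto a Trainer configuration roughly like the sketch below. This is a hypothetical reconstruction, not the original training script; output_dir is a placeholder:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
training_args = TrainingArguments(
    output_dir="rubert-tiny2-srl",        # placeholder
    learning_rate=8.017672397578385e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,        # total train batch size: 4 * 4 = 16
    num_train_epochs=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.04,
    seed=678943,
)
```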

Training results

Per-role cells give Precision / Recall / F1; support (Number) is constant across epochs and is shown next to the role name.

| Metric | Epoch 1.0 (step 490) | Epoch 2.0 (step 981) | Epoch 3.0 (step 1472) | Epoch 3.99 (step 1960) |
|:---|---:|---:|---:|---:|
| Training Loss | 0.2206 | 0.1507 | 0.1146 | 0.0842 |
| Validation Loss | 0.1959 | 0.1492 | 0.1437 | 0.1428 |
| Addressee (n=8) | 0.6667 / 0.25 / 0.3636 | 0.5556 / 0.625 / 0.5882 | 0.6364 / 0.875 / 0.7368 | 0.6364 / 0.875 / 0.7368 |
| Benefactive (n=2) | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 |
| Causator (n=16) | 0.8667 / 0.8125 / 0.8387 | 0.8667 / 0.8125 / 0.8387 | 0.9286 / 0.8125 / 0.8667 | 0.9286 / 0.8125 / 0.8667 |
| Cause (n=12) | 0.0 / 0.0 / 0.0 | 0.6 / 0.25 / 0.3529 | 0.6 / 0.25 / 0.3529 | 0.6 / 0.25 / 0.3529 |
| Contrsubject (n=17) | 1.0 / 0.0588 / 0.1111 | 0.75 / 0.3529 / 0.48 | 0.6429 / 0.5294 / 0.5806 | 0.6364 / 0.4118 / 0.5 |
| Deliberative (n=6) | 0.0 / 0.0 / 0.0 | 1.0 / 0.1667 / 0.2857 | 1.0 / 0.5 / 0.6667 | 1.0 / 0.6667 / 0.8 |
| Destinative (n=4) | 0.0 / 0.0 / 0.0 | 1.0 / 0.25 / 0.4 | 1.0 / 0.5 / 0.6667 | 1.0 / 0.5 / 0.6667 |
| Directivefinal (n=2) | 0.0 / 0.0 / 0.0 | 1.0 / 1.0 / 1.0 | 1.0 / 1.0 / 1.0 | 1.0 / 1.0 / 1.0 |
| Experiencer (n=95) | 0.7203 / 0.8947 / 0.7981 | 0.8646 / 0.8737 / 0.8691 | 0.8 / 0.9263 / 0.8585 | 0.8018 / 0.9368 / 0.8641 |
| Instrument (n=3) | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 |
| Limitative (n=1) | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 |
| Object (n=240) | 0.6692 / 0.725 / 0.696 | 0.7635 / 0.7667 / 0.7651 | 0.7443 / 0.8125 / 0.7769 | 0.7589 / 0.8 / 0.7789 |
| Overall Precision | 0.6927 | 0.7884 | 0.7612 | 0.7724 |
| Overall Recall | 0.6773 | 0.7340 | 0.7931 | 0.7857 |
| Overall F1 | 0.6849 | 0.7602 | 0.7768 | 0.7790 |
| Overall Accuracy | 0.9445 | 0.9566 | 0.9584 | 0.9589 |
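
The per-role precision/recall/F1-plus-support layout above matches the report produced by the seqeval library, which is commonly used to score token-classification fine-tuning; whether this card's metrics were computed with it is an assumption. A minimal sketch of how such numbers are obtained, with invented tag sequences purely for illustration:

```python
from seqeval.metrics import classification_report

# Invented gold and predicted BIO tag sequences, for illustration only.
y_true = [["B-Experiencer", "O", "B-Object", "I-Object", "O"]]
y_pred = [["B-Experiencer", "O", "B-Object", "O", "O"]]

# Prints per-role precision, recall, F1, and support, as in the table above.
print(classification_report(y_true, y_pred))
```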

Framework versions

  • Transformers 4.33.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.14.5
  • Tokenizers 0.13.3