---
base_model: aubmindlab/bert-base-arabertv02
tags:
  - generated_from_trainer
model-index:
  - name: arabert_cross_vocabulary_task7_fold4
    results: []
---

arabert_cross_vocabulary_task7_fold4

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 0.9341
  • Qwk: 0.8036
  • Mse: 0.9341
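
Since the card does not document usage, here is a minimal loading sketch. It assumes the model is published under salbatarni/arabert_cross_vocabulary_task7_fold4 (inferred from the card title) and carries a single-output regression head, which would be consistent with the MSE-style loss and Qwk metric above; both points are assumptions, not confirmed by the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repo id, inferred from the card title.
model_id = "salbatarni/arabert_cross_vocabulary_task7_fold4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assumed single-output regression head (consistent with the MSE loss / Qwk metric).
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "..."  # an Arabic input text
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```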

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
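
A minimal sketch of how these settings map onto transformers.TrainingArguments, assuming the card was produced by the standard Trainer workflow; the output_dir name is illustrative, and the Adam betas and epsilon restate the Trainer defaults listed above.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the listed hyperparameters.
# output_dir is illustrative; adam_beta1/beta2/epsilon restate the defaults.
training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task7_fold4",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```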

Training results

"No log" in the Training Loss column indicates that the training loss had not yet been logged at those evaluation steps (the run is shorter than the Trainer's logging interval).

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.0351 | 2    | 3.7363          | 0.0136 | 3.7363 |
| No log        | 0.0702 | 4    | 2.3106          | 0.1619 | 2.3106 |
| No log        | 0.1053 | 6    | 1.8172          | 0.1692 | 1.8172 |
| No log        | 0.1404 | 8    | 1.6116          | 0.2787 | 1.6116 |
| No log        | 0.1754 | 10   | 1.8910          | 0.3258 | 1.8910 |
| No log        | 0.2105 | 12   | 1.8785          | 0.4430 | 1.8785 |
| No log        | 0.2456 | 14   | 1.8131          | 0.4299 | 1.8131 |
| No log        | 0.2807 | 16   | 2.0288          | 0.5063 | 2.0288 |
| No log        | 0.3158 | 18   | 1.6252          | 0.5783 | 1.6252 |
| No log        | 0.3509 | 20   | 1.4668          | 0.6465 | 1.4668 |
| No log        | 0.3860 | 22   | 1.1133          | 0.6929 | 1.1133 |
| No log        | 0.4211 | 24   | 0.8668          | 0.7249 | 0.8668 |
| No log        | 0.4561 | 26   | 0.9262          | 0.7457 | 0.9262 |
| No log        | 0.4912 | 28   | 1.0096          | 0.7462 | 1.0096 |
| No log        | 0.5263 | 30   | 1.0968          | 0.7380 | 1.0968 |
| No log        | 0.5614 | 32   | 1.1232          | 0.7473 | 1.1232 |
| No log        | 0.5965 | 34   | 1.1632          | 0.7453 | 1.1632 |
| No log        | 0.6316 | 36   | 1.1137          | 0.7662 | 1.1137 |
| No log        | 0.6667 | 38   | 1.0328          | 0.7794 | 1.0328 |
| No log        | 0.7018 | 40   | 0.8546          | 0.8110 | 0.8546 |
| No log        | 0.7368 | 42   | 0.7276          | 0.7840 | 0.7276 |
| No log        | 0.7719 | 44   | 0.6916          | 0.7793 | 0.6916 |
| No log        | 0.8070 | 46   | 0.7025          | 0.7855 | 0.7025 |
| No log        | 0.8421 | 48   | 0.7202          | 0.7971 | 0.7202 |
| No log        | 0.8772 | 50   | 0.7770          | 0.8095 | 0.7770 |
| No log        | 0.9123 | 52   | 0.8545          | 0.8154 | 0.8545 |
| No log        | 0.9474 | 54   | 0.9105          | 0.8088 | 0.9105 |
| No log        | 0.9825 | 56   | 0.9341          | 0.8036 | 0.9341 |
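
The card does not define the Qwk and Mse columns. Assuming Qwk is Cohen's quadratic weighted kappa over integer-rounded scores and Mse is plain mean squared error (it matches the validation loss exactly, suggesting an MSE training objective), they could be computed as in the sketch below; the rounding step is an assumption about how continuous predictions map to rating classes.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(preds: np.ndarray, labels: np.ndarray) -> dict:
    # Assumed: quadratic weighted kappa over integer-rounded scores.
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),
        np.rint(preds).astype(int),
        weights="quadratic",
    )
    # Mse equals the validation loss here, consistent with an MSE objective.
    mse = mean_squared_error(labels, preds)
    return {"qwk": qwk, "mse": mse}
```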

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1