
EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-10_txt_vis_concat_gate

This model is a fine-tuned version of microsoft/layoutlmv3-base on an unknown dataset. It achieves the following results on the evaluation set (a minimal loading sketch follows the metrics):

  • Loss: 0.9407
  • Accuracy: 0.78
  • Exit 0 Accuracy: 0.0625
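
The sketch below shows how the checkpoint might be loaded for document-image classification with the Hugging Face transformers API. It is an assumption-laden example: the repository id is taken from the model name above, the input file name is a placeholder, and the custom early-exit ("EE") gating heads may require the author's own modeling code rather than the stock LayoutLMv3 classes.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForSequenceClassification

# Repository id inferred from the model name; adjust if the checkpoint lives elsewhere.
repo_id = "Omar95farag/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-10_txt_vis_concat_gate"

# The LayoutLMv3 processor runs OCR (requires pytesseract) and prepares tokens, boxes, and pixel values.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)

# Loading with the stock classification head is an assumption; the early-exit gates may need custom code.
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

image = Image.open("document_page.png").convert("RGB")  # placeholder input image
encoding = processor(image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)

predicted = outputs.logits.argmax(-1).item()
print(model.config.id2label.get(predicted, predicted))
```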

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a matching TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 24
  • total_train_batch_size: 48
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 60
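
The values above map directly onto transformers.TrainingArguments. Below is a minimal sketch assuming a standard Trainer setup; the output directory is a placeholder and the original run may have set additional options (e.g. for logging or the early-exit heads):

```python
from transformers import TrainingArguments

# Mirrors the listed hyperparameters; the effective batch size is
# per_device_train_batch_size * gradient_accumulation_steps = 2 * 24 = 48.
training_args = TrainingArguments(
    output_dir="layoutlmv3_ee_rvl_cdip",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=24,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=60,
)
```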

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy |
|:--------------|:------|:-----|:----------------|:---------|:----------------|
| No log | 0.96 | 16 | 2.6995 | 0.13 | 0.0625 |
| No log | 1.98 | 33 | 2.5653 | 0.21 | 0.06 |
| No log | 3.0 | 50 | 2.3927 | 0.2825 | 0.07 |
| No log | 3.96 | 66 | 2.2103 | 0.345 | 0.075 |
| No log | 4.98 | 83 | 2.0217 | 0.4525 | 0.065 |
| No log | 6.0 | 100 | 1.8175 | 0.5325 | 0.06 |
| No log | 6.96 | 116 | 1.6096 | 0.5875 | 0.0625 |
| No log | 7.98 | 133 | 1.4160 | 0.6375 | 0.0625 |
| No log | 9.0 | 150 | 1.3283 | 0.6575 | 0.0625 |
| No log | 9.96 | 166 | 1.2253 | 0.7 | 0.0625 |
| No log | 10.98 | 183 | 1.1531 | 0.7225 | 0.0625 |
| No log | 12.0 | 200 | 1.0661 | 0.7375 | 0.0625 |
| No log | 12.96 | 216 | 1.0565 | 0.73 | 0.0625 |
| No log | 13.98 | 233 | 1.0281 | 0.73 | 0.0625 |
| No log | 15.0 | 250 | 1.0459 | 0.7275 | 0.0625 |
| No log | 15.96 | 266 | 0.9802 | 0.75 | 0.0625 |
| No log | 16.98 | 283 | 0.9665 | 0.7525 | 0.0625 |
| No log | 18.0 | 300 | 0.9655 | 0.7475 | 0.0625 |
| No log | 18.96 | 316 | 0.9463 | 0.7675 | 0.0625 |
| No log | 19.98 | 333 | 0.9392 | 0.765 | 0.0625 |
| No log | 21.0 | 350 | 0.9768 | 0.75 | 0.0625 |
| No log | 21.96 | 366 | 0.9973 | 0.7525 | 0.0625 |
| No log | 22.98 | 383 | 0.9660 | 0.765 | 0.0625 |
| No log | 24.0 | 400 | 1.0065 | 0.7475 | 0.0625 |
| No log | 24.96 | 416 | 0.9077 | 0.7825 | 0.0625 |
| No log | 25.98 | 433 | 0.9568 | 0.775 | 0.0625 |
| No log | 27.0 | 450 | 0.9389 | 0.775 | 0.0625 |
| No log | 27.96 | 466 | 0.9266 | 0.78 | 0.0625 |
| No log | 28.98 | 483 | 0.9301 | 0.7825 | 0.0625 |
| 0.5845 | 30.0 | 500 | 0.9220 | 0.785 | 0.0625 |
| 0.5845 | 30.96 | 516 | 0.9563 | 0.77 | 0.0625 |
| 0.5845 | 31.98 | 533 | 0.9272 | 0.785 | 0.0625 |
| 0.5845 | 33.0 | 550 | 0.9430 | 0.7775 | 0.0625 |
| 0.5845 | 33.96 | 566 | 0.9525 | 0.78 | 0.0625 |
| 0.5845 | 34.98 | 583 | 0.9190 | 0.7975 | 0.0625 |
| 0.5845 | 36.0 | 600 | 0.9416 | 0.765 | 0.0625 |
| 0.5845 | 36.96 | 616 | 0.9286 | 0.7825 | 0.0625 |
| 0.5845 | 37.98 | 633 | 0.9411 | 0.775 | 0.0625 |
| 0.5845 | 39.0 | 650 | 0.9468 | 0.77 | 0.0625 |
| 0.5845 | 39.96 | 666 | 0.9305 | 0.7825 | 0.0625 |
| 0.5845 | 40.98 | 683 | 0.9428 | 0.775 | 0.0625 |
| 0.5845 | 42.0 | 700 | 0.9484 | 0.78 | 0.0625 |
| 0.5845 | 42.96 | 716 | 0.9411 | 0.7825 | 0.0625 |
| 0.5845 | 43.98 | 733 | 0.9564 | 0.775 | 0.0625 |
| 0.5845 | 45.0 | 750 | 0.9293 | 0.785 | 0.0625 |
| 0.5845 | 45.96 | 766 | 0.9578 | 0.78 | 0.0625 |
| 0.5845 | 46.98 | 783 | 0.9377 | 0.79 | 0.0625 |
| 0.5845 | 48.0 | 800 | 0.9417 | 0.78 | 0.0625 |
| 0.5845 | 48.96 | 816 | 0.9495 | 0.7825 | 0.0625 |
| 0.5845 | 49.98 | 833 | 0.9401 | 0.7875 | 0.0625 |
| 0.5845 | 51.0 | 850 | 0.9458 | 0.7875 | 0.0625 |
| 0.5845 | 51.96 | 866 | 0.9468 | 0.7875 | 0.0625 |
| 0.5845 | 52.98 | 883 | 0.9341 | 0.79 | 0.0625 |
| 0.5845 | 54.0 | 900 | 0.9344 | 0.7875 | 0.0625 |
| 0.5845 | 54.96 | 916 | 0.9350 | 0.785 | 0.0625 |
| 0.5845 | 55.98 | 933 | 0.9391 | 0.78 | 0.0625 |
| 0.5845 | 57.0 | 950 | 0.9408 | 0.78 | 0.0625 |
| 0.5845 | 57.6 | 960 | 0.9407 | 0.78 | 0.0625 |

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1+cu117
  • Datasets 2.13.1
  • Tokenizers 0.13.3