---
license: other
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-toolwear
  results: []
---

# segformer-b0-finetuned-segments-toolwear

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset. It achieves the following results on the evaluation set (a brief inference sketch follows the metrics):

- Loss: 0.1141
- Mean Iou: 0.4323
- Mean Accuracy: 0.8645
- Overall Accuracy: 0.8645
- Accuracy Unlabeled: nan
- Accuracy Tool: nan
- Accuracy Wear: 0.8645
- Iou Unlabeled: 0.0
- Iou Tool: nan
- Iou Wear: 0.8645

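Accuracy Tool and Iou Tool are reported as `nan`; the Mean IoU of 0.4323 is consistent with averaging the remaining per-class IoUs (0.0 for Unlabeled and 0.8645 for Wear).

Since the sections below are placeholders, the following is a minimal, non-authoritative sketch of how a SegFormer checkpoint like this one can be loaded for inference with the `transformers` library. The hub id and the input image path are assumptions, not values taken from the original card.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

# Hub id assumed from this repository; substitute a local path if needed.
checkpoint = "HorcruxNo13/segformer-b0-finetuned-segments-toolwear"

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("tool_image.png").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, H/4, W/4)

# SegFormer predicts at 1/4 resolution, so upsample before taking the argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # (H, W) tensor of class indices
```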
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows this list):

- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

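As a non-authoritative sketch, the hyperparameters above map onto a `transformers.TrainingArguments` configuration roughly as follows. The output directory, evaluation strategy, and eval cadence are assumptions (the 20-step cadence is inferred from the results table below), not values taken from the original training script.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-toolwear",
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="steps",  # assumption: periodic evaluation, matching the table below
    eval_steps=20,                # assumption: inferred from the 20-step evaluation cadence
)
```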
### Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Tool | Accuracy Wear | Iou Unlabeled | Iou Tool | Iou Wear |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:-------------:|:-------------:|:-------------:|:--------:|:--------:|
| 0.7016        | 1.82  | 20   | 0.9090          | 0.4940   | 0.9880        | 0.9880           | nan                | nan           | 0.9880        | 0.0           | nan      | 0.9880   |
| 0.5409        | 3.64  | 40   | 0.6405          | 0.4986   | 0.9972        | 0.9972           | nan                | nan           | 0.9972        | 0.0           | nan      | 0.9972   |
| 0.4261        | 5.45  | 60   | 0.4407          | 0.4846   | 0.9692        | 0.9692           | nan                | nan           | 0.9692        | 0.0           | nan      | 0.9692   |
| 0.3251        | 7.27  | 80   | 0.4075          | 0.4692   | 0.9383        | 0.9383           | nan                | nan           | 0.9383        | 0.0           | nan      | 0.9383   |
| 0.2993        | 9.09  | 100  | 0.3055          | 0.4739   | 0.9477        | 0.9477           | nan                | nan           | 0.9477        | 0.0           | nan      | 0.9477   |
| 0.2724        | 10.91 | 120  | 0.3326          | 0.4759   | 0.9518        | 0.9518           | nan                | nan           | 0.9518        | 0.0           | nan      | 0.9518   |
| 0.2154        | 12.73 | 140  | 0.3281          | 0.4786   | 0.9573        | 0.9573           | nan                | nan           | 0.9573        | 0.0           | nan      | 0.9573   |
| 0.1732        | 14.55 | 160  | 0.2322          | 0.4415   | 0.8831        | 0.8831           | nan                | nan           | 0.8831        | 0.0           | nan      | 0.8831   |
| 0.1376        | 16.36 | 180  | 0.2063          | 0.3969   | 0.7937        | 0.7937           | nan                | nan           | 0.7937        | 0.0           | nan      | 0.7937   |
| 0.1326        | 18.18 | 200  | 0.2147          | 0.4613   | 0.9226        | 0.9226           | nan                | nan           | 0.9226        | 0.0           | nan      | 0.9226   |
| 0.1333        | 20.0  | 220  | 0.1711          | 0.4373   | 0.8747        | 0.8747           | nan                | nan           | 0.8747        | 0.0           | nan      | 0.8747   |
| 0.1235        | 21.82 | 240  | 0.1550          | 0.4374   | 0.8748        | 0.8748           | nan                | nan           | 0.8748        | 0.0           | nan      | 0.8748   |
| 0.0976        | 23.64 | 260  | 0.1640          | 0.4373   | 0.8745        | 0.8745           | nan                | nan           | 0.8745        | 0.0           | nan      | 0.8745   |
| 0.078         | 25.45 | 280  | 0.1463          | 0.4505   | 0.9010        | 0.9010           | nan                | nan           | 0.9010        | 0.0           | nan      | 0.9010   |
| 0.0753        | 27.27 | 300  | 0.1395          | 0.4387   | 0.8774        | 0.8774           | nan                | nan           | 0.8774        | 0.0           | nan      | 0.8774   |
| 0.0703        | 29.09 | 320  | 0.1529          | 0.4550   | 0.9100        | 0.9100           | nan                | nan           | 0.9100        | 0.0           | nan      | 0.9100   |
| 0.0665        | 30.91 | 340  | 0.1336          | 0.4414   | 0.8828        | 0.8828           | nan                | nan           | 0.8828        | 0.0           | nan      | 0.8828   |
| 0.0606        | 32.73 | 360  | 0.1320          | 0.4484   | 0.8968        | 0.8968           | nan                | nan           | 0.8968        | 0.0           | nan      | 0.8968   |
| 0.0814        | 34.55 | 380  | 0.1215          | 0.4220   | 0.8439        | 0.8439           | nan                | nan           | 0.8439        | 0.0           | nan      | 0.8439   |
| 0.0578        | 36.36 | 400  | 0.1194          | 0.4266   | 0.8531        | 0.8531           | nan                | nan           | 0.8531        | 0.0           | nan      | 0.8531   |
| 0.0511        | 38.18 | 420  | 0.1232          | 0.4417   | 0.8835        | 0.8835           | nan                | nan           | 0.8835        | 0.0           | nan      | 0.8835   |
| 0.0471        | 40.0  | 440  | 0.1182          | 0.4409   | 0.8817        | 0.8817           | nan                | nan           | 0.8817        | 0.0           | nan      | 0.8817   |
| 0.0484        | 41.82 | 460  | 0.1084          | 0.4258   | 0.8515        | 0.8515           | nan                | nan           | 0.8515        | 0.0           | nan      | 0.8515   |
| 0.0497        | 43.64 | 480  | 0.1212          | 0.4425   | 0.8850        | 0.8850           | nan                | nan           | 0.8850        | 0.0           | nan      | 0.8850   |
| 0.0624        | 45.45 | 500  | 0.1071          | 0.4266   | 0.8531        | 0.8531           | nan                | nan           | 0.8531        | 0.0           | nan      | 0.8531   |
| 0.0509        | 47.27 | 520  | 0.1157          | 0.4339   | 0.8678        | 0.8678           | nan                | nan           | 0.8678        | 0.0           | nan      | 0.8678   |
| 0.0496        | 49.09 | 540  | 0.1141          | 0.4323   | 0.8645        | 0.8645           | nan                | nan           | 0.8645        | 0.0           | nan      | 0.8645   |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3