
swinv2-tiny-patch4-window8-256-dmae-humeda-DA-V2-c

This model is a fine-tuned version of microsoft/swinv2-tiny-patch4-window8-256 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9897
  • Accuracy: 0.7308
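
Since the base model is a SwinV2 image classifier, the checkpoint can be loaded with the standard transformers image-classification pipeline. This is a minimal usage sketch, not code from the author's run; the image path is a placeholder, and the predicted labels depend on the (undocumented) fine-tuning dataset:

```python
# Minimal inference sketch (assumption: standard image-classification head).
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Augusto777/swinv2-tiny-patch4-window8-256-dmae-humeda-DA-V2-c",
)

# "example.jpg" is a placeholder; any local image file, URL, or PIL image works.
print(classifier("example.jpg"))
```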

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch translating them into code follows the list):

  • learning_rate: 4e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 42
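
For reproducibility, the list above maps onto Hugging Face TrainingArguments roughly as follows. This is a hedged sketch reconstructed from the reported hyperparameters, not the author's actual script; the output_dir value is a placeholder, and model/dataset loading is omitted:

```python
from transformers import TrainingArguments

# Sketch reconstructed from the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="swinv2-tiny-patch4-window8-256-dmae-humeda-DA-V2-c",  # placeholder
    learning_rate=4e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 per device x 4 steps = 128 effective
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=42,
)
```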

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.8421  | 4    | 1.5997          | 0.2308   |
| No log        | 1.9474  | 9    | 1.5322          | 0.2500   |
| 1.5784        | 2.8421  | 13   | 1.4223          | 0.4808   |
| 1.5784        | 3.9474  | 18   | 1.2367          | 0.5192   |
| 1.5784        | 4.8421  | 22   | 1.1217          | 0.5577   |
| 1.3137        | 5.9474  | 27   | 0.9917          | 0.5192   |
| 1.3137        | 6.8421  | 31   | 1.0075          | 0.5577   |
| 0.9943        | 7.9474  | 36   | 0.9245          | 0.5769   |
| 0.9943        | 8.8421  | 40   | 0.8960          | 0.6154   |
| 0.9943        | 9.9474  | 45   | 0.8786          | 0.5769   |
| 0.8055        | 10.8421 | 49   | 0.9115          | 0.5962   |
| 0.8055        | 11.9474 | 54   | 0.9065          | 0.6538   |
| 0.8055        | 12.8421 | 58   | 0.9131          | 0.5962   |
| 0.7128        | 13.9474 | 63   | 0.8910          | 0.6538   |
| 0.7128        | 14.8421 | 67   | 0.9413          | 0.6538   |
| 0.6511        | 15.9474 | 72   | 0.9045          | 0.7115   |
| 0.6511        | 16.8421 | 76   | 0.8853          | 0.7115   |
| 0.6511        | 17.9474 | 81   | 0.9149          | 0.7115   |
| 0.5837        | 18.8421 | 85   | 0.9372          | 0.7308   |
| 0.5837        | 19.9474 | 90   | 0.8941          | 0.7308   |
| 0.5837        | 20.8421 | 94   | 0.8905          | 0.7500   |
| 0.5269        | 21.9474 | 99   | 0.8929          | 0.7308   |
| 0.5269        | 22.8421 | 103  | 0.9018          | 0.7308   |
| 0.4738        | 23.9474 | 108  | 0.9197          | 0.7308   |
| 0.4738        | 24.8421 | 112  | 0.9889          | 0.7308   |
| 0.4738        | 25.9474 | 117  | 0.9281          | 0.7308   |
| 0.4632        | 26.8421 | 121  | 0.9315          | 0.7308   |
| 0.4632        | 27.9474 | 126  | 1.0017          | 0.7115   |
| 0.4632        | 28.8421 | 130  | 0.9667          | 0.7115   |
| 0.4494        | 29.9474 | 135  | 0.9357          | 0.7308   |
| 0.4494        | 30.8421 | 139  | 0.9506          | 0.7308   |
| 0.4230        | 31.9474 | 144  | 0.9922          | 0.7115   |
| 0.4230        | 32.8421 | 148  | 1.0190          | 0.7115   |
| 0.4230        | 33.9474 | 153  | 1.0155          | 0.7115   |
| 0.4223        | 34.8421 | 157  | 1.0041          | 0.7115   |
| 0.4223        | 35.9474 | 162  | 0.9904          | 0.7308   |
| 0.4223        | 36.8421 | 166  | 0.9897          | 0.7308   |
| 0.3989        | 37.3158 | 168  | 0.9897          | 0.7308   |
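
The Accuracy column is consistent with top-1 accuracy over the evaluation set. Below is a hedged sketch of a typical compute_metrics function for this kind of Trainer run, using the evaluate library; the exact function used for this model is not documented:

```python
import numpy as np
import evaluate

# Standard top-1 accuracy, as commonly passed to Trainer(compute_metrics=...).
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # top-1 class per example
    return accuracy.compute(predictions=predictions, references=labels)
```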

Framework versions

  • Transformers 4.46.2
  • PyTorch 2.5.1+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.3
