
vit-base-patch16-224-Trial007-YEL_STEM4

This model is a fine-tuned version of google/vit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1948
  • Accuracy: 1.0

Model description

More information needed
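The card leaves this section blank, but the base architecture is documented upstream: google/vit-base-patch16-224 splits a 224×224 RGB image into non-overlapping 16×16 patches, giving 14×14 = 196 patch tokens of 16·16·3 = 768 values each before the linear projection. A minimal NumPy sketch of that patching step (the image below is random stand-in data, not a real input):

```python
import numpy as np

# Stand-in for a preprocessed 224x224 RGB input image.
image = np.random.rand(224, 224, 3)

patch = 16
side = 224 // patch  # 14 patches per side

# Split into 16x16 patches and flatten each to a 768-value vector,
# mirroring how ViT tokenizes an image before the patch embedding.
patches = image.reshape(side, patch, side, patch, 3)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * 3)

print(patches.shape)  # (196, 768)
```

The 768-dimensional flattened patches are then linearly projected into the transformer's hidden space (also 768 for ViT-Base).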

Intended uses & limitations

More information needed
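Once the checkpoint is published, inference would follow the standard transformers image-classification pipeline. A sketch, assuming the Hub repo id matches this card's title (the real path may carry an organization or user prefix):

```python
from typing import List

# Assumed repo id, taken from this card's title; the actual Hub path
# may include an organization/user prefix.
MODEL_ID = "vit-base-patch16-224-Trial007-YEL_STEM4"

def classify(image_path: str) -> List[dict]:
    """Classify one image with the fine-tuned checkpoint.

    transformers is imported lazily because constructing the pipeline
    downloads the model weights on first use.
    """
    from transformers import pipeline

    clf = pipeline("image-classification", model=MODEL_ID)
    return clf(image_path)  # list of {"label": ..., "score": ...} dicts
```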

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 60
  • eval_batch_size: 60
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 240
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
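These settings imply an effective batch size of 60 × 4 = 240 and a learning rate that warms up linearly over the first 10% of optimizer steps, then decays linearly to zero. A sketch of that schedule, assuming the run's 100 total optimizer steps (the last step reported in the results table below):

```python
TOTAL_STEPS = 100                       # final optimizer step in the results table
WARMUP_STEPS = int(0.1 * TOTAL_STEPS)   # lr_scheduler_warmup_ratio: 0.1 -> 10 steps
PEAK_LR = 5e-05                         # learning_rate: 5e-05

def lr_at(step: int) -> float:
    """Linear warmup to PEAK_LR, then linear decay to 0 (lr_scheduler_type: linear)."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return PEAK_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)

# Effective batch size: per-device batch x gradient accumulation steps.
effective_batch = 60 * 4  # = total_train_batch_size: 240
```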

Training results

Training Loss    Epoch    Step    Validation Loss    Accuracy
0.8588           0.89     2       0.7925             0.4815
0.7235           1.78     4       0.6471             0.6852
0.6009           2.67     6       0.5246             0.7222
0.4196           4.0      9       0.3422             0.9074
0.4022           4.89     11      0.3213             0.9259
0.3531           5.78     13      0.1948             1.0
0.3095           6.67     15      0.1196             1.0
0.283            8.0      18      0.0666             1.0
0.1607           8.89     20      0.0401             1.0
0.1459           9.78     22      0.0302             1.0
0.1325           10.67    24      0.0223             1.0
0.1362           12.0     27      0.0205             1.0
0.1623           12.89    29      0.0094             1.0
0.0974           13.78    31      0.0046             1.0
0.1077           14.67    33      0.0054             1.0
0.0742           16.0     36      0.0040             1.0
0.1468           16.89    38      0.0030             1.0
0.077            17.78    40      0.0041             1.0
0.0907           18.67    42      0.0109             1.0
0.0363           20.0     45      0.0023             1.0
0.0519           20.89    47      0.0016             1.0
0.0672           21.78    49      0.0015             1.0
0.0894           22.67    51      0.0020             1.0
0.0267           24.0     54      0.0020             1.0
0.0639           24.89    56      0.0019             1.0
0.0675           25.78    58      0.0023             1.0
0.0508           26.67    60      0.0020             1.0
0.0509           28.0     63      0.0014             1.0
0.0573           28.89    65      0.0018             1.0
0.0584           29.78    67      0.0014             1.0
0.0657           30.67    69      0.0012             1.0
0.0635           32.0     72      0.0009             1.0
0.0617           32.89    74      0.0008             1.0
0.0614           33.78    76      0.0008             1.0
0.0614           34.67    78      0.0009             1.0
0.0618           36.0     81      0.0008             1.0
0.0384           36.89    83      0.0008             1.0
0.0565           37.78    85      0.0008             1.0
0.0784           38.67    87      0.0008             1.0
0.0313           40.0     90      0.0007             1.0
0.0496           40.89    92      0.0007             1.0
0.0273           41.78    94      0.0008             1.0
0.0448           42.67    96      0.0008             1.0
0.0948           44.0     99      0.0007             1.0
0.0371           44.44    100     0.0007             1.0
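The accuracy column hints at a very small evaluation set: every distinct value rounds to k/54 for some integer k (54 examples is a guess from the numbers, not something the card states), which would explain both the coarse early accuracies and the rapid jump to 1.0. A quick consistency check:

```python
# Distinct accuracy values from the table above.
accuracies = [0.4815, 0.6852, 0.7222, 0.9074, 0.9259, 1.0]
N = 54  # hypothetical eval-set size; not stated anywhere in the card

for acc in accuracies:
    k = round(acc * N)
    # Each reported accuracy matches k/N to 4 decimal places.
    assert abs(k / N - acc) < 5e-5, acc

print([round(a * N) for a in accuracies])  # [26, 37, 39, 49, 50, 54]
```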

Framework versions

  • Transformers 4.30.0.dev0
  • Pytorch 1.12.1
  • Datasets 2.12.0
  • Tokenizers 0.13.1