# vit-base-patch16-224-Trial008-YEL_STEM2
This model is a fine-tuned version of google/vit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.1024
- Accuracy: 1.0
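
Since the card does not yet include usage instructions, here is a minimal inference sketch. The repository id and image path are placeholders (not taken from this card); the snippet only assumes the standard 🤗 Transformers image-classification pipeline.

```python
from transformers import pipeline
from PIL import Image

# "your-username/vit-base-patch16-224-Trial008-YEL_STEM2" is a placeholder;
# substitute the actual Hub repo id of this checkpoint.
classifier = pipeline(
    "image-classification",
    model="your-username/vit-base-patch16-224-Trial008-YEL_STEM2",
)

# "example.jpg" is a hypothetical input image; the pipeline's image processor
# resizes it to the 224x224 resolution the ViT backbone expects.
image = Image.open("example.jpg")

for prediction in classifier(image):
    print(prediction["label"], round(prediction["score"], 4))
```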
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 120
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
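
For orientation, these settings map onto the 🤗 Transformers `Trainer` API roughly as in the sketch below. The `output_dir` is a placeholder, and anything not listed above (optimizer variant, logging, evaluation strategy, etc.) is left at its default.

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the hyperparameters above.
# With per_device_train_batch_size=30 and gradient_accumulation_steps=4,
# the effective (total) train batch size is 120 on a single device.
training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-Trial008-YEL_STEM2",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=30,
    per_device_eval_batch_size=30,
    gradient_accumulation_steps=4,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```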
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
0.7598 | 0.67 | 1 | 0.6968 | 0.6471 |
0.7294 | 2.0 | 3 | 0.7662 | 0.4706 |
0.6662 | 2.67 | 4 | 0.7196 | 0.5882 |
0.5662 | 4.0 | 6 | 0.3941 | 0.8235 |
0.4781 | 4.67 | 7 | 0.3458 | 0.8235 |
0.3259 | 6.0 | 9 | 0.1699 | 0.9412 |
0.2903 | 6.67 | 10 | 0.1024 | 1.0 |
0.2206 | 8.0 | 12 | 0.0788 | 1.0 |
0.3215 | 8.67 | 13 | 0.0414 | 1.0 |
0.1741 | 10.0 | 15 | 0.0218 | 1.0 |
0.2222 | 10.67 | 16 | 0.0207 | 1.0 |
0.1534 | 12.0 | 18 | 0.0128 | 1.0 |
0.273 | 12.67 | 19 | 0.0103 | 1.0 |
0.2021 | 14.0 | 21 | 0.0080 | 1.0 |
0.2193 | 14.67 | 22 | 0.0100 | 1.0 |
0.2132 | 16.0 | 24 | 0.0247 | 1.0 |
0.2163 | 16.67 | 25 | 0.0266 | 1.0 |
0.1626 | 18.0 | 27 | 0.0101 | 1.0 |
0.2492 | 18.67 | 28 | 0.0059 | 1.0 |
0.1308 | 20.0 | 30 | 0.0056 | 1.0 |
0.2144 | 20.67 | 31 | 0.0060 | 1.0 |
0.1389 | 22.0 | 33 | 0.0044 | 1.0 |
0.2548 | 22.67 | 34 | 0.0040 | 1.0 |
0.1324 | 24.0 | 36 | 0.0037 | 1.0 |
0.1958 | 24.67 | 37 | 0.0036 | 1.0 |
0.2476 | 26.0 | 39 | 0.0035 | 1.0 |
0.1439 | 26.67 | 40 | 0.0033 | 1.0 |
0.1202 | 28.0 | 42 | 0.0030 | 1.0 |
0.1368 | 28.67 | 43 | 0.0028 | 1.0 |
0.1016 | 30.0 | 45 | 0.0027 | 1.0 |
0.1282 | 30.67 | 46 | 0.0027 | 1.0 |
0.1128 | 32.0 | 48 | 0.0026 | 1.0 |
0.2366 | 32.67 | 49 | 0.0026 | 1.0 |
0.1727 | 33.33 | 50 | 0.0026 | 1.0 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.1