aoi_clip_high_resolution_text_only_gpt_new_sampler

This model is a fine-tuned version of OFA-Sys/chinese-clip-vit-base-patch16 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.2382
  • Accuracy: 0.1291
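
The checkpoint can be tried with the stock Chinese-CLIP classes in Transformers. The sketch below is a minimal, hedged example (not taken from the author's code): it assumes the repository is loadable with `ChineseCLIPModel`, and falls back to the base model's processor if the repo does not ship processor files.

```python
# Hedged usage sketch (assumption: the checkpoint loads with the stock
# Chinese-CLIP classes from transformers).
import torch
from PIL import Image
from transformers import ChineseCLIPModel, ChineseCLIPProcessor

repo_id = "sharkMeow/aoi_clip_high_resolution_text_only_gpt_new_sampler"
model = ChineseCLIPModel.from_pretrained(repo_id)
try:
    processor = ChineseCLIPProcessor.from_pretrained(repo_id)
except OSError:
    # Fallback assumption: reuse the base model's processor.
    processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

image = Image.open("example.jpg")        # placeholder path to any local image
texts = ["一只猫", "一条狗", "一辆车"]    # candidate Chinese captions

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-to-text similarity, softmaxed over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```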

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 50
  • eval_batch_size: 20
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 200
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100.0
  • mixed_precision_training: Native AMP
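
These settings map roughly onto `transformers.TrainingArguments` as sketched below. This is a hedged reconstruction, not the author's training script: `output_dir` is a placeholder, and the dataset, collator, and Trainer wiring are omitted.

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="aoi_clip_high_resolution_text_only_gpt_new_sampler",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=50,
    per_device_eval_batch_size=20,
    seed=42,
    gradient_accumulation_steps=4,   # 50 * 4 = 200 total train batch size
    lr_scheduler_type="linear",
    num_train_epochs=100.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                       # "Native AMP" mixed precision
)
```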

Training results

Training Loss | Epoch   | Step  | Validation Loss | Accuracy
2.5113        | 9.9839  | 3110  | 2.8518          | 0.1566
2.3003        | 19.9679 | 6220  | 3.0242          | 0.1476
2.2124        | 29.9518 | 9330  | 3.1246          | 0.1404
2.1682        | 39.9358 | 12440 | 3.1806          | 0.1371
2.1384        | 49.9197 | 15550 | 3.2024          | 0.1342
2.1246        | 59.9037 | 18660 | 3.2168          | 0.1323
2.1121        | 69.8876 | 21770 | 3.2209          | 0.1313
2.0987        | 79.8716 | 24880 | 3.2137          | 0.1307
2.0986        | 89.8555 | 27990 | 3.2374          | 0.1301
2.0962        | 99.8395 | 31100 | 3.2382          | 0.1295

Framework versions

  • Transformers 4.42.3
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1