sanali209 committed on
Commit d0550e5
1 Parent(s): 6ef1950

sanali209/reitBF

Files changed (5):
  1. README.md +50 -20
  2. config.json +4 -4
  3. model.safetensors +1 -1
  4. preprocessor_config.json +1 -15
  5. training_args.bin +3 -0
README.md CHANGED
@@ -1,31 +1,61 @@
 ---
+library_name: transformers
 tags:
-- image-classification
-- pytorch
-- huggingpics
-metrics:
-- accuracy
-
+- generated_from_trainer
+datasets:
+- imagefolder
 model-index:
-- name: sanali209/reitBF
-  results:
-  - task:
-      name: Image Classification
-      type: image-classification
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.8446534276008606
+- name: reitBF
+  results: []
 ---
 
-# sanali209/reitBF
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
 
-Autogenerated by HuggingPics🤗🖼️
+# reitBF
 
-Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
+This model was trained from scratch on the imagefolder dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.0813
 
-Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
+## Model description
 
-## Example Images
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 2e-05
+- train_batch_size: 32
+- eval_batch_size: 16
+- seed: 42
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: linear
+- num_epochs: 4
+
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| No log        | 1.0   | 47   | 1.3983          |
+| No log        | 2.0   | 94   | 1.1390          |
+| No log        | 3.0   | 141  | 1.0888          |
+| No log        | 4.0   | 188  | 1.0813          |
+
+### Framework versions
+
+- Transformers 4.44.2
+- Pytorch 2.4.0+cu121
+- Datasets 3.0.0
+- Tokenizers 0.19.1
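The training results above (47 steps per epoch at batch size 32, 188 steps over 4 epochs) pin down the approximate size of the training split, even though the card itself leaves "Training and evaluation data" blank. A quick sketch of that arithmetic; the inferred dataset size is a deduction from the log, not something stated in the card:

```python
import math

def steps_per_epoch(num_examples: int, batch_size: int) -> int:
    # One optimizer step per batch; a final partial batch still counts.
    return math.ceil(num_examples / batch_size)

# 47 steps/epoch at train_batch_size=32 implies 1473..1504 training examples.
assert steps_per_epoch(1504, 32) == 47
assert steps_per_epoch(1473, 32) == 47
assert steps_per_epoch(1472, 32) == 46  # one fewer step

# 4 epochs -> 188 total optimizer steps, matching the table's final row.
assert 4 * steps_per_epoch(1504, 32) == 188
```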
config.json CHANGED
@@ -17,9 +17,9 @@
   "initializer_range": 0.02,
   "intermediate_size": 3072,
   "label2id": {
-    "high": "0",
-    "low": "1",
-    "normal": "2"
+    "high": 0,
+    "low": 1,
+    "normal": 2
   },
   "layer_norm_eps": 1e-12,
   "model_type": "vit",
@@ -30,5 +30,5 @@
   "problem_type": "single_label_classification",
   "qkv_bias": true,
   "torch_dtype": "float32",
-  "transformers_version": "4.41.2"
+  "transformers_version": "4.44.2"
 }
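The config.json change above swaps string label ids ("0") for integers (0). The distinction matters once the config is read back as JSON, since code that indexes a logits array by the mapped id needs an int. A minimal sketch using only the stdlib json module (the snippets below mirror the old and new `label2id` values from the diff):

```python
import json

old_cfg = '{"label2id": {"high": "0", "low": "1", "normal": "2"}}'
new_cfg = '{"label2id": {"high": 0, "low": 1, "normal": 2}}'

old_ids = json.loads(old_cfg)["label2id"]
new_ids = json.loads(new_cfg)["label2id"]

# String ids are not usable as array indices without a cast.
assert old_ids["normal"] == "2"   # str
assert new_ids["normal"] == 2     # int, valid index into logits

# Inverting the mapping gives the id2label lookup used at inference time.
id2label = {v: k for k, v in new_ids.items()}
assert id2label[2] == "normal"
```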
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7455148989b872a396003cf8cdd5907a5e9473e1b83d3be62f50cccceb5c0ff3
+oid sha256:290b15fe457b05b51b2aead184578ab5a7b5968e0c316c5ca3e47cdb48f8eaad
 size 343227052
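model.safetensors is stored as a Git LFS pointer file, so the diff only changes the oid while the actual weights live in LFS storage; the size is unchanged because the retrained checkpoint has the same architecture. A small sketch of parsing such a pointer; the parser is illustrative and not part of this repo:

```python
POINTER = """version https://git-lfs.github.com/spec/v1
oid sha256:290b15fe457b05b51b2aead184578ab5a7b5968e0c316c5ca3e47cdb48f8eaad
size 343227052
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "key value"; oid is "sha256:<hex digest>".
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

ptr = parse_lfs_pointer(POINTER)
assert ptr["algo"] == "sha256"
assert ptr["size"] == 343227052
assert len(ptr["digest"]) == 64  # hex-encoded SHA-256
```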
preprocessor_config.json CHANGED
@@ -1,18 +1,4 @@
 {
-  "_valid_processor_keys": [
-    "images",
-    "do_resize",
-    "size",
-    "resample",
-    "do_rescale",
-    "rescale_factor",
-    "do_normalize",
-    "image_mean",
-    "image_std",
-    "return_tensors",
-    "data_format",
-    "input_data_format"
-  ],
   "do_normalize": true,
   "do_rescale": true,
   "do_resize": true,
@@ -21,7 +7,7 @@
     0.5,
     0.5
   ],
-  "image_processor_type": "ViTFeatureExtractor",
+  "image_processor_type": "ViTImageProcessor",
   "image_std": [
     0.5,
     0.5,
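The preprocessor keeps `do_rescale` and `do_normalize` enabled with per-channel mean and std of 0.5, so input pixels in 0..255 end up in roughly [-1, 1] before hitting the ViT. A sketch of that per-pixel arithmetic, assuming the conventional rescale factor of 1/255 (the factor itself is not shown in this diff):

```python
def preprocess_pixel(value: int,
                     rescale_factor: float = 1.0 / 255,
                     mean: float = 0.5,
                     std: float = 0.5) -> float:
    # do_rescale: map 0..255 to 0..1, then do_normalize: (x - mean) / std.
    return (value * rescale_factor - mean) / std

assert preprocess_pixel(0) == -1.0
assert abs(preprocess_pixel(255) - 1.0) < 1e-9
assert abs(preprocess_pixel(128) - 0.003921568) < 1e-6  # mid-gray ~ 0
```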
training_args.bin ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a0785acff040118a0b3b1d19a214ad4d1bacf02743803535ee3ae239642b7da
+size 5176