rwightman committed
Commit 1a15a76
1 Parent(s): cfeecb8
Files changed (4)
  1. README.md +164 -0
  2. config.json +40 -0
  3. model.safetensors +3 -0
  4. pytorch_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,164 @@
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-12k
---
# Model card for mambaout_base_plus_rw.sw_e150_in12k_ft_in1k

A MambaOut image classification model with `timm` specific architecture customizations. Pretrained on ImageNet-12k and fine-tuned on ImageNet-1k by Ross Wightman using a Swin / ConvNeXt based recipe.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 101.7
  - GMACs: 19.2
  - Activations (M): 45.2
  - Image size: train = 224 x 224, test = 288 x 288
- **Pretrain Dataset:** ImageNet-12k
- **Dataset:** ImageNet-1k
- **Papers:**
  - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
  - MambaOut: Do We Really Need Mamba for Vision?: https://arxiv.org/abs/2405.07992
- **Original:** https://github.com/yuweihao/MambaOut
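
The stats above can be sanity-checked directly from the instantiated model; a minimal sketch verifying the parameter count:

```python
import timm

model = timm.create_model('mambaout_base_plus_rw.sw_e150_in12k_ft_in1k', pretrained=True)
# sum all parameter elements; should print ~101.7M, matching the Params (M) stat above
print(f'{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M params')
```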

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('mambaout_base_plus_rw.sw_e150_in12k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
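
To turn the top-5 indices into human-readable labels, recent `timm` releases ship class-metadata helpers; a minimal sketch, assuming the `ImageNetInfo` / `infer_imagenet_subset` helpers are available in your installed version:

```python
from timm.data import ImageNetInfo, infer_imagenet_subset

# look up the label set the classifier head was fine-tuned on (imagenet-1k here)
info = ImageNetInfo(infer_imagenet_subset(model))
for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f'{info.index_to_description(idx.item())}: {prob.item():.2f}%')
```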

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mambaout_base_plus_rw.sw_e150_in12k_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 56, 56, 128])
    #  torch.Size([1, 28, 28, 256])
    #  torch.Size([1, 14, 14, 512])
    #  torch.Size([1, 7, 7, 768])
    print(o.shape)
```
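
Note that the shapes above are channels-last (NHWC), as produced by the MambaOut stages. If a downstream consumer expects the usual NCHW convolutional layout, a simple permute suffices; a minimal sketch continuing from the block above:

```python
# convert each NHWC feature map to NCHW for conv-style consumers
nchw_maps = [o.permute(0, 3, 1, 2).contiguous() for o in output]
for o in nchw_maps:
    print(o.shape)  # e.g. torch.Size([1, 128, 56, 56]) for the first stage
```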

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mambaout_base_plus_rw.sw_e150_in12k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)

output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 7, 7, 768) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
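
A common use of these embeddings is image similarity; a minimal sketch continuing from the block above (the second image is a placeholder, substitute one of your own):

```python
import torch
import torch.nn.functional as F

def embed(image):
    # produce a (1, num_features) embedding for a single PIL image
    with torch.no_grad():
        features = model.forward_features(transforms(image).unsqueeze(0))
        return model.forward_head(features, pre_logits=True)

emb_a = embed(img)
emb_b = embed(img)  # substitute a second PIL image here
print(F.cosine_similarity(emb_a, emb_b).item())  # 1.0 for identical inputs
```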

## Model Comparison
### By Top-1

|model|img_size|top1|top5|param_count|
|---|---|---|---|---|
|[mambaout_base_plus_rw.sw_e150_in12k_ft_in1k](http://huggingface.co/timm/mambaout_base_plus_rw.sw_e150_in12k_ft_in1k)|288|86.912|98.236|101.66|
|[mambaout_base_plus_rw.sw_e150_in12k_ft_in1k](http://huggingface.co/timm/mambaout_base_plus_rw.sw_e150_in12k_ft_in1k)|224|86.632|98.156|101.66|
|[mambaout_base_tall_rw.sw_e500_in1k](http://huggingface.co/timm/mambaout_base_tall_rw.sw_e500_in1k)|288|84.974|97.332|86.48|
|[mambaout_base_wide_rw.sw_e500_in1k](http://huggingface.co/timm/mambaout_base_wide_rw.sw_e500_in1k)|288|84.962|97.208|94.45|
|[mambaout_base_short_rw.sw_e500_in1k](http://huggingface.co/timm/mambaout_base_short_rw.sw_e500_in1k)|288|84.832|97.27|88.83|
|[mambaout_base.in1k](http://huggingface.co/timm/mambaout_base.in1k)|288|84.72|96.93|84.81|
|[mambaout_small_rw.sw_e450_in1k](http://huggingface.co/timm/mambaout_small_rw.sw_e450_in1k)|288|84.598|97.098|48.5|
|[mambaout_small.in1k](http://huggingface.co/timm/mambaout_small.in1k)|288|84.5|96.974|48.49|
|[mambaout_base_wide_rw.sw_e500_in1k](http://huggingface.co/timm/mambaout_base_wide_rw.sw_e500_in1k)|224|84.454|96.864|94.45|
|[mambaout_base_tall_rw.sw_e500_in1k](http://huggingface.co/timm/mambaout_base_tall_rw.sw_e500_in1k)|224|84.434|96.958|86.48|
|[mambaout_base_short_rw.sw_e500_in1k](http://huggingface.co/timm/mambaout_base_short_rw.sw_e500_in1k)|224|84.362|96.952|88.83|
|[mambaout_base.in1k](http://huggingface.co/timm/mambaout_base.in1k)|224|84.168|96.68|84.81|
|[mambaout_small.in1k](http://huggingface.co/timm/mambaout_small.in1k)|224|84.086|96.63|48.49|
|[mambaout_small_rw.sw_e450_in1k](http://huggingface.co/timm/mambaout_small_rw.sw_e450_in1k)|224|84.024|96.752|48.5|
|[mambaout_tiny.in1k](http://huggingface.co/timm/mambaout_tiny.in1k)|288|83.448|96.538|26.55|
|[mambaout_tiny.in1k](http://huggingface.co/timm/mambaout_tiny.in1k)|224|82.736|96.1|26.55|
|[mambaout_kobe.in1k](http://huggingface.co/timm/mambaout_kobe.in1k)|288|81.054|95.718|9.14|
|[mambaout_kobe.in1k](http://huggingface.co/timm/mambaout_kobe.in1k)|224|79.986|94.986|9.14|
|[mambaout_femto.in1k](http://huggingface.co/timm/mambaout_femto.in1k)|288|79.848|95.14|7.3|
|[mambaout_femto.in1k](http://huggingface.co/timm/mambaout_femto.in1k)|224|78.87|94.408|7.3|
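
Each model appears twice because results are reported at both the 224 train resolution and the 288 test resolution (matching `test_input_size` in the config). A minimal sketch for reproducing the 288 preprocessing, assuming the `use_test_size` flag accepted by `resolve_model_data_config` in recent `timm` releases:

```python
import timm

model = timm.create_model('mambaout_base_plus_rw.sw_e150_in12k_ft_in1k', pretrained=True).eval()

# resolve the 288 x 288 test-time preprocessing instead of the 224 x 224 train config
data_config = timm.data.resolve_model_data_config(model, use_test_size=True)
transforms = timm.data.create_transform(**data_config, is_training=False)
print(data_config['input_size'])  # expected (3, 288, 288)
```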

## Citation
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{yu2024mambaout,
  title={MambaOut: Do We Really Need Mamba for Vision?},
  author={Yu, Weihao and Wang, Xinchao},
  journal={arXiv preprint arXiv:2405.07992},
  year={2024}
}
```
config.json ADDED
@@ -0,0 +1,40 @@
{
    "architecture": "mambaout_base_plus_rw",
    "num_classes": 1000,
    "num_features": 768,
    "pretrained_cfg": {
        "tag": "sw_e150_in12k_ft_in1k",
        "custom_load": false,
        "input_size": [
            3,
            224,
            224
        ],
        "test_input_size": [
            3,
            288,
            288
        ],
        "fixed_input_size": false,
        "interpolation": "bicubic",
        "crop_pct": 1.0,
        "crop_mode": "center",
        "mean": [
            0.485,
            0.456,
            0.406
        ],
        "std": [
            0.229,
            0.224,
            0.225
        ],
        "num_classes": 1000,
        "pool_size": [
            7,
            7
        ],
        "first_conv": "stem.conv1",
        "classifier": "head.fc"
    }
}
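
The `pretrained_cfg` block above is what drives the default preprocessing in the usage examples. Individual fields can be overridden at load time; a minimal sketch, assuming `create_model`'s `pretrained_cfg_overlay` argument (present in recent `timm` versions), with the 0.95 crop value chosen purely for illustration:

```python
import timm

# load with a modified preprocessing config; 0.95 is purely illustrative
model = timm.create_model(
    'mambaout_base_plus_rw.sw_e150_in12k_ft_in1k',
    pretrained=True,
    pretrained_cfg_overlay=dict(crop_pct=0.95),
)
print(model.pretrained_cfg['crop_pct'])  # 0.95
```
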
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:96768938fe9025f1619d12cc3ab5ce303b7fb91bf4ee6ade4608924746ad2da7
size 406664120
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:931894c1de3c5a1fee42642a194168cc50c4535871ecb193eb59c848e0f36589
size 406761482