DaiShiResearch committed
Commit 1449fdf • 1 Parent(s): 3164fd5
Update README.md
README.md CHANGED
@@ -61,8 +61,8 @@ for ["TransNeXt: Robust Foveal Visual Perception for Vision Transformers"](https
 
 | Model | #Params | #FLOPs |IN-1K | IN-A |IN-R|Sketch|IN-V2| Download |Config|
 |:---:|:---:|:---:|:---:| :---:|:---:|:---:| :---:|:---:|:---:|
-| TransNeXt-Small |49.7M|32.1G| 86.0| 58.3|56.4|43.2|76.8| [model](https://huggingface.co/DaiShiResearch/transnext-small-384-1k-ft-1k/resolve/main/transnext_small_384_1k_ft_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/finetune/
-| TransNeXt-Base |89.7M|56.3G| 86.2| 61.6|57.7|44.7|77.0| [model](https://huggingface.co/DaiShiResearch/transnext-base-384-1k-ft-1k/resolve/main/transnext_base_384_1k_ft_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/finetune/
+| TransNeXt-Small |49.7M|32.1G| 86.0| 58.3|56.4|43.2|76.8| [model](https://huggingface.co/DaiShiResearch/transnext-small-384-1k-ft-1k/resolve/main/transnext_small_384_1k_ft_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/finetune/transnext_small_384_ft.py)|
+| TransNeXt-Base |89.7M|56.3G| 86.2| 61.6|57.7|44.7|77.0| [model](https://huggingface.co/DaiShiResearch/transnext-base-384-1k-ft-1k/resolve/main/transnext_base_384_1k_ft_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/finetune/transnext_base_384_ft.py)|
 
 **ImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:**
 
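
As a usage note, here is a minimal sketch of how one of the checkpoints listed in the table above could be fetched and loaded. It assumes the `huggingface_hub` and `torch` packages are installed; the TransNeXt model definition itself lives in the linked GitHub repository and is not shown here.

```python
# Minimal sketch (assumes huggingface_hub and torch are installed).
# Downloads the TransNeXt-Small 384 checkpoint from the table above and
# inspects it; building the model requires the code from the TransNeXt
# GitHub repository, which is not reproduced here.
from huggingface_hub import hf_hub_download
import torch

ckpt_path = hf_hub_download(
    repo_id="DaiShiResearch/transnext-small-384-1k-ft-1k",
    filename="transnext_small_384_1k_ft_1k.pth",
)
checkpoint = torch.load(ckpt_path, map_location="cpu")

# The exact layout of the checkpoint (plain state dict vs. a dict with a
# "model" key) is an assumption; print the top-level keys to check.
print(type(checkpoint), list(checkpoint)[:5])
```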