frgfm committed
Commit af5a939
1 Parent(s): 147462a

docs: Updated README

Files changed (1)
  1. README.md +116 -116
README.md CHANGED
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---


# ReXNet-1.5x model

Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).


## Model description

The core idea of the authors is to add a customized Squeeze-and-Excitation layer in the residual blocks to prevent channel redundancy.
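
To make that idea concrete, here is a minimal, illustrative sketch of a Squeeze-and-Excitation style channel-attention block in PyTorch. It is not Holocron's exact ReXNet layer (see the repository for that), just the general mechanism the description refers to: the "squeeze" pools each channel to a single statistic, and the "excitation" learns per-channel weights that damp redundant channels.

```python
import torch
from torch import nn


class SqueezeExcitation(nn.Module):
    """Illustrative SE-style block: global pooling ("squeeze") followed by a
    small bottleneck MLP ("excitation") that rescales each channel."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Squeeze: (B, C, H, W) -> (B, C), then excite: per-channel weights in [0, 1]
        scale = self.fc(self.pool(x).flatten(1)).view(b, c, 1, 1)
        return x * scale  # channel-wise reweighting of the feature map
```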


## Installation

### Prerequisites

Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.

### Latest stable release

You can install the latest stable release of the package from [PyPI](https://pypi.org/project/pylocron/) as follows:

```shell
pip install pylocron
```

or using [conda](https://anaconda.org/frgfm/pylocron):

```shell
conda install -c frgfm pylocron
```

### Developer mode

Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:

```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
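
Whichever route you pick, you can sanity-check the install, for instance with the standard-library `importlib.metadata` (Python 3.8+); note that the distribution is published as `pylocron` while the import name is `holocron`:

```python
from importlib.metadata import version

import holocron  # importing succeeds if the install worked; the PyPI distribution itself is named "pylocron"

print(version("pylocron"))  # prints the installed release number
```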


## Usage instructions

```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub

model = model_from_hf_hub("frgfm/rexnet1_5x").eval()

# Load any RGB image (path_to_an_image is a placeholder for your own file path)
img = Image.open(path_to_an_image).convert("RGB")

# Preprocessing: resize, convert to a float tensor and normalize with the model's own config
config = model.default_cfg
transform = Compose([
    Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(config['mean'], config['std'])
])

input_tensor = transform(img).unsqueeze(0)

# Inference
with torch.inference_mode():
    output = model(input_tensor)
    probs = output.squeeze(0).softmax(dim=0)
```
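
`probs` holds one score per ImageNette class. As a quick follow-up, here is one way to read out the top prediction; the commented class-name lookup assumes the bundled config exposes a `classes` list, which you should verify before relying on it:

```python
# Top-1 prediction: class index and associated confidence
top_prob, top_idx = probs.max(dim=0)
print(f"class index: {top_idx.item()}, probability: {top_prob.item():.2%}")

# If the model config lists class names (assumption, inspect `config` first):
# print(config['classes'][top_idx.item()])
```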


## Citation

Original paper

```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
  author     = {Dongyoon Han and
                Sangdoo Yun and
                Byeongho Heo and
                Young Joon Yoo},
  title      = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
                Network},
  journal    = {CoRR},
  volume     = {abs/2007.00992},
  year       = {2020},
  url        = {https://arxiv.org/abs/2007.00992},
  eprinttype = {arXiv},
  eprint     = {2007.00992},
  timestamp  = {Mon, 06 Jul 2020 15:26:01 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

Source of this implementation

```bibtex
@software{Fernandez_Holocron_2020,
  author = {Fernandez, François-Guillaume},
  month = {5},
  title = {{Holocron}},
  url = {https://github.com/frgfm/Holocron},
  year = {2020}
}
```