# AM-RADIO: Reduce All Domains Into One

Mike Ranzinger, Greg Heinrich, Jan Kautz, Pavlo Molchanov

[NVIDIA Research](https://www.nvidia.com/en-us/research/)

\[[Paper](https://arxiv.org/abs/2312.06709)\]\[[BibTex](#citing-radio)\]

## Pretrained Models

Refer to `model_results.csv` for model versions and their metrics.

### HuggingFace Hub

In order to pull the model from HuggingFace, you need to be logged in:

```Bash
huggingface-cli login
```

Then you can pull the model from a Python script:

```Python
from transformers import AutoModel
model = AutoModel.from_pretrained("nvidia/RADIO", trust_remote_code=True)
```

Alternatively, you can specify an access token:

```Python
access_token = "<YOUR ACCESS TOKEN>"
model = AutoModel.from_pretrained("nvidia/RADIO", trust_remote_code=True, token=access_token)
```

### Usage

RADIO will return a tuple with two tensors. The `summary` is similar to the `cls_token` in ViT and is meant to represent the general concept of the entire image. It has shape $(B,C)$, with $B$ being the batch dimension and $C$ being some number of channels. The `spatial_features` represent more localized content and should be suitable for dense tasks such as semantic segmentation, or for integration into an LLM. They have shape $(B,T,D)$, with $T$ being the flattened spatial tokens and $D$ being the channels for spatial features. Note that $C \neq D$ in general.
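
As a minimal sketch of the forward pass (the input preprocessing and the unpacking order shown here are assumptions based on the description above, not a definitive recipe):

```Python
import torch

# Hypothetical preprocessed image batch of shape (B, 3, H, W); 378x378 is the
# resolution recommended for summarization (see the RADIOv1 notes below).
x = torch.rand(1, 3, 378, 378)

# The model returns a (summary, spatial_features) tuple as described above.
summary, spatial_features = model(x)
print(summary.shape)           # (B, C)
print(spatial_features.shape)  # (B, T, D)
```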

Converting to a spatial tensor format can be done using the downsampling size of the model, combined with the input tensor shape. For 'radio_v1', the patch size is 14.

```Python
from einops import rearrange

# `patch_size` is the model's downsampling factor (14 for 'radio_v1'); `x` is the input image batch.
patch_size = 14
spatial_features = rearrange(spatial_features, 'b (h w) d -> b d h w', h=x.shape[-2] // patch_size, w=x.shape[-1] // patch_size)
```

The resulting tensor will have shape $(B,D,H,W)$, as is typically seen with computer vision models.

### RADIOv1 Notes

We have trained this model to be flexible in input dimension. It supports inputs with width and height in the range $[14, 1008]$, as long as both axes are divisible by 14. We have found that the summarization tokens work best at $H=W=378$ (although the range $[192, 448]$ works well). For spatial tasks, we used $H=W=518$ to perform linear probing for semantic segmentation, and larger inputs may perform better on high-resolution tasks. Going up to $1008$, the model may need additional fine-tuning at that resolution for best results.

It is not required that $H=W$, although we have not specifically trained or tested the model in this setting.
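
For example, a minimal sketch of snapping an arbitrary input to a valid resolution (the rounding and interpolation choices here are assumptions for illustration, not part of the model's preprocessing):

```Python
import torch.nn.functional as F

def snap_to_patch_grid(x, patch_size=14):
    # Round each spatial dimension down to the nearest multiple of the patch size,
    # so that both axes are divisible by 14 as the model requires.
    h = max(patch_size, (x.shape[-2] // patch_size) * patch_size)
    w = max(patch_size, (x.shape[-1] // patch_size) * patch_size)
    return F.interpolate(x, size=(h, w), mode='bilinear', align_corners=False)
```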

## Training

_Coming Soon_

## License

RADIO code and weights are released under the [NSCLv1 License](LICENSE).

## Citing RADIO

If you find this repository useful, please consider giving a star and citation:
```
@misc{ranzinger2023amradio,
      title={AM-RADIO: Agglomerative Model -- Reduce All Domains Into One},
      author={Mike Ranzinger and Greg Heinrich and Jan Kautz and Pavlo Molchanov},
      year={2023},
      eprint={2312.06709},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```