Commit 9c2b1d0
1 Parent(s): 6964520

Push model using huggingface_hub.

Files changed (3)
  1. README.md +6 -36
  2. config.json +49 -0
  3. pytorch_model.bin +3 -0
README.md CHANGED
@@ -1,39 +1,9 @@
  ---
- license: mit
+ tags:
+ - pytorch_model_hub_mixin
+ - model_hub_mixin
  ---
- # RDT-1B
-
- RDT-1B is a 1B-parameter imitation learning Diffusion Transformer pre-trained on 1M+ multi-robot episodes. Given a language instruction and 3-view RGB image observations, RDT can predict the next
- 64 robot actions. RDT is inherently compatible with almost all kinds of modern mobile manipulators, from single-arm to dual-arm, joint to EEF, pos. to vel., and even with a mobile chassis.
-
- All the code and model weights are licensed under MIT license.
-
- Please refer to our [project page](), [github repository]() and [paper]() for more information.
-
- ## Model Details
-
- - **Developed by** Thu-ml team
- - **License:** MIT
- - **Pretrain dataset:** [More Information Needed]
- - **Finetune dataset:** [More Information Needed]
-
- - **Repository:** [More Information Needed]
- - **Paper :** [More Information Needed]
- - **Project Page:** https://rdt-robotics.github.io/rdt-robotics/
-
- ## Uses
-
- RDT-1B supports finetuning and pre-training on custom dataset, as well as deploying and inferencing on real-robots.
-
- Please refer to [our repository](https://github.com/GeneralEmbodiedSystem/RoboticsDiffusionTransformer/blob/main/docs/pretrain.md) for all the above guides.
-
-
- ## Citation
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-

+ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
+ - Library: https://huggingface.co/robotics-diffusion-transformer/rdt-1b
+ - Docs: [More Information Needed]
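For context, the commit message "Push model using huggingface_hub." and the new README tags match what `PyTorchModelHubMixin.push_to_hub` produces by default. The sketch below shows that workflow; the wrapper class and its constructor arguments are hypothetical stand-ins (the real RDT model class lives in the RoboticsDiffusionTransformer repository), and depending on the huggingface_hub version the weights land in `pytorch_model.bin` (as in this commit) or `model.safetensors`.

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


# Hypothetical wrapper: any nn.Module that also inherits PyTorchModelHubMixin
# gains save_pretrained / push_to_hub / from_pretrained. The real RDT class
# and its hyperparameters live in the RoboticsDiffusionTransformer repo.
class RDTWrapper(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 2048, depth: int = 28):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(hidden_size, hidden_size) for _ in range(depth)])

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x


model = RDTWrapper()

# Uploads the weights plus a config.json; recent huggingface_hub versions serialize the
# init kwargs into that config. The default commit message is exactly the one in this commit.
model.push_to_hub("robotics-diffusion-transformer/rdt-1b")

# Later, config and weights can be restored in one call.
reloaded = RDTWrapper.from_pretrained("robotics-diffusion-transformer/rdt-1b")
```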
config.json ADDED
@@ -0,0 +1,49 @@
+ {
+   "action_dim": 128,
+   "ema": {
+     "inv_gamma": 1.0,
+     "max_value": 0.9999,
+     "min_value": 0.0,
+     "power": 0.75,
+     "update_after_step": 0
+   },
+   "img_adaptor": "mlp2x_gelu",
+   "img_cond_len": 4374,
+   "img_pos_embed_config": [
+     [
+       "image",
+       [
+         2,
+         3,
+         -729
+       ]
+     ]
+   ],
+   "img_token_dim": 1152,
+   "lang_adaptor": "mlp2x_gelu",
+   "lang_pos_embed_config": [
+     [
+       "lang",
+       -1024
+     ]
+   ],
+   "lang_token_dim": 4096,
+   "max_lang_cond_len": 1024,
+   "noise_scheduler": {
+     "beta_schedule": "squaredcos_cap_v2",
+     "clip_sample": false,
+     "num_inference_timesteps": 5,
+     "num_train_timesteps": 1000,
+     "prediction_type": "sample",
+     "type": "ddpm"
+   },
+   "pred_horizon": 64,
+   "rdt": {
+     "cond_pos_embed_type": "multimodal",
+     "depth": 28,
+     "hidden_size": 2048,
+     "num_heads": 32
+   },
+   "state_adaptor": "mlp3x_gelu",
+   "state_token_dim": 128
+ }
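The `noise_scheduler` block above reads like a standard diffusers DDPM configuration, and `pred_horizon`/`action_dim` describe 64-step chunks of 128-dimensional action vectors. Below is a minimal sketch of how such a config could be consumed; it assumes the diffusers library and uses only fields present in this file, while the actual model construction is done by the RDT codebase.

```python
import json

from diffusers import DDPMScheduler

# Read the hyperparameters exactly as stored in this commit's config.json.
with open("config.json") as f:
    cfg = json.load(f)

# Sketch: map the "noise_scheduler" block onto a diffusers DDPMScheduler
# (the keys line up with DDPMScheduler's arguments; the real wiring is in the RDT code).
ns = cfg["noise_scheduler"]
scheduler = DDPMScheduler(
    num_train_timesteps=ns["num_train_timesteps"],  # 1000 diffusion steps at training time
    beta_schedule=ns["beta_schedule"],              # "squaredcos_cap_v2"
    clip_sample=ns["clip_sample"],                  # False
    prediction_type=ns["prediction_type"],          # "sample": the network predicts the clean action chunk
)
scheduler.set_timesteps(ns["num_inference_timesteps"])  # only 5 denoising steps at inference

# Shape of one prediction: 64 future steps of 128-dimensional action vectors.
print(cfg["pred_horizon"], cfg["action_dim"])
```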
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c5107b2ebbaa4edadb26cf40c4d1990a16e0f67d6137925523eb90cf4b8fdaca
+ size 2456755578
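The three lines above are a Git LFS pointer, not the weights themselves; the actual checkpoint (2,456,755,578 bytes, roughly 2.5 GB, per the pointer) lives on the Hub's LFS backend. A hedged sketch of fetching and inspecting it with huggingface_hub, using the repo id named in the README above:

```python
import torch
from huggingface_hub import hf_hub_download

# Resolve the LFS pointer to the real checkpoint file (cached locally after the first call).
path = hf_hub_download(
    repo_id="robotics-diffusion-transformer/rdt-1b",
    filename="pytorch_model.bin",
)

# PyTorchModelHubMixin saves a plain state_dict, so torch.load should return a dict of tensors.
state_dict = torch.load(path, map_location="cpu")
print(f"{len(state_dict)} tensors, "
      f"{sum(t.numel() for t in state_dict.values()) / 1e9:.2f}B parameters")
```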