moojink committed
Commit 962318c
1 Parent(s): f2f2f7c

Update README.md

Files changed (1)
  1. README.md +50 -40

README.md CHANGED
@@ -1,40 +1,50 @@
- ---
- library_name: transformers
- tags:
- - robotics
- - vla
- - image-text-to-text
- - multimodal
- - pretraining
- license: mit
- language:
- - en
- pipeline_tag: image-text-to-text
- ---
-
- # OpenVLA 7B Fine-Tuned on LIBERO-Spatial
-
- This model was produced by fine-tuning the [OpenVLA 7B model](https://huggingface.co/openvla/openvla-7b) via
- LoRA (r=32) on the LIBERO-Spatial dataset from the [LIBERO simulation benchmark](https://libero-project.github.io/main.html).
- We made a few modifications to the training dataset to improve final performance (see the
- [OpenVLA paper](https://arxiv.org/abs/2406.09246) for details).
- We fine-tuned OpenVLA with batch size 128 for 50K gradient steps using 8 A100 GPUs. We applied random crop and color jitter
- image augmentations during training (therefore, center cropping should be applied at inference time).
-
- ## Usage Instructions
-
- See the [OpenVLA GitHub README](https://github.com/openvla/openvla/blob/main/README.md) for instructions on how to
- run and evaluate this model in the LIBERO simulator.
-
- ## Citation
-
- **BibTeX:**
-
- ```bibtex
- @article{kim24openvla,
-     title={OpenVLA: An Open-Source Vision-Language-Action Model},
-     author={{Moo Jin} Kim and Karl Pertsch and Siddharth Karamcheti and Ted Xiao and Ashwin Balakrishna and Suraj Nair and Rafael Rafailov and Ethan Foster and Grace Lam and Pannag Sanketi and Quan Vuong and Thomas Kollar and Benjamin Burchfiel and Russ Tedrake and Dorsa Sadigh and Sergey Levine and Percy Liang and Chelsea Finn},
-     journal={arXiv preprint arXiv:2406.09246},
-     year={2024}
- }
- ```
+ ---
+ library_name: transformers
+ tags:
+ - robotics
+ - vla
+ - image-text-to-text
+ - multimodal
+ - pretraining
+ license: mit
+ language:
+ - en
+ pipeline_tag: image-text-to-text
+ ---
+
+ # OpenVLA 7B Fine-Tuned on LIBERO-Spatial
+
+ This model was produced by fine-tuning the [OpenVLA 7B model](https://huggingface.co/openvla/openvla-7b) via
+ LoRA (r=32) on the LIBERO-Spatial dataset from the [LIBERO simulation benchmark](https://libero-project.github.io/main.html).
+ We made a few modifications to the training dataset to improve final performance (see the
+ [OpenVLA paper](https://arxiv.org/abs/2406.09246) for details).
+
+ Below are the hyperparameters we used for all LIBERO experiments:
+
+ - Hardware: 8 x A100 GPUs with 80GB memory
+ - Fine-tuned with LoRA: `use_lora == True`, `lora_rank == 32`, `lora_dropout == 0.0`
+ - Learning rate: 5e-4
+ - Batch size: 128 (8 GPUs x 16 samples each)
+ - Number of training gradient steps: 50K
+ - No quantization at train or test time
+ - No gradient accumulation (i.e. `grad_accumulation_steps == 1`)
+ - `shuffle_buffer_size == 100_000`
+ - Image augmentations: Random crop, color jitter (see training code for details)
+
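For convenience, the settings above can be read as a single fine-tuning configuration. The sketch below only collects the values from that list; the class itself and the field names not quoted above (`vla_path`, `per_gpu_batch_size`, `max_steps`, `image_aug`, `use_quantization`) are illustrative rather than taken from the OpenVLA training code.

```python
from dataclasses import dataclass


@dataclass
class LiberoSpatialFinetuneConfig:
    # Values mirror the hyperparameter list above; field names not quoted in
    # that list are illustrative, not the actual training-script parameters.
    vla_path: str = "openvla/openvla-7b"  # base checkpoint that was fine-tuned
    use_lora: bool = True                 # LoRA fine-tuning
    lora_rank: int = 32                   # LoRA rank r = 32
    lora_dropout: float = 0.0
    learning_rate: float = 5e-4
    per_gpu_batch_size: int = 16          # 8 x A100 80GB GPUs -> global batch size 128
    grad_accumulation_steps: int = 1      # no gradient accumulation
    max_steps: int = 50_000               # 50K gradient steps
    shuffle_buffer_size: int = 100_000
    image_aug: bool = True                # random crop + color jitter
    use_quantization: bool = False        # no quantization at train or test time


if __name__ == "__main__":
    print(LiberoSpatialFinetuneConfig())
```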
+ ## Usage Instructions
+
+ See the [OpenVLA GitHub README](https://github.com/openvla/openvla/blob/main/README.md) for instructions on how to
+ run and evaluate this model in the LIBERO simulator.
+
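For a quick smoke test outside the full LIBERO evaluation pipeline, the checkpoint can be loaded through the standard OpenVLA `transformers` interface (`AutoProcessor` / `AutoModelForVision2Seq` with `trust_remote_code=True` and `predict_action`). The sketch below is illustrative only: the repository id, the `unnorm_key` value, the example instruction, the image path, and the simplified center crop (applied because random-crop augmentation was used during training) are assumptions; follow the GitHub README for the exact evaluation setup.

```python
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "openvla/openvla-7b-finetuned-libero-spatial"  # assumed repo id for this checkpoint

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda:0")


def center_crop(image: Image.Image, scale: float = 0.9) -> Image.Image:
    """Simplified stand-in for the repo's center-crop preprocessing
    (recommended because random-crop augmentation was used during training)."""
    w, h = image.size
    cw, ch = int(w * scale), int(h * scale)
    left, top = (w - cw) // 2, (h - ch) // 2
    return image.crop((left, top, left + cw, top + ch)).resize((w, h))


# Placeholder third-person camera frame and example LIBERO-style instruction.
image = center_crop(Image.open("third_person_view.png").convert("RGB"))
prompt = "In: What action should the robot take to pick up the black bowl?\nOut:"

inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
# unnorm_key selects the dataset statistics used to un-normalize the predicted action;
# "libero_spatial" is an assumption -- check this checkpoint's dataset statistics for the exact key.
action = vla.predict_action(**inputs, unnorm_key="libero_spatial", do_sample=False)
print(action)  # 7-DoF action: end-effector deltas + gripper command
```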
+ ## Citation
+
+ **BibTeX:**
+
+ ```bibtex
+ @article{kim24openvla,
+     title={OpenVLA: An Open-Source Vision-Language-Action Model},
+     author={{Moo Jin} Kim and Karl Pertsch and Siddharth Karamcheti and Ted Xiao and Ashwin Balakrishna and Suraj Nair and Rafael Rafailov and Ethan Foster and Grace Lam and Pannag Sanketi and Quan Vuong and Thomas Kollar and Benjamin Burchfiel and Russ Tedrake and Dorsa Sadigh and Sergey Levine and Percy Liang and Chelsea Finn},
+     journal={arXiv preprint arXiv:2406.09246},
+     year={2024}
+ }
+ ```