Dataset schema (column name, dtype, observed lengths/values):

  bibtex_url                   null
  proceedings                  string, length 42
  bibtext                      string, lengths 240–646
  abstract                     string, lengths 653–2.03k
  title                        string, lengths 25–127
  authors                      sequence, lengths 2–22
  id                           string, 1 class
  type                         string, 2 classes
  arxiv_id                     string, lengths 0–10
  GitHub                       sequence, length 1
  paper_page                   string, 35 classes
  n_linked_authors             int64, -1 to 7
  upvotes                      int64, -1 to 45
  num_comments                 int64, -1 to 3
  n_authors                    int64, -1 to 22
  Models                       sequence, lengths 0–6
  Datasets                     sequence, lengths 0–2
  Spaces                       sequence, length 0
  old_Models                   sequence, lengths 0–6
  old_Datasets                 sequence, lengths 0–2
  old_Spaces                   sequence, length 0
  paper_page_exists_pre_conf   int64, 0 to 1
  project_page                 string, lengths 0–89
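Each record below lists its field values in the column order above, one value per line, with empty fields omitted (for example, rows without an arxiv_id or project_page simply skip those lines). The snippet below is a minimal sketch of how such records could be loaded and filtered once exported to a JSON Lines file; the filename corl2024_papers.jsonl is a placeholder, and the column names are assumed to match the schema above.

```python
# Minimal sketch, assuming the records have been exported to a local JSON Lines
# file (the name "corl2024_papers.jsonl" is a placeholder) whose keys follow
# the column list in the schema above.
import pandas as pd

df = pd.read_json("corl2024_papers.jsonl", lines=True)

# Papers that list a non-empty GitHub URL and an arXiv identifier.
# Empty-string placeholders (e.g. [ "" ] in the GitHub column) count as absent.
has_code = df["GitHub"].apply(lambda urls: any(u.strip() for u in urls))
has_arxiv = df["arxiv_id"].fillna("").astype(str).str.len() > 0
print(df.loc[has_code & has_arxiv, ["title", "type", "arxiv_id"]].head())

# Presentation-type breakdown (the "type" column has two classes: Poster, Oral).
print(df["type"].value_counts())
```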
null
https://openreview.net/forum?id=adf3pO9baG
@inproceedings{ decastro2024dreaming, title={Dreaming to Assist: Learning to Align with Human Objectives for Shared Control in High-Speed Racing}, author={Jonathan DeCastro and Andrew Silva and Deepak Gopinath and Emily Sumner and Thomas Matrai Balch and Laporsha Dees and Guy Rosman}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=adf3pO9baG} }
Tight coordination is required for effective human-robot teams in domains involving fast dynamics and tactical decisions, such as multi-car racing. In such settings, robot teammates must react to cues of a human teammate's tactical objective to assist in a way that is consistent with the objective (e.g., navigating left or right around an obstacle). To address this challenge, we present _Dream2Assist_, a framework that combines a rich world model able to infer human objectives and value functions, and an assistive agent that provides appropriate expert assistance to a given human teammate. Our approach builds on a recurrent state space model to explicitly infer human intents, enabling the assistive agent to select actions that align with the human and enabling a fluid teaming interaction. We demonstrate our approach in a high-speed racing domain with a population of synthetic human drivers pursuing mutually exclusive objectives, such as "stay-behind" and "overtake". We show that the combined human-robot team, when blending its actions with those of the human, outperforms synthetic humans alone and several baseline assistance strategies, and that intent-conditioning enables adherence to human preferences during task execution, leading to improved performance while satisfying the human's objective.
Dreaming to Assist: Learning to Align with Human Objectives for Shared Control in High-Speed Racing
[ "Jonathan DeCastro", "Andrew Silva", "Deepak Gopinath", "Emily Sumner", "Thomas Matrai Balch", "Laporsha Dees", "Guy Rosman" ]
Conference
Poster
2410.10062
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://dream2assist.github.io/
null
https://openreview.net/forum?id=aaY5fVFMVf
@inproceedings{ doula2024conformal, title={Conformal Prediction for Semantically-Aware Autonomous Perception in Urban Environments}, author={Achref Doula and Tobias G{\"u}delh{\"o}fer and Max M{\"u}hlh{\"a}user and Alejandro Sanchez Guinea}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=aaY5fVFMVf} }
We introduce Knowledge-Refined Prediction Sets (KRPS), a novel approach that performs semantically-aware uncertainty quantification for multitask-based autonomous perception in urban environments. KRPS extends conformal prediction (CP) to ensure 2 properties not typically addressed by CP frameworks: semantic label consistency and true label coverage, across multiple perception tasks. We elucidate the capability of KRPS through high-level classification tasks crucial for semantically-aware autonomous perception in urban environments, including agent classification, agent location classification, and agent action classification. In a theoretical analysis, we introduce the concept of semantic label consistency among tasks and prove the semantic consistency and marginal coverage properties of the produced sets by KRPS. The results of our evaluation on the ROAD dataset and the Waymo/ROAD++ dataset show that KRPS outperforms state-of-the-art CP methods in reducing uncertainty by up to 80\% and increasing the semantic consistency by up to 30\%, while maintaining the coverage guarantees.
Conformal Prediction for Semantically-Aware Autonomous Perception in Urban Environments
[ "Achref Doula", "Tobias Güdelhöfer", "Max Mühlhäuser", "Alejandro Sanchez Guinea" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZdgaF8fOc0
@inproceedings{ kicki2024bridging, title={Bridging the gap between Learning-to-plan, Motion Primitives and Safe Reinforcement Learning}, author={Piotr Kicki and Davide Tateo and Puze Liu and Jonas G{\"u}nster and Jan Peters and Krzysztof Walas}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=ZdgaF8fOc0} }
Trajectory planning under kinodynamic constraints is fundamental for advanced robotics applications that require dexterous, reactive, and rapid skills in complex environments. These constraints, which may represent task, safety, or actuator limitations, are essential for ensuring the proper functioning of robotic platforms and preventing unexpected behaviors. Recent advances in kinodynamic planning demonstrate that learning-to-plan techniques can generate complex and reactive motions under intricate constraints. However, these techniques necessitate the analytical modeling of both the robot and the entire task, a limiting assumption when systems are extremely complex or when constructing accurate task models is prohibitive. This paper addresses this limitation by combining learning-to-plan methods with reinforcement learning, resulting in a novel integration of black-box learning of motion primitives and optimization. We evaluate our approach against state-of-the-art safe reinforcement learning methods, showing that our technique, particularly when exploiting task structure, outperforms baseline methods in challenging scenarios such as planning to hit in robot air hockey. This work demonstrates the potential of our integrated approach to enhance the performance and safety of robots operating under complex kinodynamic constraints.
Bridging the gap between Learning-to-plan, Motion Primitives and Safe Reinforcement Learning
[ "Piotr Kicki", "Davide Tateo", "Puze Liu", "Jonas Günster", "Jan Peters", "Krzysztof Walas" ]
Conference
Poster
2408.14063
[ "https://github.com/pkicki/spline_rl/" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://pkicki.github.io/CNP3O/
null
https://openreview.net/forum?id=ZMnD6QZAE6
@inproceedings{ kim2024openvla, title={Open{VLA}: An Open-Source Vision-Language-Action Model}, author={Moo Jin Kim and Karl Pertsch and Siddharth Karamcheti and Ted Xiao and Ashwin Balakrishna and Suraj Nair and Rafael Rafailov and Ethan P Foster and Pannag R Sanketi and Quan Vuong and Thomas Kollar and Benjamin Burchfiel and Russ Tedrake and Dorsa Sadigh and Sergey Levine and Percy Liang and Chelsea Finn}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=ZMnD6QZAE6} }
Large policies pretrained on a combination of Internet-scale vision-language data and diverse robot demonstrations have the potential to change how we teach robots new skills: rather than training new behaviors from scratch, we can fine-tune such vision-language-action (VLA) models to obtain robust, generalizable policies for visuomotor control. Yet, widespread adoption of VLAs for robotics has been challenging as 1) existing VLAs are largely closed and inaccessible to the public, and 2) prior work fails to explore methods for efficiently fine-tuning VLAs for new tasks, a key component for adoption. Addressing these challenges, we introduce OpenVLA, a 7B-parameter open-source VLA trained on a diverse collection of 970k real-world robot demonstrations. OpenVLA builds on a Llama 2 language model combined with a visual encoder that fuses pretrained features from DINOv2 and SigLIP. As a product of the added data diversity and new model components, OpenVLA demonstrates strong results for generalist manipulation, outperforming closed models such as RT-2-X (55B) by 16.5\% in absolute task success rate across 29 tasks and multiple robot embodiments, with 7x fewer parameters. We further show that we can effectively fine-tune OpenVLA for new settings, with especially strong generalization results in multi-task environments involving multiple objects and strong language grounding abilities, where we outperform expressive from-scratch imitation learning methods such as Diffusion Policy by 20.4\%. We also explore compute efficiency; as a separate contribution, we show that OpenVLA can be fine-tuned on consumer GPUs via modern low-rank adaptation methods and served efficiently via quantization without a hit to downstream success rate. Finally, we release model checkpoints, fine-tuning notebooks, and our PyTorch codebase with built-in support for training VLAs at scale on Open X-Embodiment datasets.
OpenVLA: An Open-Source Vision-Language-Action Model
[ "Moo Jin Kim", "Karl Pertsch", "Siddharth Karamcheti", "Ted Xiao", "Ashwin Balakrishna", "Suraj Nair", "Rafael Rafailov", "Ethan P Foster", "Pannag R Sanketi", "Quan Vuong", "Thomas Kollar", "Benjamin Burchfiel", "Russ Tedrake", "Dorsa Sadigh", "Sergey Levine", "Percy Liang", "Chelsea Finn" ]
Conference
Poster
2406.09246
[ "https://github.com/openvla/openvla" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://openvla.github.io/
null
https://openreview.net/forum?id=Yw5QGNBkEN
@inproceedings{ zhang2024scaling, title={Scaling Manipulation Learning with Visual Kinematic Chain Prediction}, author={Xinyu Zhang and Yuhan Liu and Haonan Chang and Abdeslam Boularias}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Yw5QGNBkEN} }
Learning general-purpose models from diverse datasets has achieved great success in machine learning. In robotics, however, existing methods in multi-task learning are typically constrained to a single robot and workspace, while recent work such as RT-X requires a non-trivial action normalization procedure to manually bridge the gap between different action spaces in diverse environments. In this paper, we propose the visual kinematics chain as a precise and universal representation of quasi-static actions for robot learning over diverse environments, which requires no manual adjustment since the visual kinematic chains can be automatically obtained from the robot’s model and camera parameters. We propose the Visual Kinematics Transformer (VKT), a convolution-free architecture that supports an arbitrary number of camera viewpoints, and that is trained with a single objective of forecasting kinematic structures through optimal point-set matching. We demonstrate the superior performance of VKT over BC transformers as a general agent on Calvin, RLBench, ALOHA, Open-X, and real robot manipulation tasks. Video demonstrations and source code can be found at https://mlzxy.github.io/visual-kinetic-chain.
Scaling Manipulation Learning with Visual Kinematic Chain Prediction
[ "Xinyu Zhang", "Yuhan Liu", "Haonan Chang", "Abdeslam Boularias" ]
Conference
Poster
2406.07837
[ "https://github.com/mlzxy/visual-kinetic-chain" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://mlzxy.github.io/visual-kinetic-chain/
null
https://openreview.net/forum?id=Yce2jeILGt
@inproceedings{ cheng2024opentelevision, title={Open-TeleVision: Teleoperation with Immersive Active Visual Feedback}, author={Xuxin Cheng and Jialong Li and Shiqi Yang and Ge Yang and Xiaolong Wang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Yce2jeILGt} }
Teleoperation serves as a powerful method for collecting on-robot data essential for robot learning from demonstrations. The intuitiveness and ease of use of the teleoperation system are crucial for ensuring high-quality, diverse, and scalable data. To achieve this, we propose an immersive teleoperation system $\textbf{Open-TeleVision}$ that allows operators to actively perceive the robot's surroundings in a stereoscopic manner. Additionally, the system mirrors the operator's arm and hand movements on the robot, creating an immersive experience as if the operator's mind is transmitted to a robot embodiment. We validate the effectiveness of our system by collecting data and training imitation learning policies on four long-horizon, precise tasks (can sorting, can insertion, folding, and unloading) for 2 different humanoid robots and deploying them in the real world. The entire system will be open-sourced.
Open-TeleVision: Teleoperation with Immersive Active Visual Feedback
[ "Xuxin Cheng", "Jialong Li", "Shiqi Yang", "Ge Yang", "Xiaolong Wang" ]
Conference
Poster
2407.01512
[ "https://github.com/OpenTeleVision/TeleVision" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://robot-tv.github.io
null
https://openreview.net/forum?id=YOFrRTDC6d
@inproceedings{ garrett2024skillgen, title={SkillGen: Automated Demonstration Generation for Efficient Skill Learning and Deployment}, author={Caelan Reed Garrett and Ajay Mandlekar and Bowen Wen and Dieter Fox}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=YOFrRTDC6d} }
Imitation learning from human demonstrations is an effective paradigm for robot manipulation, but acquiring large datasets is costly and resource-intensive, especially for long-horizon tasks. To address this issue, we propose SkillGen, an automated system for generating demonstration datasets from a few human demos. SkillGen segments human demos into manipulation skills, adapts these skills to new contexts, and stitches them together through free-space transit and transfer motion. We also propose a Hybrid Skill Policy (HSP) framework for learning skill initiation, control, and termination components from SkillGen datasets, enabling skills to be sequenced using motion planning at test-time. We demonstrate that SkillGen greatly improves data generation and policy learning performance over a state-of-the-art data generation framework, resulting in the capability to produce data for large scene variations, including clutter, and agents that are on average 24% more successful. We demonstrate the efficacy of SkillGen by generating over 24K demonstrations across 18 task variants in simulation from just 60 human demonstrations, and training proficient, often near-perfect, HSP agents. Finally, we apply SkillGen to 3 real-world manipulation tasks and demonstrate zero-shot sim-to-real transfer on a long-horizon assembly task. Videos and more at https://skillgen.github.io.
SkillMimicGen: Automated Demonstration Generation for Efficient Skill Learning and Deployment
[ "Caelan Reed Garrett", "Ajay Mandlekar", "Bowen Wen", "Dieter Fox" ]
Conference
Poster
2410.18907
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://skillgen.github.io/
null
https://openreview.net/forum?id=XrxLGzF0lJ
@inproceedings{ mirchandani2024so, title={So You Think You Can Scale Up Autonomous Robot Data Collection?}, author={Suvir Mirchandani and Suneel Belkhale and Joey Hejna and Evelyn Choi and Md Sazzad Islam and Dorsa Sadigh}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=XrxLGzF0lJ} }
A long-standing goal in robot learning is to develop methods for robots to acquire new skills autonomously. While reinforcement learning (RL) comes with the promise of enabling autonomous data collection, it remains challenging to scale in the real world, partly due to the significant effort required for environment design and instrumentation, including the need for designing reset functions or accurate success detectors. On the other hand, imitation learning (IL) methods require little to no environment design effort, but instead require significant human supervision in the form of collected demonstrations. To address these shortcomings, recent works in autonomous IL start with an initial seed dataset of human demonstrations that an autonomous policy can bootstrap from. While autonomous IL approaches come with the promise of addressing the challenges of autonomous RL—environment design challenges—as well as the challenges of pure IL strategies—extensive human supervision—in this work, we posit that such techniques do not deliver on this promise and are still unable to scale up autonomous data collection in the real world. Through a series of targeted real-world experiments, we demonstrate that these approaches, when scaled up to realistic settings, face many of the same scaling challenges as prior attempts in RL in terms of environment design. Further, we perform a rigorous study of various autonomous IL methods across different data scales and 7 simulation and real-world tasks, and demonstrate that while autonomous data collection can modestly improve performance (on the order of 10%), simply collecting more human data often provides significantly more improvement. Our work suggests a negative result: that scaling up autonomous data collection for learning robot policies for real-world tasks is more challenging and impractical than what is suggested in prior work. We hope these insights about the core challenges of scaling up data collection help inform future efforts in autonomous learning.
So You Think You Can Scale Up Autonomous Robot Data Collection?
[ "Suvir Mirchandani", "Suneel Belkhale", "Joey Hejna", "Evelyn Choi", "Md Sazzad Islam", "Dorsa Sadigh" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://autonomous-data-collection.github.io/
null
https://openreview.net/forum?id=XopATjibyz
@inproceedings{ song2024learning, title={Learning Quadruped Locomotion Using Differentiable Simulation}, author={Yunlong Song and Sang bae Kim and Davide Scaramuzza}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=XopATjibyz} }
This work explores the potential of using differentiable simulation for learning robot control. Differentiable simulation promises fast convergence and stable training by computing low-variance first-order gradients using the robot model. Still, so far, its usage for legged robots is limited to simulation. The main challenge lies in the complex optimization landscape of robotic tasks due to discontinuous dynamics. This work proposes a new differentiable simulation framework to overcome these challenges. The key idea involves decoupling the complex whole-body simulation, which may exhibit discontinuities due to contact, into two separate continuous domains. Subsequently, we align the robot state resulting from the simplified model with a more precise, non-differentiable simulator to maintain sufficient simulation accuracy. Our framework enables learning quadruped walking in simulation in minutes without parallelization. When augmented with GPU parallelization, our approach allows the quadruped robot to master diverse locomotion skills on challenging terrains in minutes. We demonstrate that differentiable simulation outperforms a reinforcement learning algorithm (PPO) by achieving significantly better sample efficiency while maintaining its effectiveness in handling large-scale environments. Our policy achieves robust locomotion performance in the real world zero-shot.
Learning Quadruped Locomotion Using Differentiable Simulation
[ "Yunlong Song", "Sang bae Kim", "Davide Scaramuzza" ]
Conference
Oral
2403.14864
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=X3OfR3axX4
@inproceedings{ gao2024multitransmotion, title={Multi-Transmotion: Pre-trained Model for Human Motion Prediction}, author={Yang Gao and Po-Chien Luan and Alexandre Alahi}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=X3OfR3axX4} }
The ability of intelligent systems to predict human behaviors is essential, particularly in fields such as autonomous vehicle navigation and social robotics. However, the intricacies of human motion have precluded the development of a standardized dataset and model for human motion prediction, thereby hindering the establishment of pre-trained models. In this paper, we address these limitations by integrating multiple datasets, encompassing both trajectory and 3D pose keypoints, to further propose a pre-trained model for human motion prediction. We merge seven distinct datasets across varying modalities and standardize their formats. To facilitate multimodal pre-training, we introduce Multi-Transmotion, an innovative transformer-based model capable of cross-modality pre-training. Additionally, we devise a novel masking strategy to learn rich representations. Our methodology demonstrates competitive performance across various datasets on several downstream tasks, including trajectory prediction in the NBA and JTA datasets, as well as pose prediction in the AMASS and 3DPW datasets. The code will be made available upon publication.
Multi-Transmotion: Pre-trained Model for Human Motion Prediction
[ "Yang Gao", "Po-Chien Luan", "Alexandre Alahi" ]
Conference
Poster
[ "https://github.com/vita-epfl/multi-transmotion" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=WnSl42M9Z4
@inproceedings{ fu2024humanplus, title={HumanPlus: Humanoid Shadowing and Imitation from Humans}, author={Zipeng Fu and Qingqing Zhao and Qi Wu and Gordon Wetzstein and Chelsea Finn}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=WnSl42M9Z4} }
One of the key arguments for building robots that have similar form factors to human beings is that we can leverage the massive human data for training. Yet, doing so has remained challenging in practice due to the complexities in humanoid perception and control, lingering physical gaps between humanoids and humans in morphologies and actuation, and lack of a data pipeline for humanoids to learn autonomous skills from egocentric vision. In this paper, we introduce a full-stack system for humanoids to learn motion and autonomous skills from human data. We first train a low-level policy in simulation via reinforcement learning using existing 40-hour human motion datasets. This policy transfers to the real world and allows humanoid robots to follow human body and hand motion in real time using only an RGB camera, i.e. shadowing. Through shadowing, human operators can teleoperate humanoids to collect whole-body data for learning different tasks in the real world. Using the data collected, we then perform supervised behavior cloning to train skill policies using egocentric vision, allowing humanoids to complete different tasks autonomously by imitating human skills. We demonstrate the system on our customized 33-DoF 180cm humanoid, autonomously completing tasks such as wearing a shoe to stand up and walk, folding a sweatshirt, rearranging objects, typing, and greeting another robot with 60-100% success rates using up to 40 demonstrations.
HumanPlus: Humanoid Shadowing and Imitation from Humans
[ "Zipeng Fu", "Qingqing Zhao", "Qi Wu", "Gordon Wetzstein", "Chelsea Finn" ]
Conference
Poster
2406.10454
[ "https://github.com/MarkFzp/humanplus" ]
https://huggingface.co/papers/2406.10454
0
2
1
5
[ "HoyerChou/HumanPlus" ]
[]
[]
[ "HoyerChou/HumanPlus" ]
[]
[]
1
https://humanoid-ai.github.io/
null
https://openreview.net/forum?id=WmWbswjTsi
@inproceedings{ longhini2024clothsplatting, title={Cloth-Splatting: 3D State Estimation from {RGB} Supervision for Deformable Objects}, author={Alberta Longhini and Marcel B{\"u}sching and Bardienus Pieter Duisterhof and Jens Lundell and Jeffrey Ichnowski and M{\r{a}}rten Bj{\"o}rkman and Danica Kragic}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=WmWbswjTsi} }
We introduce Cloth-Splatting, a method for estimating 3D states of cloth from RGB images through a prediction-update framework. Cloth-Splatting leverages an action-conditioned dynamics model for predicting future states and uses 3D Gaussian Splatting to update the predicted states. Our key insight is that coupling a 3D mesh-based representation with Gaussian Splatting allows us to define a differentiable map between the cloth's state space and the image space. This enables the use of gradient-based optimization techniques to refine inaccurate state estimates using only RGB supervision. Our experiments demonstrate that Cloth-Splatting not only improves state estimation accuracy over current baselines but also reduces convergence time by $\sim 85$ \%.
Cloth-Splatting: 3D Cloth State Estimation from RGB Supervision
[ "Alberta Longhini", "Marcel Büsching", "Bardienus Pieter Duisterhof", "Jens Lundell", "Jeffrey Ichnowski", "Mårten Björkman", "Danica Kragic" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://kth-rpl.github.io/cloth-splatting/
null
https://openreview.net/forum?id=WjDR48cL3O
@inproceedings{ seo2024continuous, title={Continuous Control with Coarse-to-fine Reinforcement Learning}, author={Younggyo Seo and Jafar Uru{\c{c}} and Stephen James}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=WjDR48cL3O} }
Despite recent advances in improving the sample-efficiency of reinforcement learning (RL) algorithms, designing an RL algorithm that can be practically deployed in real-world environments remains a challenge. In this paper, we present Coarse-to-fine Reinforcement Learning (CRL), a framework that trains RL agents to zoom into a continuous action space in a coarse-to-fine manner, enabling the use of stable, sample-efficient value-based RL algorithms for fine-grained continuous control tasks. Our key idea is to train agents that output actions by iterating the procedure of (i) discretizing the continuous action space into multiple intervals and (ii) selecting the interval with the highest Q-value to further discretize at the next level. We then introduce a concrete, value-based algorithm within the CRL framework called Coarse-to-fine Q-Network (CQN). Our experiments demonstrate that CQN significantly outperforms RL and behavior cloning baselines on 20 sparsely-rewarded RLBench manipulation tasks with a modest number of environment interactions and expert demonstrations. We also show that CQN robustly learns to solve real-world manipulation tasks within a few minutes of online training.
Continuous Control with Coarse-to-fine Reinforcement Learning
[ "Younggyo Seo", "Jafar Uruç", "Stephen James" ]
Conference
Poster
2407.07787
[ "https://github.com/younggyoseo/CQN" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://younggyo.me/cqn/
null
https://openreview.net/forum?id=WLOTZHmmO6
@inproceedings{ liu2024let, title={Let Occ Flow: Self-Supervised 3D Occupancy Flow Prediction}, author={Yili Liu and Linzhan Mou and Xuan Yu and Chenrui Han and Sitong Mao and Rong Xiong and Yue Wang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=WLOTZHmmO6} }
Accurate perception of the dynamic environment is a fundamental task for autonomous driving and robot systems. This paper introduces Let Occ Flow, the first self-supervised work for joint 3D occupancy and occupancy flow prediction using only camera inputs, eliminating the need for 3D annotations. Utilizing TPV for unified scene representation and deformable attention layers for feature aggregation, our approach incorporates a novel attention-based temporal fusion module to capture dynamic object dependencies, followed by a 3D refine module for fine-grained volumetric representation. Besides, our method extends differentiable rendering to 3D volumetric flow fields, leveraging zero-shot 2D segmentation and optical flow cues for dynamic decomposition and motion optimization. Extensive experiments on nuScenes and KITTI datasets demonstrate the competitive performance of our approach over prior state-of-the-art methods.
Let Occ Flow: Self-Supervised 3D Occupancy Flow Prediction
[ "Yili Liu", "Linzhan Mou", "Xuan Yu", "Chenrui Han", "Sitong Mao", "Rong Xiong", "Yue Wang" ]
Conference
Poster
2407.07587
[ "https://github.com/eliliu2233/occ-flow" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://eliliu2233.github.io/letoccflow/
null
https://openreview.net/forum?id=VoC3wF6fbh
@inproceedings{ zhang2024learning, title={Learning to Open and Traverse Doors with a Legged Manipulator}, author={Mike Zhang and Yuntao Ma and Takahiro Miki and Marco Hutter}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=VoC3wF6fbh} }
Using doors is a longstanding challenge in robotics and is of significant practical interest in giving robots greater access to human-centric spaces. The task is challenging due to the need for online adaptation to varying door properties and precise control in manipulating the door panel and navigating through the confined doorway. To address this, we propose a learning-based controller for a legged manipulator to open and traverse through doors. The controller is trained using a teacher-student approach in simulation to learn robust task behaviors as well as estimate crucial door properties during the interaction. Unlike previous works, our approach is a single control policy that can handle both push and pull doors through learned behaviour which infers the opening direction during deployment without prior knowledge. The policy was deployed on the ANYmal legged robot with an arm and achieved a success rate of 95.0% in repeated trials conducted in an experimental setting. Additional experiments validate the policy's effectiveness and robustness to various doors and disturbances. A video overview of the method and experiments is provided in the supplementary material.
Learning to Open and Traverse Doors with a Legged Manipulator
[ "Mike Zhang", "Yuntao Ma", "Takahiro Miki", "Marco Hutter" ]
Conference
Poster
2409.04882
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=VdyIhsh1jU
@inproceedings{ wasserman2024legolas, title={Legolas: Deep Leg-Inertial Odometry}, author={Justin Wasserman and Ananye Agarwal and Rishabh Jangir and Girish Chowdhary and Deepak Pathak and Abhinav Gupta}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=VdyIhsh1jU} }
Estimating odometry, where an accumulating position and rotation are tracked, has critical applications in many areas of robotics as a form of state estimation such as in SLAM, navigation, and controls. During deployment of a legged robot, a vision system's tracking can easily get lost. Instead, using only the onboard leg and inertial sensor for odometry is a promising alternative. Previous methods in estimating leg-inertial odometry require analytical modeling or collecting high-quality real-world trajectories to train a model. Analytical modeling is specific to each robot, requires manual fine-tuning, and doesn't always capture real-world phenomena such as slippage. Previous work on learning legged odometry still relies on collecting real-world data, which has been shown not to perform well out of distribution. In this work, we show that it is possible to estimate the odometry of a legged robot without any analytical modeling or real-world data collection. In this paper, we present Legolas, the first method that accurately estimates odometry in a purely data-driven fashion for quadruped robots. We deploy our method on two real-world quadruped robots in both indoor and outdoor environments. In the indoor scenes, our proposed method accomplishes a relative pose error that is 73% less than an analytical filtering-based approach and 87.5% less than a real-world behavioral cloning approach. More results are available at: learned-odom.github.io
Legolas: Deep Leg-Inertial Odometry
[ "Justin Wasserman", "Ananye Agarwal", "Rishabh Jangir", "Girish Chowdhary", "Deepak Pathak", "Abhinav Gupta" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://learned-odom.github.io/
null
https://openreview.net/forum?id=VUhlMfEekm
@inproceedings{ song2024implicit, title={Implicit Grasp Diffusion: Bridging the Gap between Dense Prediction and Sampling-based Grasping}, author={Pinhao Song and Pengteng Li and Renaud Detry}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=VUhlMfEekm} }
There are two dominant approaches in modern robot grasp planning: dense prediction and sampling-based methods. Dense prediction calculates viable grasps across the robot’s view but is limited to predicting one grasp per voxel. Sampling-based methods, on the other hand, encode multi-modal grasp distributions, allowing for different grasp approaches at a point. However, these methods rely on a global latent representation, which struggles to represent the entire field of view, resulting in coarse grasps. To address this, we introduce \emph{Implicit Grasp Diffusion} (IGD), which combines the strengths of both methods by using implicit neural representations to extract detailed local features and sampling grasps from diffusion models conditioned on these features. Evaluations on clutter removal tasks in both simulated and real-world environments show that IGD delivers high accuracy, noise resilience, and multi-modal grasp pose capabilities.
Implicit Grasp Diffusion: Bridging the Gap between Dense Prediction and Sampling-based Grasping
[ "Pinhao Song", "Pengteng Li", "Renaud Detry" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=VMqg1CeUQP
@inproceedings{ lan2024dexcatch, title={DexCatch: Learning to Catch Arbitrary Objects with Dexterous Hands}, author={Fengbo Lan and Shengjie Wang and Yunzhe Zhang and Haotian Xu and Oluwatosin OluwaPelumi Oseni and Ziye Zhang and Yang Gao and Tao Zhang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=VMqg1CeUQP} }
Achieving human-like dexterous manipulation remains a crucial area of research in robotics. Current research focuses on improving the success rate of pick-and-place tasks. Compared with pick-and-place, throwing-catching behavior has the potential to increase the speed of transporting objects to their destination. However, dynamic dexterous manipulation poses a major challenge for stable control due to a large number of dynamic contacts. In this paper, we propose a Learning-based framework for Throwing-Catching tasks using dexterous hands (LTC). Our method, LTC, achieves a 73% success rate across 45 scenarios (diverse hand poses and objects), and the learned policies demonstrate strong zero-shot transfer performance on unseen objects. Additionally, in tasks where the object in hand faces sideways, an extremely unstable scenario due to the lack of support from the palm, all baselines fail, while our method still achieves a success rate of over 60%.
DexCatch: Learning to Catch Arbitrary Objects with Dexterous Hands
[ "Fengbo Lan", "Shengjie Wang", "Yunzhe Zhang", "Haotian Xu", "Oluwatosin OluwaPelumi Oseni", "Ziye Zhang", "Yang Gao", "Tao Zhang" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=VFs1vbQnYN
@inproceedings{ wang2024simtoreal, title={Sim-to-Real Transfer via 3D Feature Fields for Vision-and-Language Navigation}, author={Zihan Wang and Xiangyang Li and Jiahao Yang and Yeqi Liu and Shuqiang Jiang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=VFs1vbQnYN} }
Vision-and-language navigation (VLN) enables the agent to navigate to a remote location in 3D environments following the natural language instruction. In this field, the agent is usually trained and evaluated in the navigation simulators, lacking effective approaches for sim-to-real transfer. The VLN agents with only a monocular camera exhibit extremely limited performance, while the mainstream VLN models trained with panoramic observation perform better but are difficult to deploy on most monocular robots. For this case, we propose a sim-to-real transfer approach to endow the monocular robots with panoramic traversability perception and panoramic semantic understanding, thus smoothly transferring the high-performance panoramic VLN models to the common monocular robots. In this work, the semantic traversable map is proposed to predict agent-centric navigable waypoints, and the novel view representations of these navigable waypoints are predicted through the 3D feature fields. These methods broaden the limited field of view of the monocular robots and significantly improve navigation performance in the real world. Our VLN system outperforms previous SOTA monocular VLN methods in R2R-CE and RxR-CE benchmarks within the simulation environments and is also validated in real-world environments, providing a practical and high-performance solution for real-world VLN.
Sim-to-Real Transfer via 3D Feature Fields for Vision-and-Language Navigation
[ "Zihan Wang", "Xiangyang Li", "Jiahao Yang", "Yeqi Liu", "Shuqiang Jiang" ]
Conference
Poster
2406.09798
[ "https://github.com/MrZihan/Sim2Real-VLN-3DFF" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=V5x0m6XDSV
@inproceedings{ chen2024differentiable, title={Differentiable Discrete Elastic Rods for Real-Time Modeling of Deformable Linear Objects}, author={Yizhou Chen and Yiting Zhang and Zachary Brei and Tiancheng Zhang and Yuzhen Chen and Julie Wu and Ram Vasudevan}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=V5x0m6XDSV} }
This paper addresses the task of modeling Deformable Linear Objects (DLOs), such as ropes and cables, during dynamic motion over long time horizons. This task presents significant challenges due to the complex dynamics of DLOs. To address these challenges, this paper proposes differentiable Discrete Elastic Rods For deformable linear Objects with Real-time Modeling (DEFORM), a novel framework that combines a differentiable physics-based model with a learning framework to model DLOs accurately and in real-time. The performance of DEFORM is evaluated in an experimental setup involving two industrial robots and a variety of sensors. A comprehensive series of experiments demonstrate the efficacy of DEFORM in terms of accuracy, computational speed, and generalizability when compared to state-of-the-art alternatives. To further demonstrate the utility of DEFORM, this paper integrates it into a perception pipeline and illustrates its superior performance when compared to the state-of-the-art methods while tracking a DLO even in the presence of occlusions. Finally, this paper illustrates the superior performance of DEFORM when compared to state-of-the-art methods when it is applied to perform autonomous planning and control of DLOs.
Differentiable Discrete Elastic Rods for Real-Time Modeling of Deformable Linear Objects
[ "Yizhou Chen", "Yiting Zhang", "Zachary Brei", "Tiancheng Zhang", "Yuzhen Chen", "Julie Wu", "Ram Vasudevan" ]
Conference
Poster
2406.05931
[ "https://github.com/roahmlab/DEFORM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://roahmlab.github.io/DEFORM/
null
https://openreview.net/forum?id=Uaaj4MaVIQ
@inproceedings{ wang2024dfields, title={D\${\textasciicircum}3\$Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Rearrangement}, author={Yixuan Wang and Mingtong Zhang and Zhuoran Li and Tarik Kelestemur and Katherine Rose Driggs-Campbell and Jiajun Wu and Li Fei-Fei and Yunzhu Li}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Uaaj4MaVIQ} }
Scene representation is a crucial design choice in robotic manipulation systems. An ideal representation is expected to be 3D, dynamic, and semantic to meet the demands of diverse manipulation tasks. However, previous works often lack all three properties simultaneously. In this work, we introduce D$^3$Fields---**dynamic 3D descriptor fields**. These fields are **implicit 3D representations** that take in 3D points and output semantic features and instance masks. They can also capture the dynamics of the underlying 3D environments. Specifically, we project arbitrary 3D points in the workspace onto multi-view 2D visual observations and interpolate features derived from visual foundational models. The resulting fused descriptor fields allow for flexible goal specifications using 2D images with varied contexts, styles, and instances. To evaluate the effectiveness of these descriptor fields, we apply our representation to rearrangement tasks in a zero-shot manner. Through extensive evaluation in real worlds and simulations, we demonstrate that D$^3$Fields are effective for **zero-shot generalizable** rearrangement tasks. We also compare D$^3$Fields with state-of-the-art implicit 3D representations and show significant improvements in effectiveness and efficiency. Project page: https://robopil.github.io/d3fields/
D^3Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Rearrangement
[ "Yixuan Wang", "Mingtong Zhang", "Zhuoran Li", "Tarik Kelestemur", "Katherine Rose Driggs-Campbell", "Jiajun Wu", "Li Fei-Fei", "Yunzhu Li" ]
Conference
Oral
[ "https://github.com/WangYixuan12/d3fields" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://robopil.github.io/d3fields/
null
https://openreview.net/forum?id=UUZ4Yw3lt0
@inproceedings{ jiang2024harmon, title={Harmon: Whole-Body Motion Generation of Humanoid Robots from Language Descriptions}, author={Zhenyu Jiang and Yuqi Xie and Jinhan Li and Ye Yuan and Yifeng Zhu and Yuke Zhu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=UUZ4Yw3lt0} }
Humanoid robots, with their human-like embodiment, have the potential to integrate seamlessly into human environments. Critical to their coexistence and cooperation with humans is the ability to understand natural language communications and exhibit human-like behaviors. This work focuses on generating diverse whole-body motions for humanoid robots from language descriptions. We leverage human motion priors from extensive human motion datasets to initialize humanoid motions and employ the commonsense reasoning capabilities of Vision Language Models (VLMs) to edit and refine these motions. Our approach demonstrates the capability to produce natural, expressive, and text-aligned humanoid motions, validated through both simulated and real-world experiments. More videos can be found on our website https://ut-austin-rpl.github.io/Harmon/.
Harmon: Whole-Body Motion Generation of Humanoid Robots from Language Descriptions
[ "Zhenyu Jiang", "Yuqi Xie", "Jinhan Li", "Ye Yuan", "Yifeng Zhu", "Yuke Zhu" ]
Conference
Poster
2410.12773
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://ut-austin-rpl.github.io/Harmon/
null
https://openreview.net/forum?id=URj5TQTAXM
@inproceedings{ li2024okami, title={{OKAMI}: Teaching Humanoid Robots Manipulation Skills through Single Video Imitation}, author={Jinhan Li and Yifeng Zhu and Yuqi Xie and Zhenyu Jiang and Mingyo Seo and Georgios Pavlakos and Yuke Zhu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=URj5TQTAXM} }
We study the problem of teaching humanoid robots manipulation skills by imitating from single video demonstrations. We introduce OKAMI, a method that generates a manipulation plan from a single RGB-D video and derives a policy for execution. At the heart of our approach is object-aware retargeting, which enables the humanoid robot to mimic the human motions in an RGB-D video while adjusting to different object locations during deployment. OKAMI uses open-world vision models to identify task-relevant objects and retarget the body motions and hand poses separately. Our experiments show that OKAMI achieves strong generalizations across varying visual and spatial conditions, outperforming the state-of-the-art baseline on open-world imitation from observation. Furthermore, OKAMI rollout trajectories are leveraged to train closed-loop visuomotor policies, which achieve an average success rate of $79.2\%$ without the need for labor-intensive teleoperation. More videos can be found on our website https://ut-austin-rpl.github.io/OKAMI/.
OKAMI: Teaching Humanoid Robots Manipulation Skills through Single Video Imitation
[ "Jinhan Li", "Yifeng Zhu", "Yuqi Xie", "Zhenyu Jiang", "Mingyo Seo", "Georgios Pavlakos", "Yuke Zhu" ]
Conference
Oral
2410.11792
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://ut-austin-rpl.github.io/OKAMI/
null
https://openreview.net/forum?id=UHxPZgK33I
@inproceedings{ jiang2024roboexp, title={Robo{EXP}: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation}, author={Hanxiao Jiang and Binghao Huang and Ruihai Wu and Zhuoran Li and Shubham Garg and Hooshang Nayyeri and Shenlong Wang and Yunzhu Li}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=UHxPZgK33I} }
We introduce the novel task of interactive scene exploration, wherein robots autonomously explore environments and produce an action-conditioned scene graph (ACSG) that captures the structure of the underlying environment. The ACSG accounts for both low-level information (geometry and semantics) and high-level information (action-conditioned relationships between different entities) in the scene. To this end, we present the Robotic Exploration (RoboEXP) system, which incorporates the Large Multimodal Model (LMM) and an explicit memory design to enhance our system's capabilities. The robot reasons about what and how to explore an object, accumulating new information through the interaction process and incrementally constructing the ACSG. Leveraging the constructed ACSG, we illustrate the effectiveness and efficiency of our RoboEXP system in facilitating a wide range of real-world manipulation tasks involving rigid, articulated objects, nested objects, and deformable objects. Project Page: https://jianghanxiao.github.io/roboexp-web/
RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation
[ "Hanxiao Jiang", "Binghao Huang", "Ruihai Wu", "Zhuoran Li", "Shubham Garg", "Hooshang Nayyeri", "Shenlong Wang", "Yunzhu Li" ]
Conference
Poster
2402.15487
[ "https://github.com/Jianghanxiao/RoboEXP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://jianghanxiao.github.io/roboexp-web/
null
https://openreview.net/forum?id=U5RPcnFhkq
@inproceedings{ han2024fetchbench, title={FetchBench: A Simulation Benchmark for Robot Fetching}, author={Beining Han and Meenal Parakh and Derek Geng and Jack A Defay and Gan Luyang and Jia Deng}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=U5RPcnFhkq} }
Fetching, which includes approaching, grasping, and retrieving, is a critical challenge for robot manipulation tasks. Existing methods primarily focus on table-top scenarios, which do not adequately capture the complexities of environments where both grasping and planning are essential. To address this gap, we propose a new benchmark FetchBench, featuring diverse procedural scenes that integrate both grasping and motion planning challenges. Additionally, FetchBench includes a data generation pipeline that collects successful fetch trajectories for use in imitation learning methods. We implement multiple baselines from the traditional sense-plan-act pipeline to end-to-end behavior models. Our empirical analysis reveals that these methods achieve a maximum success rate of only 20%, indicating substantial room for improvement. Additionally, we identify key bottlenecks within the sense-plan-act pipeline and make recommendations based on the systematic analysis.
FetchBench: A Simulation Benchmark for Robot Fetching
[ "Beining Han", "Meenal Parakh", "Derek Geng", "Jack A Defay", "Gan Luyang", "Jia Deng" ]
Conference
Poster
2406.11793
[ "https://github.com/princeton-vl/FetchBench-CORL2024" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=TzqKmIhcwq
@inproceedings{ yao2024structured, title={Structured Bayesian Meta-Learning for Data-Efficient Visual-Tactile Model Estimation}, author={Shaoxiong Yao and Yifan Zhu and Kris Hauser}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=TzqKmIhcwq} }
Estimating visual-tactile models of deformable objects is challenging because vision suffers from occlusion, while touch data is sparse and noisy. We propose a novel data-efficient method for dense heterogeneous model estimation by leveraging experience from diverse training objects. The method is based on Bayesian Meta-Learning (BML), which can mitigate overfitting high-capacity visual-tactile models by meta-learning an informed prior and naturally achieves few-shot online estimation via posterior estimation. However, BML requires a shared parametric model across tasks but visual-tactile models for diverse objects have different parameter spaces. To address this issue, we introduce Structured Bayesian Meta-Learning (SBML) that incorporates heterogeneous physics models, enabling learning from training objects with varying appearances and geometries. SBML performs zero-shot vision-only prediction of deformable model parameters and few-shot adaptation after a handful of touches. Experiments show that in two classes of heterogeneous objects, namely plants and shoes, SBML outperforms existing approaches in force and torque prediction accuracy in zero- and few-shot settings.
Structured Bayesian Meta-Learning for Data-Efficient Visual-Tactile Model Estimation
[ "Shaoxiong Yao", "Yifan Zhu", "Kris Hauser" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://shaoxiongyao.github.io/SBML
null
https://openreview.net/forum?id=Si2krRESZb
@inproceedings{ peng2024tiebot, title={TieBot: Learning to Knot a Tie from Visual Demonstration through a Real-to-Sim-to-Real Approach}, author={Weikun Peng and Jun Lv and Yuwei Zeng and Haonan Chen and Siheng Zhao and Jichen Sun and Cewu Lu and Lin Shao}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Si2krRESZb} }
The tie-knotting task is highly challenging due to the tie's high deformation and long-horizon manipulation actions. This work presents TieBot, a Real-to-Sim-to-Real learning from visual demonstration system for robots to learn to knot a tie. We introduce the Hierarchical Feature Matching approach to estimate a sequence of the tie's meshes from the demonstration video. With these estimated meshes used as subgoals, we first learn a teacher policy using privileged information. Then, we learn a student policy with point cloud observation by imitating the teacher policy. Lastly, our pipeline applies the learned policy to real-world execution. We demonstrate the effectiveness of TieBot in simulation and the real world. In the real-world experiment, a dual-arm robot successfully knots a tie, achieving a 50% success rate across 10 trials. Videos can be found on https://tiebots.github.io/.
TieBot: Learning to Knot a Tie from Visual Demonstration through a Real-to-Sim-to-Real Approach
[ "Weikun Peng", "Jun Lv", "Yuwei Zeng", "Haonan Chen", "Siheng Zhao", "Jichen Sun", "Cewu Lu", "Lin Shao" ]
Conference
Oral
2407.03245
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://tiebots.github.io/
null
https://openreview.net/forum?id=SfaB20rjVo
@inproceedings{ bauer2024an, title={An Open-Source Soft Robotic Platform for Autonomous Aerial Manipulation in the Wild}, author={Erik Bauer and Marc Bl{\"o}chlinger and Pascal Strauch and Arman Raayatsanati and Cavelti Curdin and Robert K. Katzschmann}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=SfaB20rjVo} }
Aerial manipulation combines the versatility and speed of flying platforms with the functional capabilities of mobile manipulation, which presents significant challenges due to the need for precise localization and control. Traditionally, researchers have relied on off-board perception systems, which are limited to expensive and impractical specially equipped indoor environments. In this work, we introduce a novel platform for autonomous aerial manipulation that exclusively utilizes onboard perception systems. Our platform can perform aerial manipulation in various indoor and outdoor environments without depending on external perception systems. Our experimental results demonstrate the platform's ability to autonomously grasp various objects in diverse settings. This advancement significantly improves the scalability and practicality of aerial manipulation applications by eliminating the need for costly tracking solutions. To accelerate future research, we open source our modern ROS 2 software stack and custom hardware design, making our contributions accessible to the broader research community.
An Open-Source Soft Robotic Platform for Autonomous Aerial Manipulation in the Wild
[ "Erik Bauer", "Marc Blöchlinger", "Pascal Strauch", "Arman Raayatsanati", "Cavelti Curdin", "Robert K. Katzschmann" ]
Conference
Poster
2409.07662
[ "https://github.com/srl-ethz/osprey" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/open-source-soft-platform/open-source-soft-robotic-platform
null
https://openreview.net/forum?id=SW8ntpJl0E
@inproceedings{ kadi2024mjtn, title={{MJ}-{TN}: Pick-and-Place Towel Shaping from Crumpled States based on Transporter Net with Mask-Filtered Joint-Probability Action Inference}, author={Halid Abdulrahim Kadi and Kasim Terzi{\'c}}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=SW8ntpJl0E} }
Towel manipulation is a crucial step towards more general cloth manipulation. However, folding a towel from an arbitrarily crumpled state and recovering from a failed folding step remain critical challenges in robotics. We propose joint-probability action inference (JA-TN) as a way to improve TransporterNet's operational efficiency; to our knowledge, this is the first single data-driven policy to achieve various types of folding from most crumpled states. We present three benchmark domains with a set of shaping tasks and the corresponding oracle policies to facilitate the further development of the field. We also present a simulation-to-reality transfer procedure for vision-based deep learning controllers by processing and augmenting RGB and/or depth images. We also demonstrate JA-TN's ability to integrate with a real camera and a UR3e robot arm, showcasing the method's applicability to real-world tasks.
JA-TN: Pick-and-Place Towel Shaping from Crumpled States based on TransporterNet with Joint-Probability Action Inference
[ "Halid Abdulrahim Kadi", "Kasim Terzić" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=SFJz5iLvur
@inproceedings{ wang2024lessons, title={Lessons from Learning to Spin {\textquotedblleft}Pens{\textquotedblright}}, author={Jun Wang and Ying Yuan and Haichuan Che and Haozhi Qi and Yi Ma and Jitendra Malik and Xiaolong Wang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=SFJz5iLvur} }
In-hand manipulation of pen-like objects is one of the most basic and important skills in our daily lives, as many tools such as hammers and screwdrivers are similarly shaped. However, current learning-based methods struggle with this task due to a lack of high-quality demonstrations and the significant gap between simulation and the real world. In this work, we push the boundaries of learning-based in-hand manipulation systems by demonstrating the capability to spin pen-like objects. We use reinforcement learning to train a policy and generate a high-fidelity trajectory dataset in simulation. This serves two purposes: 1) pre-training a sensorimotor policy in simulation; 2) conducting open-loop trajectory replay in the real world. We then fine-tune the sensorimotor policy using these real-world trajectories to adapt to the real world. With fewer than 50 trajectories, our policy learns to rotate more than ten pen-like objects with different physical properties for multiple revolutions. We present a comprehensive analysis of our design choices and share the lessons learned during development. Videos are shown on https://corl-2024-dexpen.github.io/.
Lessons from Learning to Spin “Pens”
[ "Jun Wang", "Ying Yuan", "Haichuan Che", "Haozhi Qi", "Yi Ma", "Jitendra Malik", "Xiaolong Wang" ]
Conference
Poster
[ "https://github.com/HaozhiQi/penspin" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://penspin.github.io/
null
https://openreview.net/forum?id=S8jQtafbT3
@inproceedings{ xiong2024autonomous, title={Autonomous Interactive Correction {MLLM} for Robust Robotic Manipulation}, author={Chuyan Xiong and Chengyu Shen and Xiaoqi Li and Kaichen Zhou and Jiaming Liu and Ruiping Wang and Hao Dong}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=S8jQtafbT3} }
The ability to reflect on and correct failures is crucial for robotic systems to interact stably with real-life objects. Observing the generalization and reasoning capabilities of Multimodal Large Language Models (MLLMs), previous approaches have aimed to utilize these models to enhance robotic systems accordingly. However, these methods typically focus on high-level planning corrections using an additional MLLM, with limited utilization of failed samples to correct low-level contact poses. To address this gap, we propose an Autonomous Interactive Correction (AIC) MLLM, which makes use of previous low-level interaction experiences to correct SE(3) pose predictions. Specifically, AIC MLLM is initially fine-tuned to acquire both pose prediction and feedback prompt comprehension abilities. We carefully design two types of prompt instructions through interactions with objects: 1) visual masks to highlight unmovable parts for position correction, and 2) textual descriptions to indicate potential directions for rotation correction. During inference, a Feedback Information Extraction module is introduced to recognize the failure cause, allowing AIC MLLM to adaptively correct the pose prediction using the corresponding prompts. To further enhance manipulation stability, we devise a Test Time Adaptation strategy that enables AIC MLLM to better adapt to the current scene configuration. Finally, extensive experiments are conducted in both simulated and real-world environments to evaluate the proposed method. The results demonstrate that our AIC MLLM can efficiently correct failure samples by leveraging interaction experience prompts.
Autonomous Interactive Correction MLLM for Robust Robotic Manipulation
[ "Chuyan Xiong", "Chengyu Shen", "Xiaoqi Li", "Kaichen Zhou", "Jiaming Liu", "Ruiping Wang", "Hao Dong" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=S70MgnIA0v
@inproceedings{ zawalski2024robotic, title={Robotic Control via Embodied Chain-of-Thought Reasoning}, author={Micha{\l} Zawalski and William Chen and Karl Pertsch and Oier Mees and Chelsea Finn and Sergey Levine}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=S70MgnIA0v} }
A key limitation of learned robot control policies is their inability to generalize outside their training data. Recent works on vision-language-action models (VLAs) have shown that the use of large, internet pre-trained vision-language models as the backbone of learned robot policies can substantially improve their robustness and generalization ability. Yet, one of the most exciting capabilities of large vision-language models in other domains is their ability to reason iteratively through complex problems. Can that same capability be brought into robotics to allow policies to improve performance by reasoning about a given task before acting? Naive use of "chain-of-thought" (CoT) style prompting is significantly less effective with standard VLAs because of the relatively simple training examples that are available to them. Additionally, purely semantic reasoning about sub-tasks, as is common in regular CoT, is insufficient for robot policies that need to ground their reasoning in sensory observations and the robot state. To this end, we introduce Embodied Chain-of-Thought Reasoning (ECoT) for VLAs, in which we train VLAs to perform multiple steps of reasoning about plans, sub-tasks, motions, and visually grounded features like object bounding boxes and end effector positions, before predicting the robot action. We design a scalable pipeline for generating synthetic training data for ECoT on large robot datasets. We demonstrate that ECoT increases the absolute success rate of OpenVLA, the current strongest open-source VLA policy, by 28\% across challenging generalization tasks, without any additional robot training data. Additionally, ECoT makes it easier for humans to interpret a policy's failures and correct its behavior using natural language.
Robotic Control via Embodied Chain-of-Thought Reasoning
[ "Michał Zawalski", "William Chen", "Karl Pertsch", "Oier Mees", "Chelsea Finn", "Sergey Levine" ]
Conference
Poster
2407.08693
[ "https://github.com/MichalZawalski/embodied-CoT/" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
embodied-cot.github.io
null
https://openreview.net/forum?id=S2Jwb0i7HN
@inproceedings{ lum2024dextrahg, title={Dextr{AH}-G: Pixels-to-Action Dexterous Arm-Hand Grasping with Geometric Fabrics}, author={Tyler Ga Wei Lum and Martin Matak and Viktor Makoviychuk and Ankur Handa and Arthur Allshire and Tucker Hermans and Nathan D. Ratliff and Karl Van Wyk}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=S2Jwb0i7HN} }
A pivotal challenge in robotics is achieving fast, safe, and robust dexterous grasping across a diverse range of objects, an important goal within industrial applications. However, existing methods often have very limited speed, dexterity, and generality, along with limited or no hardware safety guarantees. In this work, we introduce DextrAH-G, a depth-based dexterous grasping policy trained entirely in simulation that combines reinforcement learning, geometric fabrics, and teacher-student distillation. We address key challenges in joint arm-hand policy learning, such as high-dimensional observation and action spaces, the sim2real gap, collision avoidance, and hardware constraints. DextrAH-G enables a 23 motor arm-hand robot to safely and continuously grasp and transport a large variety of objects at high speed using multi-modal inputs including depth images, allowing generalization across object geometry. Videos at https://sites.google.com/view/dextrah-g.
DextrAH-G: Pixels-to-Action Dexterous Arm-Hand Grasping with Geometric Fabrics
[ "Tyler Ga Wei Lum", "Martin Matak", "Viktor Makoviychuk", "Ankur Handa", "Arthur Allshire", "Tucker Hermans", "Nathan D. Ratliff", "Karl Van Wyk" ]
Conference
Poster
2407.02274
[ "" ]
https://huggingface.co/papers/2407.02274
1
0
0
8
[]
[]
[]
[]
[]
[]
1
https://sites.google.com/view/dextrah-g
null
https://openreview.net/forum?id=RMkdcKK7jq
@inproceedings{ chen2024slr, title={{SLR}: Learning Quadruped Locomotion without Privileged Information}, author={Shiyi Chen and Zeyu Wan and Shiyang Yan and Chun Zhang and Weiyi Zhang and Qiang Li and Debing Zhang and Fasih Ud Din Farrukh}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=RMkdcKK7jq} }
Traditional reinforcement learning control for quadruped robots often relies on privileged information, demanding meticulous selection and precise estimation, thereby imposing constraints on the development process. This work proposes a Self-learning Latent Representation (SLR) method, which achieves high-performance control policy learning without the need for privileged information. To enhance the credibility of our proposed method's evaluation, SLR is compared with open-source code repositories of state-of-the-art algorithms, retaining the original authors' configuration parameters. Across four repositories, SLR consistently outperforms the reference results. Ultimately, the trained policy and encoder empower the quadruped robot to navigate steps, climb stairs, ascend rocks, and traverse various challenging terrains.
SLR: Learning Quadruped Locomotion without Privileged Information
[ "Shiyi Chen", "Zeyu Wan", "Shiyang Yan", "Chun Zhang", "Weiyi Zhang", "Qiang Li", "Debing Zhang", "Fasih Ud Din Farrukh" ]
Conference
Poster
2406.04835
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://11chens.github.io/SLR/
null
https://openreview.net/forum?id=Qz2N4lWBk3
@inproceedings{ hu2024learning, title={Learning Granular Media Avalanche Behavior for Indirectly Manipulating Obstacles on a Granular Slope}, author={Haodi Hu and Feifei Qian and Daniel Seita}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Qz2N4lWBk3} }
Legged robot locomotion on sand slopes is challenging due to the complex dynamics of granular media and how the lack of solid surfaces can hinder locomotion. A promising strategy, inspired by ghost crabs and other organisms in nature, is to strategically interact with rocks, debris, and other obstacles to facilitate movement. To provide legged robots with this ability, we present a novel approach that leverages avalanche dynamics to indirectly manipulate objects on a granular slope. We use a Vision Transformer (ViT) to process image representations of granular dynamics and robot excavation actions. The ViT predicts object movement, which we use to determine which leg excavation action to execute. We collect training data from 100 real physical trials and, at test time, deploy our trained model in novel settings. Experimental results suggest that our model can accurately predict object movements and achieve a success rate ≥ 80% in a variety of manipulation tasks with up to four obstacles, and can also generalize to objects with different physics properties. To our knowledge, this is the first paper to leverage granular media avalanche dynamics to indirectly manipulate objects on granular slopes. Supplementary material is available at https://sites.google.com/view/grain-corl2024/home.
Learning Granular Media Avalanche Behavior for Indirectly Manipulating Obstacles on a Granular Slope
[ "Haodi Hu", "Feifei Qian", "Daniel Seita" ]
Conference
Poster
2407.01898
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/grain-corl2024/home
null
https://openreview.net/forum?id=QtCtY8zl2T
@inproceedings{ goko2024task, title={Task Success Prediction for Open-Vocabulary Manipulation Based on Multi-Level Aligned Representations}, author={Miyu Goko and Motonari Kambara and Daichi Saito and Seitaro Otsuki and Komei Sugiura}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=QtCtY8zl2T} }
In this study, we consider the problem of predicting task success for open-vocabulary manipulation by a manipulator, based on instruction sentences and egocentric images before and after manipulation. Conventional approaches, including multimodal large language models (MLLMs), often fail to appropriately understand detailed characteristics of objects and/or subtle changes in the position of objects. We propose Contrastive $\lambda$-Repformer, which predicts task success for table-top manipulation tasks by aligning images with instruction sentences. Our method integrates the following three key types of features into a multi-level aligned representation: features that preserve local image information; features aligned with natural language; and features structured through natural language. This allows the model to focus on important changes by looking at the differences in the representation between two images. We evaluate Contrastive $\lambda$-Repformer on a dataset based on a large-scale standard dataset, the RT-1 dataset, and on a physical robot platform. The results show that our approach outperformed existing approaches including MLLMs. Our best model achieved an improvement of 8.66 points in accuracy compared to the representative MLLM-based model.
Task Success Prediction for Open-Vocabulary Manipulation Based on Multi-Level Aligned Representations
[ "Miyu Goko", "Motonari Kambara", "Daichi Saito", "Seitaro Otsuki", "Komei Sugiura" ]
Conference
Poster
2410.00436
[ "https://github.com/keio-smilab24/contrastive-lambda-repformer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://5ei74r0.github.io/contrastive-lambda-repformer.page/
null
https://openreview.net/forum?id=Qpjo8l8AFW
@inproceedings{ zhang2024leveraging, title={Leveraging Locality to Boost Sample Efficiency in Robotic Manipulation}, author={Tong Zhang and Yingdong Hu and Jiacheng You and Yang Gao}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Qpjo8l8AFW} }
Given the high cost of collecting robotic data in the real world, sample efficiency is a consistently compelling pursuit in robotics. In this paper, we introduce SGRv2, an imitation learning framework that enhances sample efficiency through improved visual and action representations. Central to the design of SGRv2 is the incorporation of a critical inductive bias—$\textit{action locality}$, which posits that a robot's actions are predominantly influenced by the target object and its interactions with the local environment. Extensive experiments in both simulated and real-world settings demonstrate that action locality is essential for boosting sample efficiency. SGRv2 excels in RLBench tasks with keyframe control using merely 5 demonstrations and surpasses the RVT baseline in 23 of 26 tasks. Furthermore, when evaluated on ManiSkill2 and MimicGen using dense control, SGRv2's success rate is 2.54 times that of SGR. In real-world environments, with only eight demonstrations, SGRv2 can perform a variety of tasks at a markedly higher success rate compared to baseline models.
Leveraging Locality to Boost Sample Efficiency in Robotic Manipulation
[ "Tong Zhang", "Yingdong Hu", "Jiacheng You", "Yang Gao" ]
Conference
Poster
2406.10615
[ "https://github.com/TongZhangTHU/sgr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sgrv2-robot.github.io
null
https://openreview.net/forum?id=Qoy12gkH4C
@inproceedings{ mohan2024progressive, title={Progressive Multi-Modal Fusion for Robust 3D Object Detection}, author={Rohit Mohan and Daniele Cattaneo and Abhinav Valada and Florian Drews}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Qoy12gkH4C} }
Multi-sensor fusion is crucial for accurate 3D object detection in autonomous driving, with cameras and LiDAR being the most commonly used sensors. However, existing methods perform sensor fusion in a single view by projecting features from both modalities either in Bird's Eye View (BEV) or Perspective View (PV), thus sacrificing complementary information such as height or geometric proportions. To address this limitation, we propose ProFusion3D, a progressive fusion framework that combines features in both BEV and PV at both intermediate and object query levels. Our architecture hierarchically fuses local and global features, enhancing the robustness of 3D object detection. Additionally, we introduce a self-supervised mask modeling pre-training strategy to improve multi-modal representation learning and data efficiency through three novel objectives. Extensive experiments on nuScenes and Argoverse2 datasets conclusively demonstrate the efficacy of ProFusion3D. Moreover, ProFusion3D is robust to sensor failure, showing strong performance when only one modality is available.
Progressive Multi-Modal Fusion for Robust 3D Object Detection
[ "Rohit Mohan", "Daniele Cattaneo", "Florian Drews", "Abhinav Valada" ]
Conference
Poster
2410.07475
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
http://profusion3d.cs.uni-freiburg.de/
null
https://openreview.net/forum?id=QUzwHYJ9Hf
@inproceedings{ tziafas2024towards, title={Towards Open-World Grasping with Large Vision-Language Models}, author={Georgios Tziafas and Hamidreza Kasaei}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=QUzwHYJ9Hf} }
The ability to grasp objects in-the-wild from open-ended language instructions constitutes a fundamental challenge in robotics. An open-world grasping system should be able to combine high-level contextual with low-level physical-geometric reasoning in order to be applicable in arbitrary scenarios. Recent works exploit the web-scale knowledge inherent in large language models (LLMs) to plan and reason in robotic context, but rely on external vision and action models to ground such knowledge into the environment and parameterize actuation. This setup suffers from two major bottlenecks: a) the LLM's reasoning capacity is constrained by the quality of visual grounding, and b) LLMs do not contain low-level spatial understanding of the world, which is essential for grasping in contact-rich scenarios. In this work we demonstrate that modern vision-language models (VLMs) are capable of tackling such limitations, as they are implicitly grounded and can jointly reason about semantics and geometry. We propose \texttt{OWG}, an open-world grasping pipeline that combines VLMs with segmentation and grasp synthesis models to unlock grounded world understanding in three stages: open-ended referring segmentation, grounded grasp planning and grasp ranking via contact reasoning, all of which can be applied zero-shot via suitable visual prompting mechanisms. We conduct extensive evaluation in cluttered indoor scene datasets to showcase \texttt{OWG}'s robustness in grounding from open-ended language, as well as open-world robotic grasping experiments in both simulation and hardware that demonstrate superior performance compared to previous supervised and zero-shot LLM-based methods.
Towards Open-World Grasping with Large Vision-Language Models
[ "Georgios Tziafas", "Hamidreza Kasaei" ]
Conference
Poster
2406.18722
[ "https://github.com/gtziafas/OWG" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://gtziafas.github.io/OWG_project/
null
https://openreview.net/forum?id=Q2lGXMZCv8
@inproceedings{ niu2024llarva, title={{LLARVA}: Vision-Action Instruction Tuning Enhances Robot Learning}, author={Dantong Niu and Yuvan Sharma and Giscard Biamby and Jerome Quenum and Yutong Bai and Baifeng Shi and Trevor Darrell and Roei Herzig}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Q2lGXMZCv8} }
In recent years, instruction-tuned Large Multimodal Models (LMMs) have been successful at several tasks, including image captioning and visual question answering; yet leveraging these models remains an open question for robotics. Prior LMMs for robotics applications have been extensively trained on language and action data, but their ability to generalize in different settings has often been less than desired. To address this, we introduce LLARVA, a model trained with a novel instruction tuning method that leverages structured prompts to unify a range of robotic learning tasks, scenarios, and environments. Additionally, we show that predicting intermediate 2-D representations, which we refer to as *visual traces*, can help further align vision and action spaces for robot learning. We generate 8.5M image-visual trace pairs from the Open X-Embodiment dataset in order to pre-train our model, and we evaluate on 12 different tasks in the RLBench simulator as well as a physical Franka Emika Panda 7-DoF robot. Our experiments yield strong performance, demonstrating that LLARVA — using 2-D and language representations — performs well compared to several contemporary baselines, and can generalize across various robot environments and configurations.
LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning
[ "Dantong Niu", "Yuvan Sharma", "Giscard Biamby", "Jerome Quenum", "Yutong Bai", "Baifeng Shi", "Trevor Darrell", "Roei Herzig" ]
Conference
Poster
2406.11815
[ "" ]
https://huggingface.co/papers/2406.11815
0
1
0
8
[]
[]
[]
[]
[]
[]
1
https://llarva24.github.io/
null
https://openreview.net/forum?id=PbQOZntuXO
@inproceedings{ bohlinger2024one, title={One Policy to Run Them All: an End-to-end Learning Approach to Multi-Embodiment Locomotion}, author={Nico Bohlinger and Grzegorz Czechmanowski and Maciej Piotr Krupka and Piotr Kicki and Krzysztof Walas and Jan Peters and Davide Tateo}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=PbQOZntuXO} }
Deep Reinforcement Learning techniques are achieving state-of-the-art results in robust legged locomotion. While there exists a wide variety of legged platforms such as quadrupeds, humanoids, and hexapods, the field is still missing a single learning framework that can control all these different embodiments easily and effectively and possibly transfer, zero or few-shot, to unseen robot embodiments. To close this gap, we introduce URMA, the Unified Robot Morphology Architecture. Our framework brings the end-to-end Multi-Task Reinforcement Learning approach to the realm of legged robots, enabling the learned policy to control any type of robot morphology. The key idea of our method is to allow the network to learn an abstract locomotion controller that can be seamlessly shared between embodiments thanks to our morphology-agnostic encoders and decoders. This flexible architecture can be seen as a first step in building a foundation model for legged robot locomotion. Our experiments show that URMA can learn a locomotion policy on multiple embodiments that can be easily transferred to unseen robot platforms in simulation and the real world.
One Policy to Run Them All: an End-to-end Learning Approach to Multi-Embodiment Locomotion
[ "Nico Bohlinger", "Grzegorz Czechmanowski", "Maciej Piotr Krupka", "Piotr Kicki", "Krzysztof Walas", "Jan Peters", "Davide Tateo" ]
Conference
Poster
2409.06366
[ "https://github.com/nico-bohlinger/one_policy_to_run_them_all" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://nico-bohlinger.github.io/one_policy_to_run_them_all_website/
null
https://openreview.net/forum?id=PAtsxVz0ND
@inproceedings{ lyu2024scissorbot, title={ScissorBot: Learning Generalizable Scissor Skill for Paper Cutting via Simulation, Imitation, and Sim2Real}, author={Jiangran Lyu and Yuxing Chen and Tao Du and Feng Zhu and Huiquan Liu and Yizhou Wang and He Wang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=PAtsxVz0ND} }
This paper tackles the challenging robotic task of generalizable paper cutting using scissors. In this task, scissors attached to a robot arm are driven to accurately cut curves drawn on the paper, which is hung with the top edge fixed. Due to the frequent paper-scissor contact and consequent fracture, the paper features continual deformation and changing topology, which is difficult to model accurately. To deal with such versatile scenarios, we propose ScissorBot, the first learning-based system for robotic paper cutting with scissors via simulation, imitation learning and sim2real. Given the lack of sufficient data for this task, we build PaperCutting-Sim, a paper simulator supporting interactive fracture coupling with scissors, enabling demonstration generation with a heuristic-based oracle policy. To ensure effective execution, we customize an action primitive sequence for imitation learning to constrain its action space, thus alleviating potential compounding errors. Finally, by integrating sim-to-real techniques to bridge the gap between simulation and reality, our policy can be effectively deployed on the real robot. Experimental results demonstrate that our method surpasses all baselines in both simulation and real-world benchmarks and achieves performance comparable to human operation with a single hand under the same conditions.
ScissorBot: Learning Generalizable Scissor Skill for Paper Cutting via Simulation, Imitation, and Sim2Real
[ "Jiangran Lyu", "Yuxing Chen", "Tao Du", "Feng Zhu", "Huiquan Liu", "Yizhou Wang", "He Wang" ]
Conference
Poster
2409.13966
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://pku-epic.github.io/ScissorBot/
null
https://openreview.net/forum?id=OznnnxPLiH
@inproceedings{ wagner2024jointmotion, title={JointMotion: Joint Self-supervision for Joint Motion Prediction}, author={Royden Wagner and Omer Sahin Tas and Marvin Klemp and Carlos Fernandez}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=OznnnxPLiH} }
We present JointMotion, a self-supervised pre-training method for joint motion prediction in self-driving vehicles. Our method jointly optimizes a scene-level objective connecting motion and environments, and an instance-level objective to refine learned representations. Scene-level representations are learned via non-contrastive similarity learning of past motion sequences and environment context. At the instance level, we use masked autoencoding to refine multimodal polyline representations. We complement this with an adaptive pre-training decoder that enables JointMotion to generalize across different environment representations, fusion mechanisms, and dataset characteristics. Notably, our method reduces the joint final displacement error of Wayformer, HPTR, and Scene Transformer models by 3%, 8%, and 12%, respectively; and enables transfer learning between the Waymo Open Motion and the Argoverse 2 Motion Forecasting datasets.
JointMotion: Joint Self-Supervision for Joint Motion Prediction
[ "Royden Wagner", "Omer Sahin Tas", "Marvin Klemp", "Carlos Fernandez" ]
Conference
Poster
2403.05489
[ "https://github.com/kit-mrt/future-motion" ]
https://huggingface.co/papers/2012.11717
1
0
0
3
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=Oce2215aJE
@inproceedings{ sferrazza2024body, title={Body Transformer: Leveraging Robot Embodiment for Policy Learning}, author={Carmelo Sferrazza and Dun-Ming Huang and Fangchen Liu and Jongmin Lee and Pieter Abbeel}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Oce2215aJE} }
In recent years, the transformer architecture has become the de-facto standard for machine learning algorithms applied to natural language processing and computer vision. Despite notable evidence of successful deployment of this architecture in the context of robot learning, we claim that vanilla transformers do not fully exploit the structure of the robot learning problem. We propose Body Transformer (BoT), an architecture that exploits the robot embodiment by providing an inductive bias that guides the learning process. We represent the robot body as a graph of sensors and actuators, and rely on masked attention to pool information through the architecture. The resulting architecture outperforms the vanilla transformer, as well as the classical multilayer perceptron, with respect to task completion, scaling properties, and computational efficiency when representing either imitation or reinforcement learning policies.
Body Transformer: Leveraging Robot Embodiment for Policy Learning
[ "Carmelo Sferrazza", "Dun-Ming Huang", "Fangchen Liu", "Jongmin Lee", "Pieter Abbeel" ]
Conference
Poster
2408.06316
[ "https://github.com/carlosferrazza/BodyTransformer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sferrazza.cc/bot_site/
null
https://openreview.net/forum?id=OGjGtN6hoo
@inproceedings{ peng2024adaptive, title={Adaptive Language-Guided Abstraction from Contrastive Explanations}, author={Andi Peng and Belinda Z. Li and Ilia Sucholutsky and Nishanth Kumar and Julie Shah and Jacob Andreas and Andreea Bobu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=OGjGtN6hoo} }
Many approaches to robot learning begin by inferring a reward function from a set of human demonstrations. To learn a good reward, it is necessary to determine which features of the environment are relevant before determining how these features should be used to compute reward. In particularly complex, high-dimensional environments, human demonstrators often struggle to fully specify their desired behavior from a small number of demonstrations. End-to-end reward learning methods (e.g., using deep networks or program synthesis techniques) often yield brittle reward functions that are sensitive to spurious state features. By contrast, humans can often generalizably learn from a small number of demonstrations by incorporating strong priors about what features of a demonstration are likely meaningful for a task of interest. How do we build robots that leverage this kind of background knowledge when learning from new demonstrations? This paper describes a method named ALGAE, which alternates between using language models to iteratively identify human-meaningful features needed to explain demonstrated behavior, and using standard inverse reinforcement learning techniques to assign weights to these features. Experiments across a variety of both simulated and real-world robot environments show that ALGAE learns generalizable reward functions defined on interpretable features using only small numbers of demonstrations. Importantly, ALGAE can recognize when features are missing, then extract and define those features without any human input -- making it possible to quickly and efficiently acquire rich representations of user behavior.
Adaptive Language-Guided Abstraction from Contrastive Explanations
[ "Andi Peng", "Belinda Z. Li", "Ilia Sucholutsky", "Nishanth Kumar", "Julie Shah", "Jacob Andreas", "Andreea Bobu" ]
Conference
Poster
2409.08212
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=O0oK2bVist
@inproceedings{ cui2024adapting, title={Adapting Humanoid Locomotion over Challenging Terrain via Two-Phase Training}, author={Wenhao Cui and Shengtao Li and Huaxing Huang and Bangyu Qin and Tianchu Zhang and hanjinchao and Liang Zheng and Ziyang Tang and Chenxu Hu and NING Yan and Jiahao Chen and Zheyuan Jiang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=O0oK2bVist} }
Humanoid robots are a key focus in robotics, with their capacity to navigate tough terrains being essential for many uses. While strides have been made, creating adaptable locomotion for complex environments remains difficult. Recent progress in learning-based systems offers hope for robust legged locomotion, but challenges persist, such as tracking accuracy at high speeds and on uneven ground, and joint oscillations in actual robots. This paper proposes a novel training framework to address these challenges by employing a two-phase training paradigm with reinforcement learning. The proposed framework is further enhanced through the integration of command curriculum learning, refining the precision and adaptability of our approach. Additionally, we adapt DreamWaQ to our humanoid locomotion system and improve it to mitigate joint oscillations. Finally, we achieve the sim-to-real transfer of our method. A series of empirical results demonstrate the superior performance of our proposed method compared to state-of-the-art methods.
Adapting Humanoid Locomotion over Challenging Terrain via Two-Phase Training
[ "Wenhao Cui", "Shengtao Li", "Huaxing Huang", "Bangyu Qin", "Tianchu Zhang", "hanjinchao", "Liang Zheng", "Ziyang Tang", "Chenxu Hu", "NING Yan", "Jiahao Chen", "Zheyuan Jiang" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/adapting-humanoid-locomotion/two-phase-training
null
https://openreview.net/forum?id=O05tIQt2d5
@inproceedings{ ren2024topnav, title={{TOP}-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation}, author={Junli Ren and Yikai Liu and Yingru Dai and Junfeng Long and Guijin Wang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=O05tIQt2d5} }
Legged navigation is typically examined within open-world, off-road, and challenging environments. In these scenarios, estimating external disturbances requires a complex synthesis of multi-modal information. This underlines a major limitation in existing works that primarily focus on avoiding obstacles. In this work, we propose TOP-Nav, a novel legged navigation framework that integrates a comprehensive path planner with Terrain awareness, Obstacle avoidance and closed-loop Proprioception. TOP-Nav underscores the synergies between vision and proprioception in both path and motion planning. Within the path planner, we present a terrain estimator that enables the robot to select waypoints on terrains with higher traversability while effectively avoiding obstacles. At the motion planning level, we construct a proprioception advisor from the learning-based locomotion controller to provide motion evaluations for the path planner. Based on the closed-loop motion feedback, we offer online corrections for the vision-based terrain and obstacle estimations. Consequently, TOP-Nav achieves open-world navigation in which the robot can handle terrains or disturbances beyond the distribution of prior knowledge and overcome constraints imposed by visual conditions. Building upon extensive experiments conducted in both simulation and real-world environments, TOP-Nav demonstrates superior performance in open-world navigation compared to existing methods.
TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation
[ "Junli Ren", "Yikai Liu", "Yingru Dai", "Junfeng Long", "Guijin Wang" ]
Conference
Poster
2404.15256
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://top-nav-legged.github.io/TOP-Nav-Legged-page/
null
https://openreview.net/forum?id=NiA8hVdDS7
@inproceedings{ kumawat2024robokoop, title={RoboKoop: Efficient Control Conditioned Representations from Visual Input in Robotics using Koopman Operator}, author={Hemant Kumawat and Biswadeep Chakraborty and Saibal Mukhopadhyay}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=NiA8hVdDS7} }
Developing agents that can perform complex control tasks from high-dimensional observations is a core ability of autonomous agents that requires underlying robust task control policies and adapting the underlying visual representations to the task. Most existing policies need a lot of training samples and treat this problem from the lens of two-stage learning with a controller learned on top of pre-trained vision models. We approach this problem from the lens of Koopman theory and learn visual representations from robotic agents conditioned on specific downstream tasks in the context of learning stabilizing control for the agent. We introduce a Contrastive Spectral Koopman Embedding network that allows us to learn efficient linearized visual representations from the agent's visual data in a high dimensional latent space and utilizes reinforcement learning to perform off-policy control on top of the extracted representations with a linear controller. Our method enhances stability and control in gradient dynamics over time, significantly outperforming existing approaches by improving efficiency and accuracy in learning task policies over extended horizons.
RoboKoop: Efficient Control Conditioned Representations from Visual Input in Robotics using Koopman Operator
[ "Hemant Kumawat", "Biswadeep Chakraborty", "Saibal Mukhopadhyay" ]
Conference
Poster
2409.03107
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=NCnplCf4wo
@inproceedings{ hong2024learning, title={Learning a Distributed Hierarchical Locomotion Controller for Embodied Cooperation}, author={Chuye Hong and Kangyao Huang and Huaping Liu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=NCnplCf4wo} }
In this work, we propose a distributed hierarchical locomotion control strategy for whole-body cooperation and demonstrate the potential for migration into large numbers of agents. Our method utilizes a hierarchical structure to break down complex tasks into smaller, manageable sub-tasks. By incorporating spatiotemporal continuity features, we establish the sequential logic necessary for causal inference and cooperative behaviour in sequential tasks, thereby facilitating efficient and coordinated control strategies. Through training within this framework, we demonstrate enhanced adaptability and cooperation, leading to superior performance in task completion compared to the original methods. Moreover, we construct a set of environments as the benchmark for embodied cooperation.
Learning a Distributed Hierarchical Locomotion Controller for Embodied Cooperation
[ "Chuye Hong", "Kangyao Huang", "Huaping Liu" ]
Conference
Poster
2407.06499
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://d-hrl.github.io/
null
https://openreview.net/forum?id=N5IS6DzBmL
@inproceedings{ feng2024play, title={Play to the Score: Stage-Guided Dynamic Multi-Sensory Fusion for Robotic Manipulation}, author={Ruoxuan Feng and Di Hu and Wenke Ma and Xuelong Li}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=N5IS6DzBmL} }
Humans possess a remarkable talent for flexibly alternating to different senses when interacting with the environment. Picture a chef skillfully gauging the timing of ingredient additions and controlling the heat according to the colors, sounds, and aromas, seamlessly navigating through every stage of the complex cooking process. This ability is founded upon a thorough comprehension of task stages, as achieving the sub-goal within each stage can necessitate the utilization of different senses. In order to endow robots with similar ability, we incorporate the task stages divided by sub-goals into the imitation learning process to accordingly guide dynamic multi-sensory fusion. We propose MS-Bot, a stage-guided dynamic multi-sensory fusion method with coarse-to-fine stage understanding, which dynamically adjusts the priority of modalities based on the fine-grained state within the predicted current stage. We train a robot system equipped with visual, auditory, and tactile sensors to accomplish challenging robotic manipulation tasks: pouring and peg insertion with keyway. Experimental results indicate that our approach enables more effective and explainable dynamic fusion, aligning more closely with the human fusion process than existing methods.
Play to the Score: Stage-Guided Dynamic Multi-Sensory Fusion for Robotic Manipulation
[ "Ruoxuan Feng", "Di Hu", "Wenke Ma", "Xuelong Li" ]
Conference
Oral
2408.01366
[ "https://github.com/GeWu-Lab/MS-Bot" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://gewu-lab.github.io/MS-Bot/
null
https://openreview.net/forum?id=N1K4B8N3n1
@inproceedings{ eappen2024scaling, title={Scaling Safe Multi-Agent Control for Signal Temporal Logic Specifications}, author={Joe Eappen and Zikang Xiong and Dipam Patel and Aniket Bera and Suresh Jagannathan}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=N1K4B8N3n1} }
Existing methods for safe multi-agent control using logic specifications like Signal Temporal Logic (STL) often face scalability issues. This is because they rely either on single-agent perspectives or on Mixed Integer Linear Programming (MILP)-based planners, which are complex to optimize. These methods have proven to be computationally expensive and inefficient when dealing with a large number of agents. To address these limitations, we present a new scalable approach to multi-agent control in this setting. Our method treats the relationships between agents using a graph structure rather than in terms of a single-agent perspective. Moreover, it combines a multi-agent collision avoidance controller with a Graph Neural Network (GNN) based planner, models the system in a decentralized fashion, and trains on STL-based objectives to generate safe and efficient plans for multiple agents, thereby optimizing the satisfaction of complex temporal specifications while also facilitating multi-agent collision avoidance. Our experiments show that our approach significantly outperforms existing methods that use a state-of-the-art MILP-based planner in terms of scalability and performance.
Scaling Safe Multi-Agent Control for Signal Temporal Logic Specifications
[ "Joe Eappen", "Zikang Xiong", "Dipam Patel", "Aniket Bera", "Suresh Jagannathan" ]
Conference
Poster
[ "https://github.com/jeappen/mastl-gcbf" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://jeappen.github.io/mastl-gcbf-website/
null
https://openreview.net/forum?id=MyyZZAPgpy
@inproceedings{ lepert2024shadow, title={{SHADOW}: Leveraging Segmentation Masks for Cross-Embodiment Policy Transfer}, author={Marion Lepert and Ria Doshi and Jeannette Bohg}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=MyyZZAPgpy} }
Data collection in robotics is spread across diverse hardware, and this variation will increase as new hardware is developed. Effective use of this growing body of data requires methods capable of learning from diverse robot embodiments. We consider the setting of training a policy using expert trajectories from a single robot arm (the source), and evaluating on a different robot arm for which no data was collected (the target). We present a data editing scheme termed Shadow, in which the robot during training and evaluation is replaced with a composite segmentation mask of the source and target robots. In this way, the input data distribution at train and test time match closely, enabling robust policy transfer to the new unseen robot while being far more data efficient than approaches that require co-training on large amounts of data from diverse embodiments. We demonstrate that an approach as simple as Shadow is effective both in simulation on varying tasks and robots, and on real robot hardware, where Shadow demonstrates over 2x improvement in success rate compared to the strongest baseline.
SHADOW: Leveraging Segmentation Masks for Cross-Embodiment Policy Transfer
[ "Marion Lepert", "Ria Doshi", "Jeannette Bohg" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://shadow-cross-embodiment.github.io/
null
https://openreview.net/forum?id=MwZJ96Okl3
@inproceedings{ biswas2024modeling, title={Modeling Drivers{\textquoteright} Situational Awareness from Eye Gaze for Driving Assistance}, author={Abhijat Biswas and Pranay Gupta and Shreeya Khurana and David Held and Henny Admoni}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=MwZJ96Okl3} }
Intelligent driving assistance can alert drivers to objects in their environment; however, such systems require a model of drivers' situational awareness (SA) (what aspects of the scene they are already aware of) to avoid unnecessary alerts. Moreover, collecting the data to train such an SA model is challenging: being an internal human cognitive state, driver SA is difficult to measure, and non-verbal signals such as eye gaze are some of the only outward manifestations of it. Traditional methods to obtain SA labels rely on probes that result in sparse, intermittent SA labels unsuitable for modeling a dense, temporally correlated process via machine learning. We propose a novel interactive labeling protocol that captures dense, continuous SA labels and use it to collect an object-level SA dataset in a VR driving simulator. Our dataset comprises 20 unique drivers' SA labels, driving data, and gaze (over 320 minutes of driving) which will be made public. Additionally, we train an SA model from this data, formulating the object-level driver SA prediction problem as a semantic segmentation problem. Our formulation allows all objects in a scene at a timestep to be processed simultaneously, leveraging global scene context and local gaze-object relationships together. Our experiments show that this formulation leads to improved performance over common sense baselines and prior art on the SA prediction task.
Modeling Drivers’ Situational Awareness from Eye Gaze for Driving Assistance
[ "Abhijat Biswas", "Pranay Gupta", "Shreeya Khurana", "David Held", "Henny Admoni" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://harplab.github.io/DriverSA
null
https://openreview.net/forum?id=MsCbbIqHRA
@inproceedings{ qian2024thinkgrasp, title={ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter}, author={Yaoyao Qian and Xupeng Zhu and Ondrej Biza and Shuo Jiang and Linfeng Zhao and Haojie Huang and Yu Qi and Robert Platt}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=MsCbbIqHRA} }
Robotic grasping in cluttered environments remains a significant challenge due to occlusions and complex object arrangements. We have developed ThinkGrasp, a plug-and-play vision-language grasping system that makes use of GPT-4o's advanced contextual reasoning for grasping strategies. ThinkGrasp can effectively identify and generate grasp poses for target objects, even when they are heavily obstructed or nearly invisible, by using goal-oriented language to guide the removal of obstructing objects. This approach progressively uncovers the target object and ultimately grasps it with a few steps and a high success rate. In both simulated and real experiments, ThinkGrasp achieved a high success rate and significantly outperformed state-of-the-art methods in heavily cluttered environments or with diverse unseen objects, demonstrating strong generalization capabilities.
ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter
[ "Yaoyao Qian", "Xupeng Zhu", "Ondrej Biza", "Shuo Jiang", "Linfeng Zhao", "Haojie Huang", "Yu Qi", "Robert Platt" ]
Conference
Poster
2407.11298
[ "https://github.com/H-Freax/ThinkGrasp" ]
https://huggingface.co/papers/2407.11298
1
5
2
8
[]
[]
[]
[]
[]
[]
1
https://h-freax.github.io/thinkgrasp_page/
null
https://openreview.net/forum?id=MfuzopqVOX
@inproceedings{ pan2024lidargrid, title={Li{DARG}rid: Self-supervised 3D Opacity Grid from Li{DAR} for Scene Forecasting}, author={Chuanyu Pan and Aolin Xu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=MfuzopqVOX} }
Timely capturing the dense geometry of the surrounding scene with unlabeled LiDAR data is valuable but under-explored for mobile robotic applications. Its value lies in the huge amount of such unlabeled data, enabling self-supervised learning for various downstream tasks. Current dynamic 3D scene reconstruction approaches however heavily rely on data annotations to tackle the moving objects in the scene. In response, we present LiDARGrid, a 3D opacity grid representation instantly derived from LiDAR points, which captures the dense 3D scene and facilitates scene forecasting. Our method features a novel self-supervised neural volume densification procedure based on an autoencoder and differentiable volume rendering. Leveraging this representation, self-supervised scene forecasting can be performed. Our method is trained on NuScenes dataset for autonomous driving, and is evaluated by predicting future point clouds using the scene forecasting. It notably outperforms state-of-the-art methods in point cloud forecasting in all performance metrics. Beyond scene forecasting, our representation excels in supporting additional tasks such as moving region detection and depth completion, as shown by experiments.
LiDARGrid: Self-supervised 3D Opacity Grid from LiDAR for Scene Forecasting
[ "Chuanyu Pan", "Aolin Xu" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=MfIUKzihC8
@inproceedings{ rowe2024ctrlsim, title={Ct{RL}-Sim: Reactive and Controllable Driving Agents with Offline Reinforcement Learning}, author={Luke Rowe and Roger Girgis and Anthony Gosselin and Bruno Carrez and Florian Golemo and Felix Heide and Liam Paull and Christopher Pal}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=MfIUKzihC8} }
Evaluating autonomous vehicle stacks (AVs) in simulation typically involves replaying driving logs from real-world recorded traffic. However, agents replayed from offline data are not reactive and are hard to control intuitively. Existing approaches address these challenges by proposing methods that rely on heuristics or generative models of real-world data, but these approaches either lack realism or necessitate costly iterative sampling procedures to control the generated behaviours. In this work, we take an alternative approach and propose CtRL-Sim, a method that leverages return-conditioned offline reinforcement learning to efficiently generate reactive and controllable traffic agents. Specifically, we process real-world driving data through a physics-enhanced Nocturne simulator to generate a diverse offline reinforcement learning dataset, annotated with various reward terms. We then train a return-conditioned multi-agent behaviour model that allows for fine-grained manipulation of agent behaviours by modifying the desired returns for the various reward components. This capability enables the generation of a wide range of driving behaviours beyond the scope of the initial dataset, including adversarial behaviours. We demonstrate that CtRL-Sim can generate diverse and realistic safety-critical scenarios while providing fine-grained control over agent behaviours.
CtRL-Sim: Reactive and Controllable Driving Agents with Offline Reinforcement Learning
[ "Luke Rowe", "Roger Girgis", "Anthony Gosselin", "Bruno Carrez", "Florian Golemo", "Felix Heide", "Liam Paull", "Christopher Pal" ]
Conference
Poster
2403.19918
[ "https://github.com/montrealrobotics/ctrl-sim/" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://montrealrobotics.ca/ctrlsim/
null
https://openreview.net/forum?id=M0JtsLuhEE
@inproceedings{ kim2024tsqnet, title={T\${\textasciicircum}2\${SQN}et: A Recognition Model for Manipulating Partially Observed Transparent Tableware Objects}, author={Young Hun Kim and Seungyeon Kim and Yonghyeon Lee and Frank C. Park}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=M0JtsLuhEE} }
Recognizing and manipulating transparent tableware from partial view RGB image observations is made challenging by the difficulty in obtaining reliable depth measurements of transparent objects. In this paper we present the Transparent Tableware SuperQuadric Network (T$^2$SQNet), a neural network model that leverages a family of newly extended deformable superquadrics to produce low-dimensional, instance-wise and accurate 3D geometric representations of transparent objects from partial views. As a byproduct and contribution of independent interest, we also present TablewareNet, a publicly available toolset of seven parametrized shapes based on our extended deformable superquadrics, that can be used to generate new datasets of tableware objects of diverse shapes and sizes. Experiments with T$^2$SQNet trained with TablewareNet show that T$^2$SQNet outperforms existing methods in recognizing transparent objects, in some cases by significant margins, and can be effectively used in robotic applications like decluttering and target retrieval.
T^2SQNet: A Recognition Model for Manipulating Partially Observed Transparent Tableware Objects
[ "Young Hun Kim", "Seungyeon Kim", "Yonghyeon Lee", "Frank C. Park" ]
Conference
Poster
[ "https://github.com/seungyeon-k/T2SQNet-public" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://t2sqnet.github.io/
null
https://openreview.net/forum?id=M0Gv07MUMU
@inproceedings{ tian2024tokenize, title={Tokenize the World into Object-level Knowledge to Address Long-tail Events in Autonomous Driving}, author={Thomas Tian and Boyi Li and Xinshuo Weng and Yuxiao Chen and Edward Schmerling and Yue Wang and Boris Ivanovic and Marco Pavone}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=M0Gv07MUMU} }
The autonomous driving industry is increasingly adopting end-to-end learning from sensory inputs to minimize human biases in system design. Traditional end-to-end driving models, however, suffer from long-tail events due to rare or unseen inputs within their training distributions. To address this, we propose TOKEN, a novel Multi-Modal Large Language Model (MM-LLM) that tokenizes the world into object-level knowledge, enabling better utilization of LLM’s reasoning capabilities to enhance autonomous vehicle planning in long-tail scenarios. TOKEN effectively alleviates data scarcity and inefficient tokenization by producing condensed and semantically enriched representations of the scene. Our results demonstrate that TOKEN excels in grounding, reasoning, and planning capabilities, outperforming existing frameworks with a 27% reduction in trajectory L2 error and a 39% decrease in collision rates in long-tail scenarios. Additionally, our work highlights the importance of representation alignment and structured reasoning in sparking the common-sense reasoning capabilities of MM-LLMs for effective planning.
Tokenize the World into Object-level Knowledge to Address Long-tail Events in Autonomous Driving
[ "Thomas Tian", "Boyi Li", "Xinshuo Weng", "Yuxiao Chen", "Edward Schmerling", "Yue Wang", "Boris Ivanovic", "Marco Pavone" ]
Conference
Poster
2407.00959
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://thomasrantian.github.io/TOKEN_MM-LLM_for_AutoDriving/
null
https://openreview.net/forum?id=LmOF7UAOZ7
@inproceedings{ ko2024a, title={A Planar-Symmetric {SO}(3) Representation for Learning Grasp Detection}, author={Tianyi Ko and Takuya Ikeda and Hiroya Sato and Koichi Nishiwaki}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=LmOF7UAOZ7} }
Planar-symmetric hands, such as parallel grippers, are widely adopted in both research and industrial fields. Their symmetry, however, introduces ambiguity and discontinuity in the SO(3) representation, which hinders both the training and inference of neural network-based grasp detectors. We propose a novel SO(3) representation that can parametrize a pair of planar-symmetric poses with a single parameter set by leveraging the 2D Bingham distribution. We also detail a grasp detector based on our representation, which provides a more consistent rotation output. An intensive evaluation with multiple grippers and objects in both the simulation and the real world quantitatively shows our approach's contribution.
A Planar-Symmetric SO(3) Representation for Learning Grasp Detection
[ "Tianyi Ko", "Takuya Ikeda", "Hiroya Sato", "Koichi Nishiwaki" ]
Conference
Poster
2410.04826
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Lixj7WEGEy
@inproceedings{ liu2024multibrain, title={Multi-Brain Collaborative Control for Quadruped Robots}, author={Hang Liu and Yi Cheng and Rankun Li and Xiaowen Hu and Linqi Ye and Houde Liu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Lixj7WEGEy} }
In the field of quadruped robot locomotion, the Blind Policy and the Perceptive Policy each have their own advantages and limitations. The Blind Policy relies on preset sensor information and algorithms, suitable for known and structured environments, but it lacks adaptability in complex or unknown environments. The Perceptive Policy uses visual sensors to obtain detailed environmental information, allowing it to adapt to complex terrains, but its effectiveness is limited under occluded conditions, especially when perception fails. Unlike the Blind Policy, the Perceptive Policy is not as robust under these conditions. To address these challenges, we propose MBC, a Multi-Brain Collaborative system that incorporates the concepts of Multi-Agent Reinforcement Learning and introduces collaboration between the Blind Policy and the Perceptive Policy. By applying this multi-policy collaborative model to a quadruped robot, the robot can maintain stable locomotion even when the perceptual system is impaired or observational data is incomplete. Our simulations and real-world experiments demonstrate that this system significantly improves the robot's passability and robustness against perception failures in complex environments, validating the effectiveness of multi-policy collaboration in enhancing robotic motion performance.
MBC: Multi-Brain Collaborative Control for Quadruped Robots
[ "Hang Liu", "Yi Cheng", "Rankun Li", "Xiaowen Hu", "Linqi Ye", "Houde Liu" ]
Conference
Poster
2409.16460
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://quad-mbc.github.io/
null
https://openreview.net/forum?id=LiwdXkMsDv
@inproceedings{ li2024uncertaintyaware, title={Uncertainty-Aware Decision Transformer for Stochastic Driving Environments}, author={Zenan Li and Fan Nie and Qiao Sun and Fang Da and Hang Zhao}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=LiwdXkMsDv} }
Offline Reinforcement Learning (RL) enables policy learning without active interactions, making it especially appealing for self-driving tasks. Recent successes of Transformers inspire casting offline RL as sequence modeling, which, however, fails in stochastic environments under the incorrect assumption that identical actions can consistently achieve the same goal. In this paper, we introduce an UNcertainty-awaRE deciSion Transformer (UNREST) for planning in stochastic driving environments without introducing additional transition or complex generative models. Specifically, UNREST estimates uncertainties by conditional mutual information between transitions and returns. Discovering 'uncertainty accumulation' and 'temporal locality' properties of driving environments, we replace the global returns in decision transformers with truncated returns less affected by environments to learn from actual outcomes of actions rather than environment transitions. We also dynamically evaluate uncertainty at inference for cautious planning. Extensive experiments demonstrate UNREST's superior performance in various driving scenarios and the power of our uncertainty estimation strategy.
Uncertainty-Aware Decision Transformer for Stochastic Driving Environments
[ "Zenan Li", "Fan Nie", "Qiao Sun", "Fang Da", "Hang Zhao" ]
Conference
Oral
2309.16397
[ "https://github.com/Emiyalzn/CoRL24-UNREST" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=LZh48DTg71
@inproceedings{ li2024evaluating, title={Evaluating Real-World Robot Manipulation Policies in Simulation}, author={Xuanlin Li and Kyle Hsu and Jiayuan Gu and Oier Mees and Karl Pertsch and Homer Rich Walke and Chuyuan Fu and Ishikaa Lunawat and Isabel Sieh and Sean Kirmani and Sergey Levine and Jiajun Wu and Chelsea Finn and Hao Su and Quan Vuong and Ted Xiao}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=LZh48DTg71} }
The field of robotics has made significant advances towards generalist robot manipulation policies. However, real-world evaluation of such policies is not scalable and faces reproducibility challenges, issues that are likely to worsen as policies broaden the spectrum of tasks they can perform. In this work, we demonstrate that simulation-based evaluation can be a scalable, reproducible, and reliable proxy for real-world evaluation. We identify control and visual disparities between real and simulated environments as key challenges for reliable simulated evaluation and propose approaches for mitigating these gaps without needing to painstakingly craft full-fidelity digital twins. We then employ these techniques to create SIMPLER, a collection of simulated environments for policy evaluation on common real robot manipulation setups. Through over 1500 paired sim-and-real evaluations of manipulation policies across two embodiments and eight task families, we demonstrate strong correlation between policy performance in SIMPLER environments and that in the real world. Beyond aggregated trends, we find that SIMPLER evaluations effectively reflect the real-world behaviors of individual policies, such as sensitivity to various distribution shifts. We are committed to open-sourcing all SIMPLER environments along with our workflow for creating new environments to facilitate research on general-purpose manipulation policies and simulated evaluation frameworks. Website: https://simpler-env.github.io/
Evaluating Real-World Robot Manipulation Policies in Simulation
[ "Xuanlin Li", "Kyle Hsu", "Jiayuan Gu", "Oier Mees", "Karl Pertsch", "Homer Rich Walke", "Chuyuan Fu", "Ishikaa Lunawat", "Isabel Sieh", "Sean Kirmani", "Sergey Levine", "Jiajun Wu", "Chelsea Finn", "Hao Su", "Quan Vuong", "Ted Xiao" ]
Conference
Poster
2405.05941
[ "https://github.com/simpler-env/SimplerEnv" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://simpler-env.github.io/
null
https://openreview.net/forum?id=L4p6zTlj6k
@inproceedings{ wu2024an, title={An Open-Source Low-Cost Holonomic Mobile Manipulator for Robot Learning}, author={Jimmy Wu and William Chong and Robert Holmberg and Aaditya Prasad and Oussama Khatib and Shuran Song and Szymon Rusinkiewicz and Jeannette Bohg}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=L4p6zTlj6k} }
Exploiting the promise of recent advances in imitation learning for mobile manipulation will require the collection of large numbers of human-guided demonstrations. This paper proposes an open-source design for an inexpensive, robust, and flexible mobile manipulator that can support arbitrary arms, enabling a wide range of real-world household mobile manipulation tasks. Crucially, our design uses powered casters to enable the mobile base to be fully holonomic, able to control all planar degrees of freedom independently and simultaneously. This feature makes the base more maneuverable and simplifies many mobile manipulation tasks, eliminating the kinematic constraints that create complex and time-consuming motions in nonholonomic bases. We equip our robot with an intuitive mobile phone teleoperation interface to enable easy data acquisition for imitation learning. In our experiments, we use this interface to collect data and show that the resulting learned policies can successfully perform a variety of common household mobile manipulation tasks.
TidyBot++: An Open-Source Holonomic Mobile Manipulator for Robot Learning
[ "Jimmy Wu", "William Chong", "Robert Holmberg", "Aaditya Prasad", "Yihuai Gao", "Oussama Khatib", "Shuran Song", "Szymon Rusinkiewicz", "Jeannette Bohg" ]
Conference
Poster
[ "https://github.com/jimmyyhwu/tidybot2" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
http://tidybot2.github.io
null
https://openreview.net/forum?id=Ke5xrnBFAR
@inproceedings{ nguyen2024gameplay, title={Gameplay Filters: Robust Zero-Shot Safety through Adversarial Imagination}, author={Duy Phuong Nguyen and Kai-Chieh Hsu and Jaime Fern{\'a}ndez Fisac and Jie Tan and Wenhao Yu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Ke5xrnBFAR} }
Despite the impressive recent advances in learning-based robot control, ensuring robustness to out-of-distribution conditions remains an open challenge. Safety filters can, in principle, keep arbitrary control policies from incurring catastrophic failures by overriding unsafe actions, but existing solutions for complex (e.g., legged) robot dynamics do not span the full motion envelope and instead rely on local, reduced-order models. These filters tend to overly restrict agility and can still fail when perturbed away from nominal conditions. This paper presents the gameplay filter, a new class of predictive safety filter that continually plays out hypothetical matches between its simulation-trained safety strategy and a virtual adversary co-trained to invoke worst-case events and sim-to-real error, and precludes actions that would cause failures down the line. We demonstrate the scalability and robustness of the approach with a first-of-its-kind full-order safety filter for (36-D) quadrupedal dynamics. Physical experiments on two different quadruped platforms demonstrate the superior zero-shot effectiveness of the gameplay filter under large perturbations such as tugging and unmodeled terrain. Experiment videos and open-source software are available online: https://saferobotics.org/research/gameplay-filter
Gameplay Filters: Robust Zero-Shot Safety through Adversarial Imagination
[ "Duy Phuong Nguyen", "Kai-Chieh Hsu", "Wenhao Yu", "Jie Tan", "Jaime Fernández Fisac" ]
Conference
Oral
2405.00846
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://saferobotics.org/research/gameplay-filter
null
https://openreview.net/forum?id=KdVLK0Wo5z
@inproceedings{ zeng2024poliformer, title={PoliFormer: Scaling On-Policy {RL} with Transformers Results in Masterful Navigators}, author={Kuo-Hao Zeng and Kiana Ehsani and Rose Hendrix and Jordi Salvador and Zichen Zhang and Alvaro Herrasti and Ross Girshick and Aniruddha Kembhavi and Luca Weihs}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=KdVLK0Wo5z} }
We present PoliFormer (Policy Transformer), an RGB-only indoor navigation agent trained end-to-end with reinforcement learning at scale that generalizes to the real-world without adaptation despite being trained purely in simulation. PoliFormer uses a foundational vision transformer encoder with a causal transformer decoder enabling long-term memory and reasoning. It is trained for hundreds of millions of interactions across diverse environments, leveraging parallelized, multi-machine rollouts for efficient training with high throughput. PoliFormer is a masterful navigator, producing state-of-the-art results across two distinct embodiments, the LoCoBot and Stretch RE-1 robots, and four navigation benchmarks. It breaks through the plateaus of previous work, achieving an unprecedented 85.5% success rate in object goal navigation on the CHORES-S benchmark, a 28.5% absolute improvement. PoliFormer can also be trivially extended to a variety of downstream applications such as object tracking, multi-object navigation, and open-vocabulary navigation with no finetuning.
PoliFormer: Scaling On-Policy RL with Transformers Results in Masterful Navigators
[ "Kuo-Hao Zeng", "Zichen Zhang", "Kiana Ehsani", "Rose Hendrix", "Jordi Salvador", "Alvaro Herrasti", "Ross Girshick", "Aniruddha Kembhavi", "Luca Weihs" ]
Conference
Oral
2406.20083
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://poliformer.allen.ai
null
https://openreview.net/forum?id=KcW31O0PtL
@inproceedings{ ding2024hintad, title={Hint-{AD}: Holistically Aligned Interpretability in End-to-End Autonomous Driving}, author={Kairui Ding and Boyuan Chen and Yuchen Su and Huan-ang Gao and Bu Jin and Chonghao Sima and Xiaohui Li and Wuqiang Zhang and Paul Barsch and Hongyang Li and Hao Zhao}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=KcW31O0PtL} }
End-to-end architectures in autonomous driving (AD) face a significant challenge in interpretability, impeding human-AI trust. Human-friendly natural language has been explored for tasks such as driving explanation and 3D captioning. However, previous works primarily focused on the paradigm of declarative interpretability, where the natural language interpretations are not grounded in the intermediate outputs of AD systems, making the interpretations only declarative. In contrast, aligned interpretability establishes a connection between language and the intermediate outputs of AD systems. Here we introduce Hint-AD, an integrated AD-language system that generates language aligned with the holistic perception-prediction-planning outputs of the AD model. By incorporating the intermediate outputs and a holistic token mixer sub-network for effective feature adaptation, Hint-AD achieves desirable accuracy and state-of-the-art results in driving language tasks including driving explanation, 3D dense captioning, and command prediction. To facilitate further study of the driving explanation task on nuScenes, we also introduce a human-labeled dataset, Nu-X. Code, dataset, and models are publicly available at https://anonymous.4open.science/r/Hint-AD-1385/.
Hint-AD: Holistically Aligned Interpretability in End-to-End Autonomous Driving
[ "Kairui Ding", "Boyuan Chen", "Yuchen Su", "Huan-ang Gao", "Bu Jin", "Chonghao Sima", "Xiaohui Li", "Wuqiang Zhang", "Paul Barsch", "Hongyang Li", "Hao Zhao" ]
Conference
Poster
2409.06702
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://air-discover.github.io/Hint-AD/
null
https://openreview.net/forum?id=KXsropnmNI
@inproceedings{ zhao2024transferable, title={Transferable Tactile Transformers for Representation Learning Across Diverse Sensors and Tasks}, author={Jialiang Zhao and Yuxiang Ma and Lirui Wang and Edward Adelson}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=KXsropnmNI} }
This paper presents T3: Transferable Tactile Transformers, a framework for tactile representation learning that scales across multiple sensors and tasks. T3 is designed to overcome the contemporary issue that camera-based tactile sensing is extremely heterogeneous, i.e., sensors are built in different form factors and existing datasets were collected for disparate tasks. T3 captures the shared latent information across different sensor-task pairings by constructing a shared trunk transformer with sensor-specific encoders and task-specific decoders. The pre-training of T3 utilizes a novel Foundation Tactile (FoTa) dataset, which is aggregated from several open-sourced datasets and contains over 3 million data points gathered from 13 sensors and 11 tasks. FoTa is the largest and most diverse dataset in tactile sensing to date, and it is made publicly available in a unified format. Across various sensors and tasks, experiments show that T3 pre-trained with FoTa achieved zero-shot transferability in certain sensor-task pairings, can be further fine-tuned with small amounts of domain-specific data, and its performance scales with bigger network sizes. T3 is also effective as a tactile encoder for long-horizon contact-rich manipulation. Results from sub-millimeter multi-pin electronics insertion tasks show that T3 achieved a task success rate 25% higher than that of policies trained with tactile encoders trained from scratch, or 53% higher than without tactile sensing. Data, code, and model checkpoints are open-sourced at https://t3.alanz.info.
Transferable Tactile Transformers for Representation Learning Across Diverse Sensors and Tasks
[ "Jialiang Zhao", "Yuxiang Ma", "Lirui Wang", "Edward Adelson" ]
Conference
Poster
2406.13640
[ "https://github.com/alanzjl/t3" ]
https://huggingface.co/papers/2406.13640
1
0
0
4
[]
[ "alanz-mit/FoundationTactile" ]
[]
[]
[ "alanz-mit/FoundationTactile" ]
[]
1
https://t3.alanz.info/
null
https://openreview.net/forum?id=KULBk5q24a
@inproceedings{ blumenkamp2024covisnet, title={CoViS-Net: A Cooperative Visual Spatial Foundation Model for Multi-Robot Applications}, author={Jan Blumenkamp and Steven Morad and Jennifer Gielis and Amanda Prorok}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=KULBk5q24a} }
Autonomous robot operation in unstructured environments is often underpinned by spatial understanding through vision. Systems composed of multiple concurrently operating robots additionally require access to frequent, accurate and reliable pose estimates. Classical vision-based methods to regress relative pose are commonly computationally expensive (precluding real-time applications), and often lack data-derived priors for resolving ambiguities. In this work, we propose CoViS-Net, a cooperative, multi-robot visual spatial foundation model that learns spatial priors from data, enabling pose estimation as well as general spatial comprehension. Our model is fully decentralized, platform-agnostic, executable in real-time using onboard compute, and does not require existing networking infrastructure. CoViS-Net provides relative pose estimates and a local bird's-eye-view (BEV) representation, even without camera overlap between robots, and can predict BEV representations of unseen regions. We demonstrate its use in a multi-robot formation control task across various real-world settings. We provide supplementary material online and will open source our trained model in due course. https://sites.google.com/view/covis-net
CoViS-Net: A Cooperative Visual Spatial Foundation Model for Multi-Robot Applications
[ "Jan Blumenkamp", "Steven Morad", "Jennifer Gielis", "Amanda Prorok" ]
Conference
Poster
2405.01107
[ "https://github.com/proroklab/CoViS-Net" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://proroklab.github.io/CoViS-Net/
null
https://openreview.net/forum?id=KPcX4jetMw
@inproceedings{ jin2024reasoning, title={Reasoning Grasping via Multimodal Large Language Model}, author={Shiyu Jin and JINXUAN XU and Yutian Lei and Liangjun Zhang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=KPcX4jetMw} }
Despite significant progress in robotic systems for operation within human-centric environments, existing models still heavily rely on explicit human commands to identify and manipulate specific objects. This limits their effectiveness in environments where understanding and acting on implicit human intentions are crucial. In this study, we introduce a novel task: reasoning grasping, where robots need to generate grasp poses based on indirect verbal instructions or intentions. To accomplish this, we propose an end-to-end reasoning grasping model that integrates a multimodal Large Language Model (LLM) with a vision-based robotic grasping framework. In addition, we present the first reasoning grasping benchmark dataset generated from the GraspNet-1Billion dataset, incorporating implicit instructions for object-level and part-level grasping, and this dataset will soon be available for public access. Our results show that directly integrating CLIP or LLaVA with the grasp detection model performs poorly on the challenging reasoning grasping tasks, while our proposed model demonstrates significantly enhanced performance both on the reasoning grasping benchmark and in real-world experiments.
Reasoning Grasping via Multimodal Large Language Model
[ "Shiyu Jin", "JINXUAN XU", "Yutian Lei", "Liangjun Zhang" ]
Conference
Poster
2402.06798
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=KAzku0Uyh1
@inproceedings{ chen2024objectcentric, title={Object-Centric Dexterous Manipulation from Human Motion Data}, author={Yuanpei Chen and Chen Wang and Yaodong Yang and Karen Liu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=KAzku0Uyh1} }
Manipulating objects to achieve desired goal states is a basic but important skill for dexterous manipulation. Human hand motions demonstrate proficient manipulation capability, providing valuable data for training robots with multi-finger hands. Despite this potential, substantial challenges arise due to the embodiment gap between human and robot hands. In this work, we introduce a hierarchical policy learning framework that uses human hand motion data for training object-centric dexterous robot manipulation. At the core of our method is a high-level trajectory generative model, learned with a large-scale human hand motion capture dataset, to synthesize human-like wrist motions conditioned on the desired object goal states. Guided by the generated wrist motions, deep reinforcement learning is further used to train a low-level finger controller that is grounded in the robot's embodiment to physically interact with the object to achieve the goal. Through extensive evaluation across 10 household objects, our approach not only demonstrates superior performance but also showcases generalization capability to novel object geometries and goal states. Furthermore, we transfer the learned policies from simulation to a real-world bimanual dexterous robot system, further demonstrating its applicability in real-world scenarios. Project website: https://cypypccpy.github.io/obj-dex.github.io/.
Object-Centric Dexterous Manipulation from Human Motion Data
[ "Yuanpei Chen", "Chen Wang", "Yaodong Yang", "Karen Liu" ]
Conference
Poster
[ "" ]
https://huggingface.co/papers/2403.07788
0
0
0
6
[]
[ "chenwangj/DexCap-Data" ]
[]
[]
[ "chenwangj/DexCap-Data" ]
[]
1
https://cypypccpy.github.io/obj-dex.github.io/
null
https://openreview.net/forum?id=JZzaRY8m8r
@inproceedings{ lu2024koi, title={{KOI}: Accelerating Online Imitation Learning via Hybrid Key-state Guidance}, author={Jingxian Lu and Wenke Xia and Dong Wang and Zhigang Wang and Bin Zhao and Di Hu and Xuelong Li}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=JZzaRY8m8r} }
Online Imitation Learning methods struggle with the gap between the extensive online exploration space and limited expert trajectories, which hinders efficient exploration due to inaccurate task-aware reward estimation. Inspired by the findings from cognitive neuroscience that task decomposition could facilitate cognitive processing for efficient learning, we hypothesize that an agent could estimate precise task-aware imitation rewards for efficient online exploration by decomposing the target task into the objectives of "what to do" and the mechanisms of "how to do". In this work, we introduce the hybrid Key-state guided Online Imitation (KOI) learning approach, which leverages the integration of semantic and motion key states as guidance for task-aware reward estimation. Initially, we utilize visual-language models to segment the expert trajectory into semantic key states, indicating the objectives of "what to do". Within the intervals between semantic key states, optical flow is employed to capture motion key states to understand the process of "how to do". By integrating a thorough grasp of both semantic and motion key states, we refine the trajectory-matching reward computation, encouraging task-aware exploration for efficient online imitation learning. Our experimental results show that our method is more sample-efficient than previous state-of-the-art approaches in the Meta-World and LIBERO environments. We also conduct real-world robotic manipulation experiments to validate the efficacy of our method, demonstrating the practical applicability of our KOI method.
KOI: Accelerating Online Imitation Learning via Hybrid Key-state Guidance
[ "Jingxian Lu", "Wenke Xia", "Dong Wang", "Zhigang Wang", "Bin Zhao", "Di Hu", "Xuelong Li" ]
Conference
Poster
2408.02912
[ "https://github.com/GeWu-Lab/Keystate_Online_Imitation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://gewu-lab.github.io/Keystate_Online_Imitation/
null
https://openreview.net/forum?id=JScswMfEQ0
@inproceedings{ xu2024mobility, title={Mobility {VLA}: Multimodal Instruction Navigation with Long-Context {VLM}s and Topological Graphs}, author={Zhuo Xu and Hao-Tien Lewis Chiang and Zipeng Fu and Mithun George Jacob and Tingnan Zhang and Tsang-Wei Edward Lee and Wenhao Yu and Connor Schenck and David Rendleman and Dhruv Shah and Fei Xia and Jasmine Hsu and Jonathan Hoech and Pete Florence and Sean Kirmani and Sumeet Singh and Vikas Sindhwani and Carolina Parada and Chelsea Finn and Peng Xu and Sergey Levine and Jie Tan}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=JScswMfEQ0} }
An elusive goal in navigation research is to build an intelligent agent that can understand multimodal instructions including natural language and images, and perform useful navigation. To achieve this, we study a widely useful category of navigation tasks we call Multimodal Instruction Navigation with demonstration Tours (MINT), in which the environment prior is provided through a previously recorded demonstration video. Recent advances in Vision Language Models (VLMs) have shown a promising path in achieving this goal as they demonstrate capabilities in perceiving and reasoning about multimodal inputs. However, VLMs are typically trained to predict textual output, and how to best utilize them in navigation remains an open research question. To solve MINT, we present Mobility VLA, a hierarchical Vision-Language-Action (VLA) navigation policy that combines the environment understanding and common sense reasoning power of long-context VLMs and a robust low-level navigation policy based on topological graphs. The high-level policy consists of a long-context VLM that takes the demonstration tour video and the multimodal user instruction as input to find the goal frame in the tour video. Next, a low-level policy uses the goal frame and an offline-constructed topological graph to generate robot actions at every timestep. We evaluate Mobility VLA in an 836 $m^2$ real-world environment and show that it achieves high end-to-end success rates on previously unsolved multimodal instructions such as ``Where should I return this?'' while holding a plastic bin.
Mobility VLA: Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs
[ "Zhuo Xu", "Hao-Tien Lewis Chiang", "Zipeng Fu", "Mithun George Jacob", "Tingnan Zhang", "Tsang-Wei Edward Lee", "Wenhao Yu", "Connor Schenck", "David Rendleman", "Dhruv Shah", "Fei Xia", "Jasmine Hsu", "Jonathan Hoech", "Pete Florence", "Sean Kirmani", "Sumeet Singh", "Vikas Sindhwani", "Carolina Parada", "Chelsea Finn", "Peng Xu", "Sergey Levine", "Jie Tan" ]
Conference
Poster
2407.07775
[ "" ]
https://huggingface.co/papers/2407.07775
0
3
2
22
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=IssXUYvVTg
@inproceedings{ jia2024mail, title={Ma{IL}: Improving Imitation Learning with Selective State Space Models}, author={Xiaogang Jia and Qian Wang and Atalay Donat and Bowen Xing and Ge Li and Hongyi Zhou and Onur Celik and Denis Blessing and Rudolf Lioutikov and Gerhard Neumann}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=IssXUYvVTg} }
This work introduces Mamba Imitation Learning (MaIL), a novel imitation learning (IL) architecture that offers a computationally efficient alternative to state-of-the-art (SoTA) Transformer policies. Transformer-based policies have achieved remarkable results due to their ability to handle human-recorded data with inherently non-Markovian behavior. However, their high performance comes with the drawback of large models that complicate effective training. While state space models (SSMs) have been known for their efficiency, they were not able to match the performance of Transformers. Mamba significantly improves the performance of SSMs and rivals Transformers, positioning it as an appealing alternative for IL policies. MaIL leverages Mamba as a backbone and introduces a formalism that allows using Mamba in an encoder-decoder structure. This formalism makes it a versatile architecture that can be used as a standalone policy or as part of a more advanced architecture, such as a diffuser in the diffusion process. Extensive evaluations on the LIBERO IL benchmark and three real robot experiments show that MaIL: i) outperforms Transformers in all LIBERO tasks, ii) achieves good performance even with small datasets, iii) is able to effectively process multi-modal sensory inputs, iv) is more robust to input noise compared to Transformers.
MaIL: Improving Imitation Learning with Selective State Space Models
[ "Xiaogang Jia", "Qian Wang", "Atalay Donat", "Bowen Xing", "Ge Li", "Hongyi Zhou", "Onur Celik", "Denis Blessing", "Rudolf Lioutikov", "Gerhard Neumann" ]
Conference
Poster
[ "https://github.com/ALRhub/MaIL" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Isp19rFFV4
@inproceedings{ paudel2024multistrategy, title={Multi-Strategy Deployment-Time Learning and Adaptation for Navigation under Uncertainty}, author={Abhishek Paudel and Xuesu Xiao and Gregory J. Stein}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Isp19rFFV4} }
We present an approach for performant point-goal navigation in unfamiliar partially-mapped environments. When deployed, our robot runs multiple strategies for deployment-time learning and visual domain adaptation in parallel and quickly selects the best-performing among them. Choosing between policies as they are learned or adapted between navigation trials requires continually updating estimates of their performance as they evolve. Leveraging recent work in model-based learning-informed planning under uncertainty, we determine lower bounds on the would-be performance of newly-updated policies on old trials without needing to re-deploy them. This information constrains and accelerates bandit-like policy selection, affording quick selection of the best-performing strategy shortly after it would start to yield good performance. We validate the effectiveness of our approach in simulated maze-like environments, showing improved navigation cost and cumulative regret versus existing baselines.
Multi-Strategy Deployment-Time Learning and Adaptation for Navigation under Uncertainty
[ "Abhishek Paudel", "Xuesu Xiao", "Gregory J. Stein" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=IsZb0wT3Kw
@inproceedings{ jain2024anavi, title={{ANAVI}: Audio Noise Awareness by Visual Interaction}, author={Vidhi Jain and Rishi Veerapaneni and Yonatan Bisk}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=IsZb0wT3Kw} }
We propose Audio Noise Awareness using Visuals of Indoors for NAVIgation for quieter robot path planning. While humans are naturally aware of the noise they make and its impact on those around them, robots currently lack this awareness. A key challenge in achieving audio awareness for robots is estimating how loud the robot’s actions will be at a listener’s location. Since sound depends upon the geometry and material composition of rooms, we train the robot to passively perceive loudness using visual observations of indoor environments. To this end, we generate data on how loud an `impulse' sounds at different listener locations in simulated homes, and train our Acoustic Noise Predictor (ANP). Next, we collect acoustic profiles corresponding to different actions for navigation. Unifying ANP with action acoustics, we demonstrate experiments with wheeled (Hello Robot Stretch) and legged (Unitree Go2) robots so that these robots adhere to the noise constraints of the environment. All simulated and real-world data, code, and model checkpoints are released at https://anavi-corl24.github.io/.
ANAVI: Audio Noise Awareness using Visual of Indoor environments for NAVIgation
[ "Vidhi Jain", "Rishi Veerapaneni", "Yonatan Bisk" ]
Conference
Poster
[ "https://github.com/vidhiJain/anavi" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://anavi-corl24.github.io/
null
https://openreview.net/forum?id=InT87E5sr4
@inproceedings{ liang2024dreamitate, title={Dreamitate: Real-World Visuomotor Policy Learning via Video Generation}, author={Junbang Liang and Ruoshi Liu and Ege Ozguroglu and Sruthi Sudhakar and Achal Dave and Pavel Tokmakov and Shuran Song and Carl Vondrick}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=InT87E5sr4} }
A key challenge in manipulation is learning a policy that can robustly generalize to diverse visual environments. A promising mechanism for learning robust policies is to leverage video generative models, which are pretrained on large-scale datasets of internet videos. In this paper, we propose a visuomotor policy learning framework that fine-tunes a video diffusion model on human demonstrations of a given task. At test time, we generate an example of an execution of the task conditioned on images of a novel scene, and use this synthesized execution directly to control the robot. Our key insight is that using common tools allows us to effortlessly bridge the embodiment gap between the human hand and the robot manipulator. We evaluate our approach on 4 tasks of increasing complexity and demonstrate that capitalizing on internet-scale generative models allows the learned policy to achieve a significantly higher degree of generalization than existing behavior cloning approaches.
Dreamitate: Real-World Visuomotor Policy Learning via Video Generation
[ "Junbang Liang", "Ruoshi Liu", "Ege Ozguroglu", "Sruthi Sudhakar", "Achal Dave", "Pavel Tokmakov", "Shuran Song", "Carl Vondrick" ]
Conference
Poster
2406.16862
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://dreamitate.cs.columbia.edu/
null
https://openreview.net/forum?id=IcOrwlXzMi
@inproceedings{ xu2024vlmgrounder, title={{VLM}-Grounder: A {VLM} Agent for Zero-Shot 3D Visual Grounding}, author={Runsen Xu and Zhiwei Huang and Tai Wang and Yilun Chen and Jiangmiao Pang and Dahua Lin}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=IcOrwlXzMi} }
3D visual grounding is crucial for robots, requiring integration of natural language and 3D scene understanding. Traditional methods that depend on supervised learning with 3D point clouds are limited by scarce datasets. Recently, zero-shot methods leveraging LLMs have been proposed to address the data issue. While effective, these methods often miss detailed scene context, limiting their ability to handle complex queries. In this work, we present VLM-Grounder, a novel framework using vision-language models (VLMs) for zero-shot 3D visual grounding based solely on 2D images. VLM-Grounder dynamically stitches image sequences, employs a grounding and feedback scheme to find the target object, and uses a multi-view ensemble projection to accurately estimate 3D bounding boxes. Experiments on the ScanRefer and Nr3D datasets show that VLM-Grounder outperforms previous zero-shot methods, achieving 51.6\% Acc@0.25 on ScanRefer and 48.0\% Acc on Nr3D, without relying on 3D geometry or object priors.
VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding
[ "Runsen Xu", "Zhiwei Huang", "Tai Wang", "Yilun Chen", "Jiangmiao Pang", "Dahua Lin" ]
Conference
Poster
2410.13860
[ "https://github.com/OpenRobotLab/VLM-Grounder" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://runsenxu.com/projects/VLM-Grounder
null
https://openreview.net/forum?id=HlxRd529nG
@inproceedings{ zhang2024fewshot, title={Few-shot Object Detection without Fine-tuning}, author={Xinyu Zhang and Yuhan Liu and Yuting Wang and Abdeslam Boularias}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=HlxRd529nG} }
Few-shot object detection aims at detecting novel categories given only a few example images. It is a basic skill for a robot to perform tasks in open environments. Recent methods focus on finetuning strategies, with complicated procedures that prohibit a wider application. In this paper, we introduce DE-ViT, a few-shot object detector without the need for finetuning. DE-ViT's novel architecture is based on a new region-propagation mechanism for localization. The propagated region masks are transformed into bounding boxes through a learnable spatial integral layer. Instead of training prototype classifiers, we propose to use prototypes to project ViT features into a subspace that is robust to overfitting on base classes. We evaluate DE-ViT on few-shot and one-shot object detection benchmarks with Pascal VOC, COCO, and LVIS. DE-ViT establishes new state-of-the-art results on all benchmarks. Notably, for COCO, DE-ViT surpasses the few-shot SoTA by 15 mAP on 10-shot and 7.2 mAP on 30-shot, and the one-shot SoTA by 2.8 AP50. For LVIS, DE-ViT outperforms the few-shot SoTA by 17 box APr. Further, we evaluate DE-ViT with a real robot by building a pick-and-place system for sorting novel objects based on example images. The videos of our robot demonstrations, the source code, and the models of DE-ViT can be found at https://mlzxy.github.io/devit.
Detect Everything with Few Examples
[ "Xinyu Zhang", "Yuhan Liu", "Yuting Wang", "Abdeslam Boularias" ]
Conference
Poster
2309.12969
[ "http://github.com/mlzxy/devit" ]
https://huggingface.co/papers/2309.12969
0
0
0
3
[]
[]
[]
[]
[]
[]
1
https://mlzxy.github.io/devit
null
https://openreview.net/forum?id=GVX6jpZOhU
@inproceedings{ yuan2024robopoint, title={RoboPoint: A Vision-Language Model for Spatial Affordance Prediction in Robotics}, author={Wentao Yuan and Jiafei Duan and Valts Blukis and Wilbert Pumacay and Ranjay Krishna and Adithyavairavan Murali and Arsalan Mousavian and Dieter Fox}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=GVX6jpZOhU} }
From rearranging objects on a table to putting groceries into shelves, robots must plan precise action points to perform tasks accurately and reliably. In spite of the recent adoption of vision language models (VLMs) to control robot behavior, VLMs struggle to precisely articulate robot actions using language. We introduce an automatic synthetic data generation pipeline that instruction-tunes VLMs to robotic domains and needs. Using the pipeline, we train RoboPoint, a VLM that predicts image keypoint affordances given language instructions. Compared to alternative approaches, our method requires no real-world data collection or human demonstration, making it much more scalable to diverse environments and viewpoints. In addition, RoboPoint is a general model that enables several downstream applications such as robot navigation, manipulation, and augmented reality (AR) assistance. Our experiments demonstrate that RoboPoint outperforms state-of-the-art VLMs (GPT-4o) and visual prompting techniques (PIVOT) by 21.8% in the accuracy of predicting spatial affordance and by 30.5% in the success rate of downstream tasks. Anonymous project page: https://robopoint.github.io.
RoboPoint: A Vision-Language Model for Spatial Affordance Prediction in Robotics
[ "Wentao Yuan", "Jiafei Duan", "Valts Blukis", "Wilbert Pumacay", "Ranjay Krishna", "Adithyavairavan Murali", "Arsalan Mousavian", "Dieter Fox" ]
Conference
Poster
[ "https://github.com/wentaoyuan/RoboPoint" ]
https://huggingface.co/papers/2406.10721
0
0
0
8
[ "wentao-yuan/robopoint-v1-vicuna-v1.5-13b", "wentao-yuan/robopoint-v1-llama-2-13b", "wentao-yuan/robopoint-v1-vicuna-v1.5-13b-lora", "wentao-yuan/robopoint-v1-llama-2-13b-lora", "wentao-yuan/robopoint-v1-vicuna-v1.5-7b-lora", "wentao-yuan/robopoint-v1-llama-2-7b-lora" ]
[ "wentao-yuan/robopoint-data", "wentao-yuan/where2place" ]
[]
[ "wentao-yuan/robopoint-v1-vicuna-v1.5-13b", "wentao-yuan/robopoint-v1-llama-2-13b", "wentao-yuan/robopoint-v1-vicuna-v1.5-13b-lora", "wentao-yuan/robopoint-v1-llama-2-13b-lora", "wentao-yuan/robopoint-v1-vicuna-v1.5-7b-lora", "wentao-yuan/robopoint-v1-llama-2-7b-lora" ]
[ "wentao-yuan/robopoint-data", "wentao-yuan/where2place" ]
[]
1
https://robo-point.github.io
null
https://openreview.net/forum?id=GGuNkjQSrk
@inproceedings{ e{\ss}er2024action, title={Action Space Design in Reinforcement Learning for Robot Motor Skills}, author={Julian E{\ss}er and Gabriel B. Margolis and Oliver Urbann and S{\"o}ren Kerner and Pulkit Agrawal}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=GGuNkjQSrk} }
Practitioners often rely on intuition to select action spaces for learning. The choice can substantially impact final performance even when choosing among configuration-space representations such as joint position, velocity, and torque commands. We examine action space selection considering a wheeled-legged robot, a quadruped robot, and a simulated suite of locomotion, manipulation, and control tasks. We analyze the mechanisms by which action space can improve performance and conclude that the action space can influence learning performance substantially in a task-dependent way. Moreover, we find that much of the practical impact of action space selection on learning dynamics can be explained by improved policy initialization and behavior between timesteps.
Action Space Design in Reinforcement Learning for Robot Motor Skills
[ "Julian Eßer", "Gabriel B. Margolis", "Oliver Urbann", "Sören Kerner", "Pulkit Agrawal" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=G8UcwxNAoD
@inproceedings{ murray2024teaching, title={Teaching Robots with Show and Tell: Using Foundation Models to Synthesize Robot Policies from Language and Visual Demonstration}, author={Michael Murray and Abhishek Gupta and Maya Cakmak}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=G8UcwxNAoD} }
We introduce a modular, neuro-symbolic framework for teaching robots new skills through language and visual demonstration. Our approach, ShowTell, composes a mixture of foundation models to synthesize robot manipulation programs that are easy to interpret and generalize across a wide range of tasks and environments. ShowTell is designed to handle complex demonstrations involving high-level logic such as loops and conditionals while being intuitive and natural for end-users. We validate this approach through a series of real-world robot experiments, showing that ShowTell outperforms a state-of-the-art baseline based on GPT-4V on a variety of tasks, and that it is able to generalize to unseen environments and within-category objects.
Teaching Robots with Show and Tell: Using Foundation Models to Synthesize Robot Policies from Language and Visual Demonstration
[ "Michael Murray", "Abhishek Gupta", "Maya Cakmak" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://robo-showtell.github.io
null
https://openreview.net/forum?id=G0jqGG8Tta
@inproceedings{ nakamura2024not, title={Not All Errors Are Made Equal: A Regret Metric for Detecting System-level Trajectory Prediction Failures}, author={Kensuke Nakamura and Thomas Tian and Andrea Bajcsy}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=G0jqGG8Tta} }
Robot decision-making increasingly relies on data-driven human prediction models when operating around people. While these models are known to mispredict in out-of-distribution interactions, only a subset of prediction errors impact downstream robot performance. We propose characterizing such ``system-level'' prediction failures via the mathematical notion of regret: high-regret interactions are precisely those in which mispredictions degraded closed-loop robot performance. We further introduce a probabilistic generalization of regret that calibrates failure detection across disparate deployment contexts and renders regret compatible with reward-based and reward-free (e.g., generative) planners. In simulated autonomous driving interactions, we showcase that our system-level failure metric can automatically mine for closed-loop human-robot interactions that state-of-the-art generative human predictors and robot planners struggle with. We further find that the very presence of high-regret data during human predictor fine-tuning is highly predictive of robot re-deployment performance improvements. Furthermore, fine-tuning with the informative but significantly smaller high-regret data (23% of deployment data) is competitive with fine-tuning on the full deployment dataset, indicating a promising avenue for efficiently mitigating system-level human-robot interaction failures.
Not All Errors Are Made Equal: A Regret Metric for Detecting System-level Trajectory Prediction Failures
[ "Kensuke Nakamura", "Thomas Tian", "Andrea Bajcsy" ]
Conference
Poster
2403.04745
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://cmu-intentlab.github.io/not-all-errors/
null
https://openreview.net/forum?id=FO6tePGRZj
@inproceedings{ fu2024mobile, title={Mobile {ALOHA}: Learning Bimanual Mobile Manipulation using Low-Cost Whole-Body Teleoperation}, author={Zipeng Fu and Tony Z. Zhao and Chelsea Finn}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=FO6tePGRZj} }
Imitation learning from human demonstrations has shown impressive performance in robotics. However, most results focus on table-top manipulation, lacking the mobility and dexterity necessary for generally useful tasks. In this work, we develop a system for imitating mobile manipulation tasks that are bimanual and require whole-body control. We first present Mobile ALOHA, a low-cost and whole-body teleoperation system for data collection. It augments the ALOHA system with a mobile base and a whole-body teleoperation interface. Using data collected with Mobile ALOHA, we then perform supervised behavior cloning and find that co-training with existing static ALOHA datasets boosts performance on mobile manipulation tasks. With 50 demonstrations for each task, co-training can increase success rates by up to 90\%, allowing Mobile ALOHA to autonomously complete complex mobile manipulation tasks such as sauteing and serving a piece of shrimp, opening a two-door wall cabinet to store heavy cooking pots, calling and entering an elevator, and lightly rinsing a used pan using a kitchen faucet. We will open-source all the hardware and software implementations upon publication.
Mobile ALOHA: Learning Bimanual Mobile Manipulation using Low-Cost Whole-Body Teleoperation
[ "Zipeng Fu", "Tony Z. Zhao", "Chelsea Finn" ]
Conference
Poster
[ "https://github.com/MarkFzp/mobile-aloha" ]
https://huggingface.co/papers/2401.02117
0
30
3
3
[]
[]
[]
[]
[]
[]
1
https://mobile-aloha.github.io/
null
https://openreview.net/forum?id=FHnVRmeqxf
@inproceedings{ lin2024flowretrieval, title={FlowRetrieval: Flow-Guided Data Retrieval for Few-Shot Imitation Learning}, author={Li-Heng Lin and Yuchen Cui and Amber Xie and Tianyu Hua and Dorsa Sadigh}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=FHnVRmeqxf} }
Imitation learning policies in robotics tend to require an extensive number of demonstrations. It is critical to develop few-shot adaptation strategies that rely only on a small amount of task-specific human demonstrations. Prior works focus on learning general policies from large-scale datasets with diverse behaviors. Recent research has shown that directly retrieving relevant past experiences to augment policy learning has great promise in few-shot settings. However, existing data retrieval methods fall under two extremes: they either rely on the existence of the exact same behaviors with visually similar scenes in the prior data, which is impractical to assume; or they retrieve based on semantic similarity of high-level language descriptions of the task, which might not be that informative about the shared behaviors or motions across tasks. In this work, we investigate how we can leverage motion similarity in the vast amount of cross-task data to improve few-shot imitation learning of the target task. Our key insight is that motion-similar data carry rich information about the effects of actions and object interactions that can be leveraged during few-shot adaptation. We propose FlowRetrieval, an approach that leverages optical flow representations for both extracting similar motions to target tasks from prior data, and for guiding learning of a policy that can maximally benefit from such data. Our results show FlowRetrieval significantly outperforms prior methods across simulated and real-world domains, achieving on average a 27% higher success rate than the best retrieval-based prior method. In the Pen-in-Cup task with a real Franka Emika robot, FlowRetrieval achieves 3.7x the performance of the baseline that learns from all prior and target data.
FlowRetrieval: Flow-Guided Data Retrieval for Few-Shot Imitation Learning
[ "Li-Heng Lin", "Yuchen Cui", "Amber Xie", "Tianyu Hua", "Dorsa Sadigh" ]
Conference
Poster
2408.16944
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://flow-retrieval.github.io
null
https://openreview.net/forum?id=F0rWEID2gb
@inproceedings{ liang2024environment, title={Environment Curriculum Generation via Large Language Models}, author={William Liang and Sam Wang and Hung-Ju Wang and Yecheng Jason Ma and Osbert Bastani and Dinesh Jayaraman}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=F0rWEID2gb} }
Recent work has demonstrated that a promising strategy for teaching robots a wide range of complex skills is to train them on a curriculum of progressively more challenging environments. However, developing an effective curriculum of environment distributions currently requires significant expertise, and this effort must be repeated for every new domain. Our key insight is that environments are often naturally represented as code. Thus, we probe whether effective environment curriculum design can be achieved and automated via code generation by large language models (LLMs). In this paper, we introduce Eurekaverse, an unsupervised environment design algorithm that uses LLMs to sample progressively more challenging, diverse, and learnable environments for skill training. We validate Eurekaverse's effectiveness in the domain of quadrupedal parkour learning, in which a quadruped robot must traverse through a variety of obstacle courses. The automatic curriculum designed by Eurekaverse enables gradual learning of complex parkour skills in simulation and can successfully transfer to the real world, outperforming manual training courses designed by humans.
Environment Curriculum Generation via Large Language Models
[ "William Liang", "Sam Wang", "Hung-Ju Wang", "Osbert Bastani", "Dinesh Jayaraman", "Yecheng Jason Ma" ]
Conference
Oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://eureka-research.github.io/eurekaverse/
null
https://openreview.net/forum?id=EyEE7547vy
@inproceedings{ xiong2024eventdgs, title={Event3{DGS}: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion}, author={Tianyi Xiong and Jiayi Wu and Botao He and Cornelia Fermuller and Yiannis Aloimonos and Heng Huang and Christopher Metzler}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=EyEE7547vy} }
By combining differentiable rendering with explicit point-based scene representations, 3D Gaussian Splatting (3DGS) has demonstrated breakthrough 3D reconstruction capabilities. However, to date 3DGS has had limited impact on robotics, where high-speed egomotion is pervasive: Egomotion introduces motion blur and leads to artifacts in existing frame-based 3DGS reconstruction methods. To address this challenge, we introduce Event3DGS, an event-based 3DGS framework. By exploiting the exceptional temporal resolution of event cameras, Event3DGS can reconstruct high-fidelity 3D structure and appearance under high-speed egomotion. Extensive experiments on multiple synthetic and real-world datasets demonstrate the superiority of Event3DGS compared with existing event-based dense 3D scene reconstruction frameworks; Event3DGS substantially improves reconstruction quality (+3dB) while reducing computational costs by 95\%. Our framework also allows one to incorporate a few motion-blurred frame-based measurements into the reconstruction process to further improve appearance fidelity without loss of structural accuracy.
Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion
[ "Tianyi Xiong", "Jiayi Wu", "Botao He", "Cornelia Fermuller", "Yiannis Aloimonos", "Heng Huang", "Christopher Metzler" ]
Conference
Poster
2406.02972
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://tyxiong23.github.io/event3dgs
null
https://openreview.net/forum?id=EiqQEsOMZt
@inproceedings{ hou2024tamma, title={Ta{MM}a: Target-driven Multi-subscene Mobile Manipulation}, author={Jiawei Hou and Tianyu Wang and Tongying Pan and Shouyan Wang and Xiangyang Xue and Yanwei Fu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=EiqQEsOMZt} }
For everyday service robotics, the ability to navigate back and forth based on tasks in multi-subscene environments and perform delicate manipulations is crucial and highly practical. While existing robotic systems primarily focus on complex tasks within a single scene or simple tasks across scalable scenes individually, robots consisting of a mobile base with a robotic arm face the challenge of efficiently representing multiple subscenes, coordinating the collaboration between the mobile base and the robotic arm, and managing delicate tasks in scalable environments. To address this issue, we propose Target-driven Multi-subscene Mobile Manipulation (\textit{TaMMa}), which efficiently handles mobile base movement and fine-grained manipulation across subscenes. Specifically, we obtain a reliable 3D Gaussian initialization of the whole scene using a sparse 3D point cloud with encoded semantics. Through querying the coarse Gaussians, we acquire the approximate pose of the target, navigate the mobile base to approach it, and reduce the scope of precise target pose estimation to the corresponding subscene. Optimizing while moving, we employ diffusion-based depth completion to optimize fine-grained Gaussians and estimate the target's refined pose. For target-driven manipulation, we adopt Gaussian inpainting to obtain precise poses for the origin and destination of the operation in a \textit{think before you do it} manner, enabling fine-grained manipulation. We conduct various experiments on a real robot to demonstrate that our method effectively and efficiently achieves precise operation tasks across multiple tabletop subscenes.
TaMMa: Target-driven Multi-subscene Mobile Manipulation
[ "Jiawei Hou", "Tianyu Wang", "Tongying Pan", "Shouyan Wang", "Xiangyang Xue", "Yanwei Fu" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=EifoVoIyd5
@inproceedings{ wilson2024what, title={What Matters in Range View 3D Object Detection}, author={Benjamin Wilson and Nicholas Autio Mitchell and Jhony Kaesemodel Pontes and James Hays}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=EifoVoIyd5} }
Lidar-based perception pipelines rely on 3D object detection models to interpret complex scenes. While multiple representations for lidar exist, the range view is enticing since it losslessly encodes the entire lidar sensor output. In this work, we achieve state-of-the-art amongst range view 3D object detection models without using multiple techniques proposed in past range view literature. We explore range view 3D object detection across two modern datasets with substantially different properties: Argoverse 2 and Waymo Open. Our investigation reveals key insights: (1) input feature dimensionality significantly influences the overall performance, (2) surprisingly, employing a classification loss grounded in 3D spatial proximity works as well or better compared to more elaborate IoU-based losses, and (3) addressing non-uniform lidar density via a straightforward range subsampling technique outperforms existing multi-resolution, range-conditioned networks. Our experiments reveal that techniques proposed in recent range view literature are not needed to achieve state-of-the-art performance. Combining the above findings, we establish a new state-of-the-art model for range view 3D object detection — improving AP by 2.2% on the Waymo Open dataset while maintaining a runtime of 10 Hz. We are the first to benchmark a range view model on the Argoverse 2 dataset and outperform strong voxel-based baselines. All models are multi-class and open-source. Code is available at https://github.com/benjaminrwilson/range-view-3d-detection.
What Matters in Range View 3D Object Detection
[ "Benjamin Wilson", "Nicholas Autio Mitchell", "Jhony Kaesemodel Pontes", "James Hays" ]
Conference
Poster
2407.16789
[ "https://github.com/benjaminrwilson/range-view-3d-detection" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=EdVNB2kHv1
@inproceedings{ blank2024scaling, title={Scaling Robot Policy Learning via Zero-Shot Labeling with Foundation Models}, author={Nils Blank and Moritz Reuss and Marcel R{\"u}hle and {\"O}mer Erdin{\c{c}} Ya{\u{g}}murlu and Fabian Wenzel and Oier Mees and Rudolf Lioutikov}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=EdVNB2kHv1} }
A central challenge towards developing robots that can relate human language to their perception and actions is the scarcity of natural language annotations in diverse robot datasets. Moreover, robot policies that follow natural language instructions are typically trained on either templated language or expensive human-labeled instructions, hindering their scalability. To this end, we introduce NILS: Natural language Instruction Labeling for Scalability. NILS automatically labels uncurated, long-horizon robot data at scale in a zero-shot manner without any human intervention. NILS combines pre-trained vision-language foundation models in a sophisticated, carefully considered manner in order to detect objects in a scene, detect object-centric changes, segment tasks from large datasets of unlabelled interaction data and ultimately label behavior datasets. Evaluations on BridgeV2 and a kitchen play dataset show that NILS is able to autonomously annotate diverse robot demonstrations of unlabeled and unstructured datasets, while alleviating several shortcomings of crowdsourced human annotations.
Scaling Robot Policy Learning via Zero-Shot Labeling with Foundation Models
[ "Nils Blank", "Moritz Reuss", "Marcel Rühle", "Ömer Erdinç Yağmurlu", "Fabian Wenzel", "Oier Mees", "Rudolf Lioutikov" ]
Conference
Poster
2410.17772
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
http://robottasklabeling.github.io/
null
https://openreview.net/forum?id=EPujQZWemk
@inproceedings{ wang2024viper, title={Vi{PER}: Visibility-based Pursuit-Evasion via Reinforcement Learning}, author={Yizhuo Wang and Yuhong Cao and Jimmy Chiun and Subhadeep Koley and Mandy Pham and Guillaume Adrien Sartoretti}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=EPujQZWemk} }
In visibility-based pursuit-evasion tasks, a team of mobile pursuer robots with limited sensing capabilities is tasked with detecting all evaders in a multiply-connected planar environment, whose map may or may not be known to pursuers beforehand. This requires tight coordination among multiple agents to ensure that the omniscient and potentially arbitrarily fast evaders are guaranteed to be detected by the pursuers. Whereas existing methods typically rely on a relatively large team of agents to clear the environment, we propose ViPER, a neural solution that leverages a graph attention network to learn a coordinated yet distributed policy via multi-agent reinforcement learning (MARL). We experimentally demonstrate that ViPER significantly outperforms other state-of-the-art non-learning planners, showcasing its emergent coordinated behaviors and adaptability to more challenging scenarios and various team sizes, and finally deploy its learned policies on hardware in an aerial search task.
ViPER: Visibility-based Pursuit-Evasion via Reinforcement Learning
[ "Yizhuo Wang", "Yuhong Cao", "Jimmy Chiun", "Subhadeep Koley", "Mandy Pham", "Guillaume Adrien Sartoretti" ]
Conference
Poster
[ "https://github.com/marmotlab/ViPER" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=EM0wndCeoD
@inproceedings{ chernyadev2024bigym, title={BiGym: A Demo-Driven Mobile Bi-Manual Manipulation Benchmark}, author={Nikita Chernyadev and Nicholas Backshall and Xiao Ma and Yunfan Lu and Younggyo Seo and Stephen James}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=EM0wndCeoD} }
We introduce BiGym, a new benchmark and learning environment for mobile bi-manual demo-driven robotic manipulation. BiGym features 40 diverse tasks set in home environments, ranging from simple target reaching to complex kitchen cleaning. To capture real-world performance accurately, we provide human-collected demonstrations for each task, reflecting the diverse modalities found in real-world robot trajectories. BiGym supports a variety of observations, including proprioceptive data and visual inputs such as RGB and depth from 3 camera views. To validate the usability of BiGym, we thoroughly benchmark state-of-the-art imitation learning and demo-driven reinforcement learning algorithms within the environment and discuss future opportunities.
BiGym: A Demo-Driven Mobile Bi-Manual Manipulation Benchmark
[ "Nikita Chernyadev", "Nicholas Backshall", "Xiao Ma", "Yunfan Lu", "Younggyo Seo", "Stephen James" ]
Conference
Poster
2407.07788
[ "https://github.com/chernyadev/bigym" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://chernyadev.github.io/bigym/
null
https://openreview.net/forum?id=E4K3yLQQ7s
@inproceedings{ he2024visual, title={Visual Manipulation with Legs}, author={Xialin He and Chengjing Yuan and Wenxuan Zhou and Ruihan Yang and David Held and Xiaolong Wang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=E4K3yLQQ7s} }
Animals have the ability to use their arms and legs for both locomotion and manipulation. We envision quadruped robots to have the same versatility. This work presents a system that empowers a quadruped robot to perform object interactions with its legs, drawing inspiration from non-prehensile manipulation techniques. The proposed system has two main components: a visual manipulation policy module and a loco-manipulator module. The visual manipulation policy module decides how the leg should interact with the object and is trained with reinforcement learning (RL) on point cloud observations and object-centric actions. The loco-manipulator controller controls the leg movements and body pose adjustments, and is implemented based on impedance control and Model Predictive Control (MPC). Besides manipulating objects with a single leg, the proposed system can also select between the left and right legs based on the critic maps and move the object to distant goals through robot base adjustment. In the experiments, we evaluate the proposed system on object pose alignment tasks both in simulation and in the real world, demonstrating more versatile leg-based object manipulation skills than previous work.
Visual Manipulation with Legs
[ "Xialin He", "Chengjing Yuan", "Wenxuan Zhou", "Ruihan Yang", "David Held", "Xiaolong Wang" ]
Conference
Poster
2410.11345
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://legged-manipulation.github.io/
null
https://openreview.net/forum?id=DsFQg0G4Xu
@inproceedings{ cie{\'s}lar2024learning, title={Learning Long-Horizon Action Dependencies in Sampling-Based Bilevel Planning}, author={Bart{\l}omiej Cie{\'s}lar and Leslie Pack Kaelbling and Tom{\'a}s Lozano-P{\'e}rez and Jorge Mendez-Mendez}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=DsFQg0G4Xu} }
Autonomous robots will need the ability to make task and motion plans that involve long sequences of actions, e.g. to prepare a meal. One challenge is that the feasibility of actions late in the plan may depend on much earlier actions. This issue is exacerbated if these dependencies exist at a purely geometric level, making them difficult to express for a task planner. Backtracking is a common technique to resolve such geometric dependencies, but its time complexity limits its applicability to short-horizon dependencies. We propose an approach to account for these dependencies by learning a search heuristic for task and motion planning. We evaluate our approach on five quasi-static simulated domains and show a substantial improvement in success rate over the baselines.
Learning Long-Horizon Action Dependencies in Sampling-Based Bilevel Planning
[ "Bartłomiej Cieślar", "Leslie Pack Kaelbling", "Tomás Lozano-Pérez", "Jorge Mendez-Mendez" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Dftu4r5jHe
@inproceedings{ su2024contextaware, title={Context-Aware Replanning with Pre-Explored Semantic Map for Object Navigation}, author={Hung-Ting Su and CY Chen and Po-Chen Ko and Jia-Fong Yeh and Min Sun and Winston H. Hsu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Dftu4r5jHe} }
Pre-explored Semantic Map, constructed through prior exploration using visual language models (VLMs), has proven effective as a foundational element for training-free robotic applications. However, existing approaches assume the map's accuracy and do not provide effective mechanisms for revising decisions based on incorrect maps. This work introduces Context-Aware Replanning (CARe), which estimates map uncertainty through confidence scores and multi-view consistency, enabling the agent to revise erroneous decisions stemming from inaccurate maps without additional labels. We demonstrate the effectiveness of our proposed method using two modern map backbones, VLMaps and OpenMask3D, and show significant improvements in performance on object navigation tasks.
Context-Aware Replanning with Pre-Explored Semantic Map for Object Navigation
[ "Hung-Ting Su", "CY Chen", "Po-Chen Ko", "Jia-Fong Yeh", "Min Sun", "Winston H. Hsu" ]
Conference
Poster
2409.04837
[ "https://github.com/CARe-maps/CARe_experiments" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://care-maps.github.io/
null
https://openreview.net/forum?id=DSdAEsEGhE
@inproceedings{ chane-sane2024soloparkour, title={SoloParkour: Constrained Reinforcement Learning for Visual Locomotion from Privileged Experience}, author={Elliot Chane-Sane and Joseph Amigo and Thomas Flayols and Ludovic Righetti and Nicolas Mansard}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=DSdAEsEGhE} }
Parkour poses a significant challenge for legged robots, requiring navigation through complex environments with agility and precision based on limited sensory inputs. In this work, we introduce a novel method for training end-to-end visual policies, from depth pixels to robot control commands, to achieve agile and safe quadruped locomotion. We formulate robot parkour as a constrained reinforcement learning (RL) problem designed to maximize the emergence of agile skills within the robot's physical limits while ensuring safety. We first train a policy without vision using privileged information about the robot's surroundings. We then generate experience from this privileged policy to warm-start a sample-efficient off-policy RL algorithm from depth images. This allows the robot to adapt behaviors from this privileged experience to visual locomotion while circumventing the high computational costs of RL directly from pixels. We demonstrate the effectiveness of our method on a real Solo-12 robot, showcasing its capability to perform a variety of parkour skills such as walking, climbing, leaping, and crawling.
SoloParkour: Constrained Reinforcement Learning for Visual Locomotion from Privileged Experience
[ "Elliot Chane-Sane", "Joseph Amigo", "Thomas Flayols", "Ludovic Righetti", "Nicolas Mansard" ]
Conference
Poster
2409.13678
[ "https://github.com/Gepetto/SoloParkour" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://gepetto.github.io/SoloParkour/
null
https://openreview.net/forum?id=DDIoRSh8ID
@inproceedings{ liu2024multitask, title={Multi-Task Interactive Robot Fleet Learning with Visual World Models}, author={Huihan Liu and Yu Zhang and Vaarij Betala and Evan Zhang and James Liu and Crystal Ding and Yuke Zhu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=DDIoRSh8ID} }
Recent advancements in large-scale multi-task robot learning offer the potential for deploying robot fleets in household and industrial settings, enabling them to perform diverse tasks across various environments. However, AI-enabled robots often face challenges with generalization and robustness when exposed to real-world variability and uncertainty. We introduce Sirius-Fleet, a multi-task interactive robot fleet learning framework to address these challenges. Sirius-Fleet monitors robot performance during deployment and involves humans to correct the robot's actions when necessary. We employ a visual world model to predict the outcomes of future actions and build anomaly predictors to predict whether they will likely result in anomalies. As robot autonomy improves, the anomaly predictors automatically adapt their prediction criteria, leading to fewer requests for human intervention and gradually reducing human workload over time. Evaluations on large-scale benchmarks demonstrate Sirius-Fleet's effectiveness in improving multi-task policy performance and monitoring accuracy. We demonstrate Sirius-Fleet's performance on two diverse, large-scale multi-task benchmarks: RoboCasa in simulation and Mutex in the real world. More information is available on the project website: https://ut-austin-rpl.github.io/sirius-fleet
Multi-Task Interactive Robot Fleet Learning with Visual World Models
[ "Huihan Liu", "Yu Zhang", "Vaarij Betala", "Evan Zhang", "James Liu", "Crystal Ding", "Yuke Zhu" ]
Conference
Poster
[ "" ]
https://huggingface.co/papers/2310.01362
2
0
0
5
[]
[]
[]
[]
[]
[]
1
https://ut-austin-rpl.github.io/sirius-fleet/
null
https://openreview.net/forum?id=Czs2xH9114
@inproceedings{ zhang2024wococo, title={WoCoCo: Learning Whole-Body Humanoid Control with Sequential Contacts}, author={Chong Zhang and Wenli Xiao and Tairan He and Guanya Shi}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=Czs2xH9114} }
Humanoid activities involving sequential contacts are crucial for complex robotic interactions and operations in the real world and are traditionally solved by model-based motion planning, which is time-consuming and often relies on simplified dynamics models. Although model-free reinforcement learning (RL) has become a powerful tool for versatile and robust whole-body humanoid control, it still requires tedious task-specific tuning and state machine design and suffers from long-horizon exploration issues in tasks involving contact sequences. In this work, we propose WoCoCo (Whole-Body Control with Sequential Contacts), a unified framework to learn whole-body humanoid control with sequential contacts by naturally decomposing the tasks into separate contact stages. Such decomposition facilitates simple and general policy learning pipelines through task-agnostic reward and sim-to-real designs, requiring only one or two task-related terms to be specified for each task. We demonstrate that end-to-end RL-based controllers trained with WoCoCo enable four challenging whole-body humanoid tasks involving diverse contact sequences in the real world without any motion priors: 1) versatile parkour jumping, 2) box loco-manipulation, 3) dynamic clap-and-tap dancing, and 4) cliffside climbing. We further show that WoCoCo is a general framework beyond humanoids by applying it to 22-DoF dinosaur robot loco-manipulation tasks. Website: lecar-lab.github.io/wococo/.
WoCoCo: Learning Whole-Body Humanoid Control with Sequential Contacts
[ "Chong Zhang", "Wenli Xiao", "Tairan He", "Guanya Shi" ]
Conference
Oral
2406.06005
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://lecar-lab.github.io/wococo/
null
https://openreview.net/forum?id=CskuWHDBAr
@inproceedings{ zhuang2024enhancing, title={Enhancing Visual Domain Robustness in Behaviour Cloning via Saliency-Guided Augmentation}, author={Zheyu Zhuang and RUIYU WANG and Nils Ingelhag and Ville Kyrki and Danica Kragic}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=CskuWHDBAr} }
In vision-based behaviour cloning (BC), traditional image-level augmentation methods such as pixel shifting enhance in-domain performance but often struggle with visual domain shifts, including distractors, occlusion, and changes in lighting and backgrounds. Conversely, superimposition-based augmentation, proven effective in computer vision, improves model generalisability by blending training images with out-of-domain images. Despite its potential, the applicability of these methods to vision-based BC remains unclear due to the unique challenges posed by BC demonstrations; specifically, preserving task-critical scene semantics, spatial-temporal relationships, and agent-target interactions is crucial. To address this, we introduce RoboSaGA, a context-aware approach that dynamically adjusts augmentation intensity per pixel based on input saliency derived from the policy. This method ensures aggressive augmentation within task-trivial areas without compromising task-critical information. Furthermore, RoboSaGA seamlessly integrates into existing network architectures without the need for structural changes or additional learning objectives. Our empirical evaluations across both simulated and real-world settings demonstrate that RoboSaGA not only maintains in-domain performance but also significantly improves resilience to distractors and background variations.
Enhancing Visual Domain Robustness in Behaviour Cloning via Saliency-Guided Augmentation
[ "Zheyu Zhuang", "RUIYU WANG", "Nils Ingelhag", "Ville Kyrki", "Danica Kragic" ]
Conference
Poster
[ "https://github.com/Zheyu-Zhuang/RoboSaGA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=CpXiqz6qf4
@inproceedings{ liu2024sonicsense, title={SonicSense: Object Perception from In-Hand Acoustic Vibration}, author={Jiaxun Liu and Boyuan Chen}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=CpXiqz6qf4} }
We introduce SonicSense, a holistic design of hardware and software to enable rich robot object perception through in-hand acoustic vibration sensing. While previous studies have shown promising results with acoustic sensing for object perception, current solutions are constrained to a handful of objects with simple geometries and homogeneous materials, rely on single-finger sensing, and mix training and testing on the same objects. SonicSense enables container inventory status differentiation, heterogeneous material prediction, 3D shape reconstruction, and object re-identification from a diverse set of 83 real-world objects. Our system employs a simple but effective heuristic exploration policy to interact with the objects as well as end-to-end learning-based algorithms to fuse vibration signals to infer object properties. Our framework underscores the significance of in-hand acoustic vibration sensing in advancing robot tactile perception.
SonicSense: Object Perception from In-Hand Acoustic Vibration
[ "Jiaxun Liu", "Boyuan Chen" ]
Conference
Poster
2406.17932
[ "https://github.com/generalroboticslab/SonicSense?tab=readme-ov-file" ]
https://huggingface.co/papers/2406.17932
1
1
0
2
[]
[]
[]
[]
[]
[]
1
http://www.generalroboticslab.com/blogs/blog/2024-06-26-sonicsense/index.html
null
https://openreview.net/forum?id=CPQW5kc0pe
@inproceedings{ liu2024voxactb, title={VoxAct-B: Voxel-Based Acting and Stabilizing Policy for Bimanual Manipulation}, author={I-Chun Arthur Liu and Sicheng He and Daniel Seita and Gaurav S. Sukhatme}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=CPQW5kc0pe} }
Bimanual manipulation is critical to many robotics applications. In contrast to single-arm manipulation, bimanual manipulation tasks are challenging due to higher-dimensional action spaces. Prior works leverage large amounts of data and primitive actions to address this problem, but may suffer from sample inefficiency and limited generalization across various tasks. To this end, we propose VoxAct-B, a language-conditioned, voxel-based method that leverages Vision Language Models (VLMs) to prioritize key regions within the scene and reconstruct a voxel grid. We provide this voxel grid to our bimanual manipulation policy to learn acting and stabilizing actions. This approach enables more efficient policy learning from voxels and is generalizable to different tasks. In simulation, we show that VoxAct-B outperforms strong baselines on fine-grained bimanual manipulation tasks. Furthermore, we demonstrate VoxAct-B on real-world $\texttt{Open Drawer}$ and $\texttt{Open Jar}$ tasks using two UR5s. Code, data, and videos are available at https://voxact-b.github.io.
VoxAct-B: Voxel-Based Acting and Stabilizing Policy for Bimanual Manipulation
[ "I-Chun Arthur Liu", "Sicheng He", "Daniel Seita", "Gaurav S. Sukhatme" ]
Conference
Poster
2407.04152
[ "https://github.com/VoxAct-B/voxactb" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://voxact-b.github.io/