Dataset schema (column, dtype, and observed value range):

| Column | Type | Values / range |
|---|---|---|
| bibtex_url | null | – |
| proceedings | string | lengths 42 – 42 |
| bibtext | string | lengths 240 – 646 |
| abstract | string | lengths 653 – 2.03k |
| title | string | lengths 25 – 127 |
| authors | sequence | lengths 2 – 22 |
| id | string | 1 distinct value |
| type | string | 2 distinct values |
| arxiv_id | string | lengths 0 – 10 |
| GitHub | sequence | lengths 1 – 1 |
| paper_page | string | 35 distinct values |
| n_linked_authors | int64 | -1 – 7 |
| upvotes | int64 | -1 – 45 |
| num_comments | int64 | -1 – 3 |
| n_authors | int64 | -1 – 22 |
| Models | sequence | lengths 0 – 6 |
| Datasets | sequence | lengths 0 – 2 |
| Spaces | sequence | lengths 0 – 0 |
| old_Models | sequence | lengths 0 – 6 |
| old_Datasets | sequence | lengths 0 – 2 |
| old_Spaces | sequence | lengths 0 – 0 |
| paper_page_exists_pre_conf | int64 | 0 – 1 |
| project_page | string | lengths 0 – 89 |
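The dump does not name the dataset repository, so the sketch below assumes the rows have been exported to a local parquet file (`corl2024_papers.parquet` is a placeholder, not a real path). It loads the table with the `datasets` library and filters on conventions visible in the schema: the integer columns use -1 as a "no Hugging Face paper page linked" sentinel, and `type` has the two classes `Poster` and `Oral`.

```python
from datasets import load_dataset

# Placeholder path: the source dump does not give the dataset's repo id.
ds = load_dataset("parquet", data_files="corl2024_papers.parquet", split="train")
print(ds.column_names)  # title, abstract, authors, arxiv_id, paper_page, upvotes, ...

# Integer columns (n_linked_authors, upvotes, num_comments, n_authors) use -1
# as a "no paper page linked" sentinel, so keep only rows with real counts.
with_stats = ds.filter(lambda row: row["upvotes"] >= 0)
print(len(with_stats), "papers have Hugging Face paper-page stats")

# `type` takes two values in this dump: "Poster" and "Oral".
orals = ds.filter(lambda row: row["type"] == "Oral")
print(sorted(r["title"] for r in orals))
```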
null
https://openreview.net/forum?id=zr2GPi3DSb
@inproceedings{ jacob2024gentle, title={Gentle Manipulation of Tree Branches: A Contact-Aware Policy Learning Approach}, author={Jay Jacob and Shizhe Cai and Paulo Vinicius Koerich Borges and Tirthankar Bandyopadhyay and Fabio Ramos}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=zr2GPi3DSb} }
Learning to interact with deformable tree branches with minimal damage is challenging due to their intricate geometry and inscrutable dynamics. Furthermore, traditional vision-based modelling systems suffer from implicit occlusions in dense foliage, severely changing lighting conditions, and limited field of view, in addition to having a significant computational burden preventing real-time deployment. In this work, we simulate a procedural forest with realistic, self-similar branching structures derived from a parametric L-system model, actuated with crude spring abstractions, mirroring real-world variations with domain randomisation over the morphological and dynamic attributes. We then train a novel Proprioceptive Contact-Aware Policy (PCAP) for a reach task using reinforcement learning, aided by a whole-arm contact detection classifier and reward engineering, without external vision, tactile, or torque sensing. The agent deploys novel strategies to evade and mitigate contact impact, favouring a reactive exploration of the task space. Finally, we demonstrate that the learned behavioural patterns can be transferred zero-shot from simulation to real, allowing the arm to navigate around real branches with unseen topology and variable occlusions while minimising the contact forces and expected ruptures.
Gentle Manipulation of Tree Branches: A Contact-Aware Policy Learning Approach
[ "Jay Jacob", "Shizhe Cai", "Paulo Vinicius Koerich Borges", "Tirthankar Bandyopadhyay", "Fabio Ramos" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/pcap/home
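Each record in this dump is a flattened sequence of the schema fields above; fields whose value is an empty string appear to be omitted. As a readability aid, here is the first record (Jacob et al.) reassembled as a plain Python dict, with the long `bibtext` and `abstract` strings elided; the field names come from the schema table and the values from the lines above, so this is an illustrative sketch rather than library output.

```python
# First record reassembled against the schema (bibtext/abstract elided for brevity).
record = {
    "bibtex_url": None,
    "proceedings": "https://openreview.net/forum?id=zr2GPi3DSb",
    "title": "Gentle Manipulation of Tree Branches: "
             "A Contact-Aware Policy Learning Approach",
    "authors": ["Jay Jacob", "Shizhe Cai", "Paulo Vinicius Koerich Borges",
                "Tirthankar Bandyopadhyay", "Fabio Ramos"],
    "id": "Conference",
    "type": "Poster",
    "GitHub": [""],
    # Four -1 sentinels: no Hugging Face paper page is linked for this paper.
    "n_linked_authors": -1, "upvotes": -1, "num_comments": -1, "n_authors": -1,
    # Six empty lists: Models, Datasets, Spaces and their old_* counterparts.
    "Models": [], "Datasets": [], "Spaces": [],
    "old_Models": [], "old_Datasets": [], "old_Spaces": [],
    "paper_page_exists_pre_conf": 0,
    "project_page": "https://sites.google.com/view/pcap/home",
}
```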
null
https://openreview.net/forum?id=zeYaLS2tw5
@inproceedings{ wang2024sparse, title={Sparse Diffusion Policy: A Sparse, Reusable, and Flexible Policy for Robot Learning}, author={Yixiao Wang and Yifei Zhang and Mingxiao Huo and Thomas Tian and Xiang Zhang and Yichen Xie and Chenfeng Xu and Pengliang Ji and Wei Zhan and Mingyu Ding and Masayoshi Tomizuka}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=zeYaLS2tw5} }
The increasing complexity of tasks in robotics demands efficient strategies for multitask and continual learning. Traditional models typically rely on a universal policy for all tasks, facing challenges such as high computational costs and catastrophic forgetting when learning new tasks. To address these issues, we introduce a sparse, reusable, and flexible policy, Sparse Diffusion Policy (SDP). By adopting Mixture of Experts (MoE) within a transformer-based diffusion policy, SDP selectively activates experts and skills, enabling task-specific learning without retraining the entire model. It not only reduces the burden of active parameters but also facilitates the seamless integration and reuse of experts across various tasks. Extensive experiments on diverse tasks in both simulators and the real world show that SDP 1) excels in multitask scenarios with negligible increases in active parameters, 2) prevents forgetting in continual learning new tasks, and 3) enables efficient task transfer, offering a promising solution for advanced robotic applications. More demos and codes can be found on our https://anonymous.4open.science/w/sparse_diffusion_policy-24E7/.
Sparse Diffusion Policy: A Sparse, Reusable, and Flexible Policy for Robot Learning
[ "Yixiao Wang", "Yifei Zhang", "Mingxiao Huo", "Thomas Tian", "Xiang Zhang", "Yichen Xie", "Chenfeng Xu", "Pengliang Ji", "Wei Zhan", "Mingyu Ding", "Masayoshi Tomizuka" ]
Conference
Poster
[ "https://github.com/AnthonyHuo/SDP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://forrest-110.github.io/sparse_diffusion_policy/
null
https://openreview.net/forum?id=zIWu9Kmlqk
@inproceedings{ hirose2024lelan, title={LeLaN: Learning A Language-conditioned Navigation Policy from In-the-Wild Video}, author={Noriaki Hirose and Ajay Sridhar and Catherine Glossop and Oier Mees and Sergey Levine}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=zIWu9Kmlqk} }
We present our method, LeLaN, which uses action-free egocentric data to learn robust language-conditioned object navigation. By leveraging the knowledge of large vision and language models and grounding this knowledge using pre-trained segmentation and depth estimation models, we can label in-the-wild data from a variety of indoor and outdoor environments with diverse instructions that capture a range of objects with varied granularity and noise in their descriptions. Leveraging this method to label over 50 hours of data collected in indoor and outdoor environments, including robot observations, YouTube video tours, and human-collected walking data allows us to train a policy that can outperform state-of-the-art methods on the zero-shot object navigation task in both success rate and precision.
LeLaN: Learning A Language-Conditioned Navigation Policy from In-the-Wild Video
[ "Noriaki Hirose", "Catherine Glossop", "Ajay Sridhar", "Oier Mees", "Sergey Levine" ]
Conference
Poster
[ "https://github.com/NHirose/learning-language-navigation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://learning-language-navigation.github.io/
null
https://openreview.net/forum?id=yqLFb0RnDW
@inproceedings{ agia2024unpacking, title={Unpacking Failure Modes of Generative Policies: Runtime Monitoring of Consistency and Progress}, author={Christopher Agia and Rohan Sinha and Jingyun Yang and Ziang Cao and Rika Antonova and Marco Pavone and Jeannette Bohg}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=yqLFb0RnDW} }
Robot behavior policies trained via imitation learning are prone to failure under conditions that deviate from their training data. Thus, algorithms that monitor learned policies at test time and provide early warnings of failure are necessary to facilitate scalable deployment. We propose Sentinel, a runtime monitoring framework that splits the detection of failures into two complementary categories: 1) Erratic failures, which we detect using statistical measures of temporal action consistency, and 2) task progression failures, where we use Vision Language Models (VLMs) to detect when the policy confidently and consistently takes actions that do not solve the task. Our approach has two key strengths. First, because learned policies exhibit diverse failure modes, combining complementary detectors leads to significantly higher accuracy at failure detection. Second, using a statistical temporal action consistency measure ensures that we quickly detect when multimodal, generative policies exhibit erratic behavior at negligible computational cost. In contrast, we only use VLMs to detect modes that are less time-sensitive. We demonstrate our approach in the context of diffusion policies trained on robotic mobile manipulation domains in both simulation and the real world. By unifying temporal consistency detection and VLM runtime monitoring, Sentinel detects 18% more failures than using either of the two detectors alone and significantly outperforms baselines, thus highlighting the importance of assigning specialized detectors to complementary categories of failure. Qualitative results are made available at sites.google.com/stanford.edu/sentinel.
Unpacking Failure Modes of Generative Policies: Runtime Monitoring of Consistency and Progress
[ "Christopher Agia", "Rohan Sinha", "Jingyun Yang", "Ziang Cao", "Rika Antonova", "Marco Pavone", "Jeannette Bohg" ]
Conference
Poster
2410.04640
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/stanford.edu/sentinel
null
https://openreview.net/forum?id=ypaYtV1CoG
@inproceedings{ grannen2024vocal, title={Vocal Sandbox: Continual Learning and Adaptation for Situated Human-Robot Collaboration}, author={Jennifer Grannen and Siddharth Karamcheti and Suvir Mirchandani and Percy Liang and Dorsa Sadigh}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=ypaYtV1CoG} }
We introduce Vocal Sandbox, a framework for enabling seamless human-robot collaboration in situated environments. Systems in our framework are characterized by their ability to *adapt and continually learn* at multiple levels of abstraction from diverse teaching modalities such as spoken dialogue, object keypoints, and kinesthetic demonstrations. To enable such adaptation, we design lightweight and interpretable learning algorithms that allow users to build an understanding and co-adapt to a robot's capabilities in real-time, as they teach new behaviors. For example, after demonstrating a new low-level skill for "tracking around" an object, users are provided with trajectory visualizations of the robot's intended motion when asked to track a new object. Similarly, users teach high-level planning behaviors through spoken dialogue, using pretrained language models to synthesize behaviors such as "packing an object away" as compositions of low-level skills -- concepts that can be reused and built upon. We evaluate Vocal Sandbox in two settings: collaborative gift bag assembly and LEGO stop-motion animation. In the first setting, we run systematic ablations and user studies with 8 non-expert participants, highlighting the impact of multi-level teaching. Across 23 hours of total robot interaction time, users teach 17 new high-level behaviors with an average of 16 novel low-level skills, requiring 22.1% less active supervision compared to baselines. Qualitatively, users strongly prefer Vocal Sandbox systems due to their ease of use (+31.2%), helpfulness (+13.0%), and overall performance (+18.2%). Finally, we pair an experienced system-user with a robot to film a stop-motion animation; over two hours of continuous collaboration, the user teaches progressively more complex motion skills to produce a 52 second (232 frame) movie. Videos & Supplementary Material: https://vocal-sandbox.github.io
Vocal Sandbox: Continual Learning and Adaptation for Situated Human-Robot Collaboration
[ "Jennifer Grannen", "Siddharth Karamcheti", "Suvir Mirchandani", "Percy Liang", "Dorsa Sadigh" ]
Conference
Oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://vocal-sandbox.github.io
null
https://openreview.net/forum?id=ylZHvlwUcI
@inproceedings{ shang2024theia, title={Theia: Distilling Diverse Vision Foundation Models for Robot Learning}, author={Jinghuan Shang and Karl Schmeckpeper and Brandon B. May and Maria Vittoria Minniti and Tarik Kelestemur and David Watkins and Laura Herlant}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=ylZHvlwUcI} }
Vision-based robot policy learning, which maps visual inputs to actions, necessitates a holistic understanding of diverse visual tasks beyond single-task needs like classification or segmentation. Inspired by this, we introduce Theia, a vision foundation model for robot learning that distills multiple off-the-shelf vision foundation models trained on varied vision tasks. Theia's rich visual representations encode diverse visual knowledge, enhancing downstream robot learning. Extensive experiments demonstrate that Theia outperforms its teacher models and prior robot learning models using less training data and smaller model sizes. Additionally, we quantify the quality of pre-trained visual representations and hypothesize that higher entropy in feature norm distributions leads to improved robot learning performance. Code, models, and demo are available at https://theia.theaiinstitute.com.
Theia: Distilling Diverse Vision Foundation Models for Robot Learning
[ "Jinghuan Shang", "Karl Schmeckpeper", "Brandon B. May", "Maria Vittoria Minniti", "Tarik Kelestemur", "David Watkins", "Laura Herlant" ]
Conference
Poster
2407.20179
[ "https://github.com/bdaiinstitute/theia" ]
https://huggingface.co/papers/2407.20179
7
45
3
7
[ "theaiinstitute/theia-base-patch16-224-cdiv", "theaiinstitute/theia-tiny-patch16-224-cddsv", "theaiinstitute/theia-small-patch16-224-cdiv", "theaiinstitute/theia-tiny-patch16-224-cdiv", "theaiinstitute/theia-base-patch16-224-cddsv", "theaiinstitute/theia-small-patch16-224-cddsv" ]
[]
[]
[ "theaiinstitute/theia-base-patch16-224-cdiv", "theaiinstitute/theia-tiny-patch16-224-cddsv", "theaiinstitute/theia-small-patch16-224-cdiv", "theaiinstitute/theia-tiny-patch16-224-cdiv", "theaiinstitute/theia-base-patch16-224-cddsv", "theaiinstitute/theia-small-patch16-224-cddsv" ]
[]
[]
1
https://theia.theaiinstitute.com/
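The Theia record above is one of the few rows whose `Models` field is populated; each entry is a Hugging Face Hub repo id. A small sketch (assuming the repos are public and network access is available) that resolves those ids with `huggingface_hub`:

```python
from huggingface_hub import model_info

# Repo ids copied from the `Models` field of the Theia record above.
theia_models = [
    "theaiinstitute/theia-base-patch16-224-cdiv",
    "theaiinstitute/theia-tiny-patch16-224-cddsv",
    "theaiinstitute/theia-small-patch16-224-cdiv",
    "theaiinstitute/theia-tiny-patch16-224-cdiv",
    "theaiinstitute/theia-base-patch16-224-cddsv",
    "theaiinstitute/theia-small-patch16-224-cddsv",
]

for repo_id in theia_models:
    info = model_info(repo_id)  # raises if the repo id does not resolve
    print(f"{repo_id}: {info.downloads} downloads, last modified {info.last_modified}")
```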
null
https://openreview.net/forum?id=yYujuPxjDK
@inproceedings{ oelerich2024languageguided, title={Language-guided Manipulator Motion Planning with Bounded Task Space}, author={Thies Oelerich and Christian Hartl-Nesic and Andreas Kugi}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=yYujuPxjDK} }
Language-based robot control is a powerful and versatile method to control a robot manipulator where large language models (LLMs) are used to reason about the environment. However, the generated robot motions by these controllers often lack safety and performance, resulting in jerky movements. In this work, a novel modular framework for zero-shot motion planning for manipulation tasks is developed. The modular components do not require any motion-planning-specific training. An LLM is combined with a vision model to create Python code that interacts with a novel path planner, which creates a piecewise linear reference path with bounds around the path that ensure safety. An optimization-based planner, the BoundMPC framework, is utilized to execute optimal, safe, and collision-free trajectories along the reference path. The effectiveness of the approach is shown on various everyday manipulation tasks in simulation and experiment, shown in the video at www.acin.tuwien.ac.at/42d2.
Language-guided Manipulator Motion Planning with Bounded Task Space
[ "Thies Oelerich", "Christian Hartl-Nesic", "Andreas Kugi" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ySI0tBYxpz
@inproceedings{ mitchell2024gaitor, title={Gaitor: Learning a Unified Representation Across Gaits for Real-World Quadruped Locomotion}, author={Alexander Luis Mitchell and Wolfgang Merkt and Aristotelis Papatheodorou and Ioannis Havoutis and Ingmar Posner}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=ySI0tBYxpz} }
The current state-of-the-art in quadruped locomotion is able to produce a variety of complex motions. These methods either rely on switching between a discrete set of skills or learn a distribution across gaits using complex black-box models. Alternatively, we present Gaitor, which learns a disentangled and 2D representation across locomotion gaits. This learnt representation forms a planning space for closed-loop control delivering continuous gait transitions and perceptive terrain traversal. Gaitor’s latent space is readily interpretable and we discover that during gait transitions, novel unseen gaits emerge. The latent space is disentangled with respect to footswing heights and lengths. This means that these gait characteristics can be varied independently in the 2D latent representation. Together with a simple terrain encoding and a learnt planner operating in the latent space, Gaitor can take motion commands including desired gait type and swing characteristics all while reacting to uneven terrain. We evaluate Gaitor in both simulation and the real world on the ANYmal C platform. To the best of our knowledge, this is the first work learning a unified and interpretable latent space for multiple gaits, resulting in continuous blending between different locomotion modes on a real quadruped robot. An overview of the methods and results in this paper is found at https://youtu.be/eVFQbRyilCA.
Gaitor: Learning a Unified Representation Across Gaits for Real-World Quadruped Locomotion
[ "Alexander Luis Mitchell", "Wolfgang Merkt", "Aristotelis Papatheodorou", "Ioannis Havoutis", "Ingmar Posner" ]
Conference
Poster
2405.19452
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=yNQu9zqx6X
@inproceedings{ xue2024robust, title={Robust Manipulation Primitive Learning via Domain Contraction}, author={Teng Xue and Amirreza Razmjoo and Suhan Shetty and Sylvain Calinon}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=yNQu9zqx6X} }
Contact-rich manipulation plays an important role in everyday life, but uncertain parameters pose significant challenges to model-based planning and control. To address this issue, domain adaptation and domain randomization have been proposed to learn robust policies. However, they either lose the generalization ability to diverse instances or perform conservatively due to neglecting instance-specific information. In this paper, we propose a bi-level approach to learn robust manipulation primitives, including parameter-augmented policy learning using multiple models with tensor approximation, and parameter-conditioned policy retrieval through domain contraction. This approach unifies domain randomization and domain adaptation, providing optimal behaviors while keeping generalization ability. We validate the proposed method on three contact-rich manipulation primitives: hitting, pushing, and reorientation. The experimental results showcase the superior performance of our approach in generating robust policies for instances with diverse physical parameters.
Robust Manipulation Primitive Learning via Domain Contraction
[ "Teng Xue", "Amirreza Razmjoo", "Suhan Shetty", "Sylvain Calinon" ]
Conference
Poster
2410.11600
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/robustpl
null
https://openreview.net/forum?id=y8XkuQIrvI
@inproceedings{ papagiannis2024miles, title={{MILES}: Making Imitation Learning Easy with Self-Supervision}, author={Georgios Papagiannis and Edward Johns}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=y8XkuQIrvI} }
Data collection in imitation learning often requires significant, laborious human supervision, such as numerous demonstrations and/or frequent environment resets for methods that incorporate reinforcement learning. In this work, we propose an alternative approach, MILES: a fully autonomous, self-supervised data collection paradigm and show that this enables efficient policy learning from just a single demonstration and a single environment reset. Our method, MILES, autonomously learns a policy for returning to and then following the single demonstration, whilst being self-guided during data collection, eliminating the need for additional human interventions. We evaluate MILES across several real-world tasks, including tasks that require precise contact-rich manipulation, and find that, under the constraints of a single demonstration and no repeated environment resetting, MILES significantly outperforms state-of-the-art alternatives like reinforcement learning and inverse reinforcement learning. Videos of our experiments, code, and supplementary material can be found on our website: www.robot-learning.uk/miles.
MILES: Making Imitation Learning Easy with Self-Supervision
[ "Georgios Papagiannis", "Edward Johns" ]
Conference
Poster
2410.19693
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://www.robot-learning.uk/miles
null
https://openreview.net/forum?id=xeFKtSXPMd
@inproceedings{ sanghvi2024occam, title={{OCCAM}: Online Continuous Controller Adaptation with Meta-Learned Models}, author={Hersh Sanghvi and Spencer Folk and Camillo Jose Taylor}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=xeFKtSXPMd} }
Control tuning and adaptation present a significant challenge to the usage of robots in diverse environments. It is often nontrivial to find a single set of control parameters by hand that work well across the broad array of environments and conditions that a robot might encounter. Automated adaptation approaches must utilize prior knowledge about the system while adapting to significant domain shifts to find new control parameters quickly. In this work, we present a general framework for online controller adaptation that deals with these challenges. We combine meta-learning with Bayesian recursive estimation to learn prior predictive models of system performance that quickly adapt to online data, even when there is significant domain shift. These predictive models can be used as cost functions within efficient sampling-based optimization routines to find new control parameters online that maximize system performance. Our framework is powerful and flexible enough to adapt controllers for four diverse systems: a simulated race car, a simulated quadrupedal robot, and a simulated and physical quadrotor.
OCCAM: Online Continuous Controller Adaptation with Meta-Learned Models
[ "Hersh Sanghvi", "Spencer Folk", "Camillo Jose Taylor" ]
Conference
Oral
2406.17620
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://hersh500.github.io/occam
null
https://openreview.net/forum?id=xcBH8Jhmbi
@inproceedings{ wang2024discovering, title={Discovering Robotic Interaction Modes with Discrete Representation Learning}, author={Liquan Wang and Ankit Goyal and Haoping Xu and Animesh Garg}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=xcBH8Jhmbi} }
Human actions manipulating articulated objects, such as opening and closing a drawer, can be categorized into multiple modalities we define as interaction modes. Traditional robot learning approaches lack discrete representations of these modes, which are crucial for empirical sampling and grounding. In this paper, we present ActAIM2, which learns a discrete representation of robot manipulation interaction modes in a purely unsupervised fashion, without the use of expert labels or simulator-based privileged information. Utilizing novel data collection methods involving simulator rollouts, ActAIM2 consists of an interaction mode selector and a low-level action predictor. The selector generates discrete representations of potential interaction modes with self-supervision, while the predictor outputs corresponding action trajectories. Our method is validated through its success rate in manipulating articulated objects and its robustness in sampling meaningful actions from the discrete representation. Extensive experiments demonstrate ActAIM2’s effectiveness in enhancing manipulability and generalizability over baselines and ablation studies. For videos and additional results, see our website: https://actaim2.github.io/.
Discovering Robotic Interaction Modes with Discrete Representation Learning
[ "Liquan Wang", "Ankit Goyal", "Haoping Xu", "Animesh Garg" ]
Conference
Poster
[ "https://github.com/pairlab/ActAIM.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://actaim2.github.io/
null
https://openreview.net/forum?id=xYleTh2QhS
@inproceedings{ yu2024adaptive, title={Adaptive Diffusion Terrain Generator for Autonomous Uneven Terrain Navigation}, author={Youwei Yu and Junhong Xu and Lantao Liu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=xYleTh2QhS} }
Model-free reinforcement learning has emerged as a powerful method for developing robust robot control policies capable of navigating through complex and unstructured terrains. The effectiveness of these methods hinges on two essential elements: (1) the use of massively parallel physics simulations to expedite policy training, and (2) the deployment of an environment generator tasked with crafting terrains that are sufficiently challenging yet attainable, thereby facilitating continuous policy improvement. Existing methods of environment generation often rely on heuristics constrained by a set of parameters, limiting the diversity and realism. In this work, we introduce the Adaptive Diffusion Terrain Generator (ADTG), a novel method that leverages Denoising Diffusion Probabilistic Models (DDPMs) to dynamically expand an existing training environment by adding more diverse and complex terrains tailored to the current policy. Unlike conventional methods, ADTG adapts the terrain complexity and variety based on the evolving capabilities of the current policy. This is achieved through two primary mechanisms: First, by blending terrains from the initial dataset within their latent spaces using performance-informed weights, ADTG creates terrains that suitably challenge the policy. Second, by manipulating the initial noise in the diffusion process, ADTG seamlessly shifts between creating similar terrains for fine-tuning the current policy and entirely novel ones for expanding training diversity. Our experiments show that the policy trained by ADTG outperforms those trained in both procedurally generated and natural environments, along with popular navigation methods.
Adaptive Diffusion Terrain Generator for Autonomous Uneven Terrain Navigation
[ "Youwei Yu", "Junhong Xu", "Lantao Liu" ]
Conference
Poster
2410.10766
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://adtg-sim-to-real.github.io
null
https://openreview.net/forum?id=x6eVHn6PIe
@inproceedings{ zhou2024robogolf, title={RoboGolf: Mastering Real-World Minigolf with a Reflective Multi-Modality Vision-Language Model}, author={Hantao Zhou and Tianying Ji and Jianwei Dr. Zhang and Fuchun Sun and Huazhe Xu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=x6eVHn6PIe} }
Minigolf, a game with countless court layouts and complex ball motion, constitutes a compelling real-world testbed for the study of embodied intelligence, as it not only challenges spatial and kinodynamic reasoning but also requires reflective and corrective capacities to address erroneously designed courses. We introduce RoboGolf, a framework that perceives dual-camera visual inputs with nested VLM-empowered closed-loop control and a reflective equilibrium loop. Extensive experiments demonstrate the effectiveness of RoboGolf on challenging minigolf courts, including those that are impossible to finish. Experiment videos are available at https://realworldrobogolf.github.io/.
RoboGolf: Mastering Real-World Minigolf with a Reflective Multi-Modality Vision-Language Model
[ "Hantao Zhou", "Tianying Ji", "Jianwei Dr. Zhang", "Fuchun Sun", "Huazhe Xu" ]
Conference
Poster
2406.10157
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=wcbrhPnOei
@inproceedings{ zargarbashi2024robotkeyframing, title={RobotKeyframing: Learning Locomotion with High-Level Objectives via Mixture of Dense and Sparse Rewards}, author={Fatemeh Zargarbashi and Jin Cheng and Dongho Kang and Robert Sumner and Stelian Coros}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=wcbrhPnOei} }
This paper presents a novel learning-based control framework that uses keyframing to incorporate high-level objectives in natural locomotion for legged robots. These high-level objectives are specified as a variable number of partial or complete pose targets that are spaced arbitrarily in time. Our proposed framework utilizes a multi-critic reinforcement learning algorithm to effectively handle the mixture of dense and sparse rewards. Additionally, it employs a transformer-based encoder to accommodate a variable number of input targets, each associated with specific time-to-arrivals. Throughout simulation and hardware experiments, we demonstrate that our framework can effectively satisfy the target keyframe sequence at the required times. In the experiments, the multi-critic method significantly reduces the effort of hyperparameter tuning compared to the standard single-critic alternative. Moreover, the proposed transformer-based architecture enables robots to anticipate future goals, which results in quantitative improvements in their ability to reach their targets.
RobotKeyframing: Learning Locomotion with High-Level Objectives via Mixture of Dense and Sparse Rewards
[ "Fatemeh Zargarbashi", "Jin Cheng", "Dongho Kang", "Robert Sumner", "Stelian Coros" ]
Conference
Poster
2407.11562
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/robot-keyframing
null
https://openreview.net/forum?id=wTKJge0PTq
@inproceedings{ zhang2024hirt, title={Hi{RT}: Enhancing Robotic Control with Hierarchical Robot Transformers}, author={Jianke Zhang and Yanjiang Guo and Xiaoyu Chen and Yen-Jen Wang and Yucheng Hu and Chengming Shi and Jianyu Chen}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=wTKJge0PTq} }
Large Vision-Language-Action (VLA) models, leveraging powerful pre-trained Vision-Language Model (VLM) backends, have shown promise in robotic control due to their impressive generalization ability. However, the success comes at a cost. Their reliance on VLM backends with billions of parameters leads to high computational costs and inference latency, limiting the testing scenarios to mainly quasi-static tasks and hindering performance in dynamic tasks requiring rapid interactions. To address these limitations, this paper proposes HiRT, a Hierarchical Robot Transformer framework that enables a flexible frequency and performance trade-off. HiRT keeps VLMs running at low frequencies to capture temporally invariant features while enabling real-time interaction through a high-frequency vision-based policy guided by the slowly updated features. Experiment results in both simulation and real-world settings demonstrate significant improvements over baseline methods. Empirically, we achieve a 58% reduction in inference time delay while maintaining comparable success rates. Additionally, on novel dynamic manipulation benchmarks which are challenging for previous VLA models, HiRT improves the success rate from 48% to 75%.
HiRT: Enhancing Robotic Control with Hierarchical Robot Transformers
[ "Jianke Zhang", "Yanjiang Guo", "Xiaoyu Chen", "Yen-Jen Wang", "Yucheng Hu", "Chengming Shi", "Jianyu Chen" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=wSWMsjuMTI
@inproceedings{ liu2024maniwav, title={Mani{WAV}: Learning Robot Manipulation from In-the-Wild Audio-Visual Data}, author={Zeyi Liu and Cheng Chi and Eric Cousineau and Naveen Kuppuswamy and Benjamin Burchfiel and Shuran Song}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=wSWMsjuMTI} }
Audio signals provide rich information about robot interaction and object properties through contact. This information can surprisingly ease the learning of contact-rich robot manipulation skills, especially when visual information alone is ambiguous or incomplete. However, the usage of audio data in robot manipulation has been constrained to teleoperated demonstrations collected by either attaching a microphone to the robot or object, which significantly limits its usage in robot learning pipelines. In this work, we introduce ManiWAV: an 'ear-in-hand' data collection device to collect in-the-wild human demonstrations with synchronous audio and visual feedback, and a corresponding policy interface to learn a robot manipulation policy directly from the demonstrations. We demonstrate the capabilities of our system through four contact-rich manipulation tasks that require either passively sensing the contact events and modes, or actively sensing the object surface materials and states. In addition, we show that our system can generalize to unseen in-the-wild environments, by learning from diverse in-the-wild human demonstrations. All data, code, and policy will be public.
ManiWAV: Learning Robot Manipulation from In-the-Wild Audio-Visual Data
[ "Zeyi Liu", "Cheng Chi", "Eric Cousineau", "Naveen Kuppuswamy", "Benjamin Burchfiel", "Shuran Song" ]
Conference
Poster
2406.19464
[ "https://github.com/real-stanford/maniwav" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://maniwav.github.io/
null
https://openreview.net/forum?id=wH7Wv0nAm8
@inproceedings{ zhao2024bilevel, title={Bi-Level Motion Imitation for Humanoid Robots}, author={Wenshuai Zhao and Yi Zhao and Joni Pajarinen and Michael Muehlebach}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=wH7Wv0nAm8} }
Imitation learning from human motion capture (MoCap) data provides a promising way to train humanoid robots. However, due to differences in morphology, such as varying degrees of joint freedom and force limits, exact replication of human behaviors may not be feasible for humanoid robots. Consequently, incorporating physically infeasible MoCap data in training datasets can adversely affect the performance of the robot policy. To address this issue, we propose a bi-level optimization-based imitation learning framework that alternates between optimizing both the robot policy and the target MoCap data. Specifically, we first develop a generative latent dynamics model using a novel self-consistent auto-encoder, which learns sparse and structured motion representations while capturing desired motion patterns in the dataset. The dynamics model is then utilized to generate reference motions while the latent representation regularizes the bi-level motion imitation process. Simulations conducted with a realistic model of a humanoid robot demonstrate that our method enhances the robot policy by modifying reference motions to be physically consistent.
Bi-Level Motion Imitation for Humanoid Robots
[ "Wenshuai Zhao", "Yi Zhao", "Joni Pajarinen", "Michael Muehlebach" ]
Conference
Poster
2410.01968
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/bmi-corl2024
null
https://openreview.net/forum?id=wD2kUVLT1g
@inproceedings{ wang2024equivariant, title={Equivariant Diffusion Policy}, author={Dian Wang and Stephen Hart and David Surovik and Tarik Kelestemur and Haojie Huang and Haibo Zhao and Mark Yeatman and Jiuguang Wang and Robin Walters and Robert Platt}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=wD2kUVLT1g} }
Recent work has shown diffusion models are an effective approach to learning the multimodal distributions arising from demonstration data in behavior cloning. However, a drawback of this approach is the need to learn a denoising function, which is significantly more complex than learning an explicit policy. In this work, we propose Equivariant Diffusion Policy, a novel diffusion policy learning method that leverages domain symmetries to obtain better sample efficiency and generalization in the denoising function. We theoretically analyze the $\mathrm{SO}(2)$ symmetry of full 6-DoF control and characterize when a diffusion model is $\mathrm{SO}(2)$-equivariant. We furthermore evaluate the method empirically on a set of 12 simulation tasks in MimicGen, and show that it obtains a success rate that is, on average, 21.9\% higher than the baseline Diffusion Policy. We also evaluate the method on a real-world system to show that effective policies can be learned with relatively few training samples, whereas the baseline Diffusion Policy cannot.
Equivariant Diffusion Policy
[ "Dian Wang", "Stephen Hart", "David Surovik", "Tarik Kelestemur", "Haojie Huang", "Haibo Zhao", "Mark Yeatman", "Jiuguang Wang", "Robin Walters", "Robert Platt" ]
Conference
Oral
2407.01812
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://equidiff.github.io
null
https://openreview.net/forum?id=vtEn8NJWlz
@inproceedings{ chisari2024learning, title={Learning Robotic Manipulation Policies from Point Clouds with Conditional Flow Matching}, author={Eugenio Chisari and Nick Heppert and Max Argus and Tim Welschehold and Thomas Brox and Abhinav Valada}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=vtEn8NJWlz} }
Learning from expert demonstrations is a popular approach to train robotic manipulation policies from limited data. However, imitation learning algorithms require a number of design choices, including the input modality, training objective, and 6-DoF end-effector pose representation. Diffusion-based methods have gained popularity as they allow predicting long-horizon trajectories and handling multimodal action distributions. Recently, Conditional Flow Matching (CFM) (or Rectified Flow) has been proposed as a more flexible generalization of diffusion models. In this paper we investigate the application of CFM in the context of robotic policy learning, and specifically study the interplay with the other design choices required to build an imitation learning algorithm. We show that CFM gives the best performance when combined with point cloud input observations. Additionally, we study the feasibility of a CFM formulation on the SO(3) manifold and evaluate its suitability with a simplified example. We perform extensive experiments on RLBench which demonstrate that our proposed PointFlowMatch approach achieves a state-of-the-art average success rate of 67.8% over eight tasks, double the performance of the next best method.
Learning Robotic Manipulation Policies from Point Clouds with Conditional Flow Matching
[ "Eugenio Chisari", "Nick Heppert", "Max Argus", "Tim Welschehold", "Thomas Brox", "Abhinav Valada" ]
Conference
Poster
2409.07343
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
http://pointflowmatch.cs.uni-freiburg.de
null
https://openreview.net/forum?id=vobaOY0qDl
@inproceedings{ bruedigam2024a, title={A Versatile Planner for Learning Dexterous and Whole-body Manipulation}, author={Jan Bruedigam and Ali Adeeb Abbas and Maks Sorokin and Kuan Fang and Brandon Hung and Maya Guru and Stefan Georg Sosnowski and Jiuguang Wang and Sandra Hirche and Simon Le Cleac'h}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=vobaOY0qDl} }
Robotic manipulation is challenging due to discontinuous dynamics, as well as high-dimensional state and action spaces. Data-driven approaches that succeed in manipulation tasks require large amounts of data and expert demonstrations, typically from humans. Existing planners are restricted to specific systems and often depend on specialized algorithms for using demonstrations. Therefore, we introduce a flexible motion planner tailored to dexterous and whole-body manipulation tasks. Our planner creates readily usable demonstrations for reinforcement learning algorithms, eliminating the need for additional training pipeline complexities. With this approach, we can efficiently learn policies for complex manipulation tasks, where traditional reinforcement learning alone only makes little progress. Furthermore, we demonstrate that learned policies are transferable to real robotic systems for solving complex dexterous manipulation tasks. Project website: https://jacta-manipulation.github.io/
Jacta: A Versatile Planner for Learning Dexterous and Whole-body Manipulation
[ "Jan Bruedigam", "Ali Adeeb Abbas", "Maks Sorokin", "Kuan Fang", "Brandon Hung", "Maya Guru", "Stefan Georg Sosnowski", "Jiuguang Wang", "Sandra Hirche", "Simon Le Cleac'h" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://jacta-manipulation.github.io/
null
https://openreview.net/forum?id=vhGkyWgctu
@inproceedings{ pandit2024learning, title={Learning Decentralized Multi-Biped Control for Payload Transport}, author={Bikram Pandit and Ashutosh Gupta and Mohitvishnu S. Gadde and Addison Johnson and Aayam Kumar Shrestha and Helei Duan and Jeremy Dao and Alan Fern}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=vhGkyWgctu} }
Payload transport over flat terrain via multi-wheel robot carriers is well-understood, highly effective, and configurable. In this paper, our goal is to provide similar effectiveness and configurability for transport over rough terrain that is more suitable for legs rather than wheels. For this purpose, we consider multi-biped robot carriers, where wheels are replaced by multiple bipedal robots attached to the carrier. Our main contribution is to design a decentralized controller for such systems that can be effectively applied to varying numbers and configurations of rigidly attached bipedal robots without retraining. We present a reinforcement learning approach for training the controller in simulation that supports transfer to the real world. Our experiments in simulation provide quantitative metrics showing the effectiveness of the approach over a wide variety of simulated transport scenarios. In addition, we demonstrate the controller in the real-world for systems composed of two and three Cassie robots. To our knowledge, this is the first example of a scalable multi-biped payload transport system.
Learning Decentralized Multi-Biped Control for Payload Transport
[ "Bikram Pandit", "Ashutosh Gupta", "Mohitvishnu S. Gadde", "Addison Johnson", "Aayam Kumar Shrestha", "Helei Duan", "Jeremy Dao", "Alan Fern" ]
Conference
Poster
2406.17279
[ "https://github.com/osudrl/roadrunner/tree/paper/decmbc" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://decmbc.github.io
null
https://openreview.net/forum?id=vBj5oC60Lk
@inproceedings{ stachowicz2024lifelong, title={Lifelong Autonomous Improvement of Navigation Foundation Models in the Wild}, author={Kyle Stachowicz and Lydia Ignatova and Sergey Levine}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=vBj5oC60Lk} }
Recent works have proposed a number of general-purpose robotic foundation models that can control a variety of robotic platforms to perform a range of different tasks, including in the domains of navigation and manipulation. However, such models are typically trained via imitation learning, which precludes the ability to improve autonomously through experience that the robot gathers on the job. In this work, our aim is to train general-purpose robotic foundation models in the domain of robotic navigation specifically with the aim of enabling autonomous self-improvement. We show that a combination of pretraining with offline reinforcement learning and a complete system for continual autonomous operation leads to a robotic learning framework that not only starts off with broad and diverse capabilities, but can further improve and adapt those capabilities in the course of carrying out navigational tasks in a given deployment location. To our knowledge, our model LiReN is the first navigation robot foundation model that is capable of fine-tuning with autonomous online data in open-world settings.
Lifelong Autonomous Improvement of Navigation Foundation Models in the Wild
[ "Kyle Stachowicz", "Lydia Ignatova", "Sergey Levine" ]
Conference
Poster
[ "https://github.com/kylestach/lifelong-nav-rl" ]
https://huggingface.co/papers/2407.20798
1
23
2
4
[]
[]
[]
[]
[]
[]
1
https://kylestach.github.io/lifelong-nav-rl/
null
https://openreview.net/forum?id=ueBmGhLOXP
@inproceedings{ yang2024equibot, title={EquiBot: {SIM}(3)-Equivariant Diffusion Policy for Generalizable and Data Efficient Learning}, author={Jingyun Yang and Ziang Cao and Congyue Deng and Rika Antonova and Shuran Song and Jeannette Bohg}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=ueBmGhLOXP} }
Building effective imitation learning methods that enable robots to learn from limited data and still generalize across diverse real-world environments is a long-standing problem in robot learning. We propose EquiBot, a robust, data-efficient, and generalizable approach for robot manipulation task learning. Our approach combines SIM(3)-equivariant neural network architectures with diffusion models. This ensures that our learned policies are invariant to changes in scale, rotation, and translation, enhancing their applicability to unseen environments while retaining the benefits of diffusion-based policy learning such as multi-modality and robustness. We show on a suite of 6 simulation tasks that our proposed method reduces the data requirements and improves generalization to novel scenarios. In the real world, with 10 variations of 6 mobile manipulation tasks, we show that our method can easily generalize to novel objects and scenes after learning from just 5 minutes of human demonstrations in each task.
EquiBot: SIM(3)-Equivariant Diffusion Policy for Generalizable and Data Efficient Learning
[ "Jingyun Yang", "Ziang Cao", "Congyue Deng", "Rika Antonova", "Shuran Song", "Jeannette Bohg" ]
Conference
Poster
2407.01479
[ "https://github.com/yjy0625/equibot" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
http://equi-bot.github.io
null
https://openreview.net/forum?id=ubq7Co6Cbv
@inproceedings{ quach2024gaussian, title={Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks}, author={Alex Quach and Makram Chahine and Alexander Amini and Ramin Hasani and Daniela Rus}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=ubq7Co6Cbv} }
Simulators are powerful tools for autonomous robot learning as they offer scalable data generation, flexible design, and optimization of trajectories. However, transferring behavior learned from simulation data into the real world proves to be difficult, usually mitigated with compute-heavy domain randomization methods or further model fine-tuning. We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks. To this end, we first build a simulator by integrating Gaussian Splatting with quadrotor flight dynamics, and then, train robust navigation policies using Liquid neural networks. In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, crafty programming of expert demonstration training data, and the task understanding capabilities of Liquid networks. Through a series of quantitative flight tests, we demonstrate the robust transfer of navigation skills learned in a single simulation scene directly to the real world. We further show the ability to maintain performance beyond the training environment under drastic distribution and physical environment changes. Our learned Liquid policies, trained on single target maneuvers curated from a photorealistic simulated indoor flight only, generalize to multi-step hikes onboard a real hardware platform outdoors.
Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks
[ "Alex Quach", "Makram Chahine", "Alexander Amini", "Ramin Hasani", "Daniela Rus" ]
Conference
Poster
2406.15149
[ "https://github.com/alexquach/multienv_sim" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/gs2real-flight/home
null
https://openreview.net/forum?id=uMZ2jnZUDX
@inproceedings{ long2024learning, title={Learning H-Infinity Locomotion Control}, author={Junfeng Long and Wenye Yu and Quanyi Li and ZiRui Wang and Dahua Lin and Jiangmiao Pang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=uMZ2jnZUDX} }
Stable locomotion in precipitous environments is an essential task for quadruped robots, requiring the ability to resist various external disturbances. Recent neural policies enhance robustness against disturbances by learning to resist external forces sampled from a fixed distribution in the simulated environment. However, the force generation process doesn’t consider the robot’s current state, making it difficult to identify the most effective direction and magnitude that can push the robot to the most unstable but recoverable state. Thus, challenging cases in the buffer are insufficient to optimize robustness. In this paper, we propose to model the robust locomotion learning process as an adversarial interaction between the locomotion policy and a learnable disturbance that is conditioned on the robot state to generate appropriate external forces. To make the joint optimization stable, our novel $H_{\infty}$ constraint mandates the bound of the ratio between the cost and the intensity of the external forces. We verify the robustness of our approach in both simulated environments and real-world deployment, on quadrupedal locomotion tasks and a more challenging task where the quadruped performs locomotion merely on hind legs. Training and deployment code will be made public.
Learning H-Infinity Locomotion Control
[ "Junfeng Long", "Wenye Yu", "Quanyi Li", "ZiRui Wang", "Dahua Lin", "Jiangmiao Pang" ]
Conference
Poster
2404.14405
[ "https://github.com/openrobotlab/himloco" ]
https://huggingface.co/papers/2404.14405
4
6
1
6
[]
[]
[]
[]
[]
[]
1
https://junfeng-long.github.io/HINF/
null
https://openreview.net/forum?id=uJBMZ6S02T
@inproceedings{ cai2024realtosim, title={Real-to-Sim Grasp: Rethinking the Gap between Simulation and Real World in Grasp Detection}, author={Jia-Feng Cai and Zibo Chen and Xiao-Ming Wu and Jian-Jian Jiang and Yi-Lin Wei and Wei-Shi Zheng}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=uJBMZ6S02T} }
For 6-DoF grasp detection, simulated data can be scaled up to train more powerful models, but it faces the challenge of the large gap between simulation and the real world. Previous works bridge this gap in a sim-to-real way. However, this way explicitly or implicitly forces the simulated data to adapt to the noisy real data when training grasp detectors, where the positional drift and structural distortion within the camera noise will harm the grasp learning. In this work, we propose a Real-to-Sim framework for 6-DoF Grasp detection, named R2SGrasp, with the key insight of bridging this gap in a real-to-sim way, which directly bypasses the camera noise in grasp detector training through an inference-time real-to-sim adaptation. To achieve this real-to-sim adaptation, our R2SGrasp designs the Real-to-Sim Data Repairer (R2SRepairer) to mitigate the camera noise of real depth maps at the data level, and the Real-to-Sim Feature Enhancer (R2SEnhancer) to enhance real features with precise simulated geometric primitives at the feature level. To endow our framework with the generalization ability, we construct a large-scale simulated dataset cost-efficiently to train our grasp detector, which includes 64,000 RGB-D images with 14.4 million grasp annotations. Sufficient experiments show that R2SGrasp is powerful and our real-to-sim perspective is effective. The real-world experiments further show the great generalization ability of R2SGrasp. The project page is available at https://isee-laboratory.github.io/R2SGrasp.
Real-to-Sim Grasp: Rethinking the Gap between Simulation and Real World in Grasp Detection
[ "Jia-Feng Cai", "Zibo Chen", "Xiao-Ming Wu", "Jian-Jian Jiang", "Yi-Lin Wei", "Wei-Shi Zheng" ]
Conference
Poster
2410.06521
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://isee-laboratory.github.io/R2SGrasp
null
https://openreview.net/forum?id=uHdVI3QMr6
@inproceedings{ sikchi2024a, title={A Dual Approach to Imitation Learning from Observations with Offline Datasets}, author={Harshit Sikchi and Caleb Chuck and Amy Zhang and Scott Niekum}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=uHdVI3QMr6} }
Demonstrations are an effective alternative to task specification for learning agents in settings where designing a reward function is difficult. However, demonstrating expert behavior in the action space of the agent becomes unwieldy when robots have complex, unintuitive morphologies. We consider the practical setting where an agent has a dataset of prior interactions with the environment and is provided with observation-only expert demonstrations. Typical learning from observations approaches have required either learning an inverse dynamics model or a discriminator as intermediate steps of training. Errors in these intermediate one-step models compound during downstream policy learning or deployment. We overcome these limitations by directly learning a multi-step utility function that quantifies how each action impacts the agent's divergence from the expert's visitation distribution. Using the principle of duality, we derive DILO (Dual Imitation Learning from Observations), an algorithm that can leverage arbitrary suboptimal data to learn imitating policies without requiring expert actions. DILO reduces the learning from observations problem to that of simply learning an actor and a critic, bearing similar complexity to vanilla offline RL. This allows DILO to gracefully scale to high dimensional observations, and demonstrate improved performance across the board.
A Dual Approach to Imitation Learning from Observations with Offline Datasets
[ "Harshit Sikchi", "Caleb Chuck", "Amy Zhang", "Scott Niekum" ]
Conference
Poster
2406.08805
[ "https://github.com/hari-sikchi/DILO" ]
https://huggingface.co/papers/2303.17156
0
0
0
3
[]
[]
[]
[]
[]
[]
1
https://hari-sikchi.github.io/dilo/
null
https://openreview.net/forum?id=uEbJXWobif
@inproceedings{ zhang2024extract, title={{EXTRACT}: Efficient Policy Learning by Extracting Transferrable Robot Skills from Offline Data}, author={Jesse Zhang and Minho Heo and Zuxin Liu and Erdem Biyik and Joseph J Lim and Yao Liu and Rasool Fakoor}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=uEbJXWobif} }
Most reinforcement learning (RL) methods focus on learning optimal policies over low-level action spaces. While these methods can perform well in their training environments, they lack the flexibility to transfer to new tasks. Instead, RL agents that can act over useful, temporally extended skills rather than low-level actions can learn new tasks more easily. Prior work in skill-based RL either requires expert supervision to define useful skills, which is hard to scale, or learns a skill-space from offline data with heuristics that limit the adaptability of the skills, making them difficult to transfer during downstream RL. Our approach, EXTRACT, instead utilizes pre-trained vision language models to extract a discrete set of semantically meaningful skills from offline data, each of which is parameterized by continuous arguments, without human supervision. This skill parameterization allows robots to learn new tasks by only needing to learn when to select a specific skill and how to modify its arguments for the specific task. We demonstrate through experiments in sparse-reward, image-based, robot manipulation environments that EXTRACT can more quickly learn new tasks than prior works, with major gains in sample efficiency and performance over prior skill-based RL.
EXTRACT: Efficient Policy Learning by Extracting Transferable Robot Skills from Offline Data
[ "Jesse Zhang", "Minho Heo", "Zuxin Liu", "Erdem Biyik", "Joseph J Lim", "Yao Liu", "Rasool Fakoor" ]
Conference
Poster
2406.17768
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
jessezhang.net/projects/extract
null
https://openreview.net/forum?id=ty1cqzTtUv
@inproceedings{ sundaresan2024rtsketch, title={{RT}-Sketch: Goal-Conditioned Imitation Learning from Hand-Drawn Sketches}, author={Priya Sundaresan and Quan Vuong and Jiayuan Gu and Peng Xu and Ted Xiao and Sean Kirmani and Tianhe Yu and Michael Stark and Ajinkya Jain and Karol Hausman and Dorsa Sadigh and Jeannette Bohg and Stefan Schaal}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=ty1cqzTtUv} }
Natural language and images are commonly used as goal representations in goal-conditioned imitation learning. However, language can be ambiguous and images can be over-specified. In this work, we study hand-drawn sketches as a modality for goal specification. Sketches can be easy to provide on the fly like language, but like images they can also help a downstream policy to be spatially-aware. By virtue of being minimal, sketches can further help disambiguate task-relevant from irrelevant objects. We present RT-Sketch, a goal-conditioned policy for manipulation that takes a hand-drawn sketch of the desired scene as input, and outputs actions. We train RT-Sketch on a dataset of trajectories paired with synthetically generated goal sketches. We evaluate this approach on six manipulation skills involving tabletop object rearrangements on an articulated countertop. Experimentally we find that RT-Sketch performs comparably to image or language-conditioned agents in straightforward settings, while achieving greater robustness when language goals are ambiguous or visual distractors are present. Additionally, we show that RT-Sketch handles sketches with varied levels of specificity, ranging from minimal line drawings to detailed, colored drawings. For supplementary material and videos, please visit http://rt-sketch.github.io.
RT-Sketch: Goal-Conditioned Imitation Learning from Hand-Drawn Sketches
[ "Priya Sundaresan", "Quan Vuong", "Jiayuan Gu", "Peng Xu", "Ted Xiao", "Sean Kirmani", "Tianhe Yu", "Michael Stark", "Ajinkya Jain", "Karol Hausman", "Dorsa Sadigh", "Jeannette Bohg", "Stefan Schaal" ]
Conference
Oral
2403.02709
[ "" ]
https://huggingface.co/papers/2403.02709
6
7
1
13
[]
[]
[]
[]
[]
[]
1
rt-sketch.github.io
null
https://openreview.net/forum?id=tqsQGrmVEu
@inproceedings{ tian2024viewinvariant, title={View-Invariant Policy Learning via Zero-Shot Novel View Synthesis}, author={Stephen Tian and Blake Wulfe and Kyle Sargent and Katherine Liu and Sergey Zakharov and Vitor Campagnolo Guizilini and Jiajun Wu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=tqsQGrmVEu} }
Large-scale visuomotor policy learning is a promising approach toward developing generalizable manipulation systems. Yet, policies that can be deployed on diverse embodiments, environments, and observational modalities remain elusive. In this work, we investigate how knowledge from large-scale visual data of the world may be used to address one axis of variation for generalizable manipulation: observational viewpoint. Specifically, we study single-image novel view synthesis models, which learn 3D-aware scene-level priors by rendering images of the same scene from alternate camera viewpoints given a single input image. For practical application to diverse robotic data, these models must operate *zero-shot*, performing view synthesis on unseen tasks and environments. We empirically analyze view synthesis models within a simple data-augmentation scheme that we call View Synthesis Augmentation (VISTA) to understand their capabilities for learning viewpoint-invariant policies from single-viewpoint demonstration data. Upon evaluating the robustness of policies trained with our method to out-of-distribution camera viewpoints, we find that they outperform baselines in both simulated and real-world manipulation tasks.
View-Invariant Policy Learning via Zero-Shot Novel View Synthesis
[ "Stephen Tian", "Blake Wulfe", "Kyle Sargent", "Katherine Liu", "Sergey Zakharov", "Vitor Campagnolo Guizilini", "Jiajun Wu" ]
Conference
Poster
2409.03685
[ "https://github.com/s-tian/VISTA" ]
https://huggingface.co/papers/2409.03685
0
1
0
7
[]
[]
[]
[]
[]
[]
1
https://s-tian.github.io/projects/vista/
null
https://openreview.net/forum?id=t0LkF9JnVb
@inproceedings{ qian2024pianomime, title={PianoMime: Learning a Generalist, Dexterous Piano Player from Internet Demonstrations}, author={Cheng Qian and Julen Urain and Kevin Zakka and Jan Peters}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=t0LkF9JnVb} }
In this work, we introduce PianoMime, a framework for training a piano-playing agent using internet demonstrations. The internet is a promising source of large-scale demonstrations for training robot agents. In particular, for the case of piano playing, YouTube is full of videos of professional pianists playing a myriad of songs. In our work, we leverage these demonstrations to learn a generalist piano-playing agent capable of playing any arbitrary song. Our framework is divided into three parts: a data preparation phase to extract informative features from the YouTube videos, a policy learning phase to train song-specific expert policies from the demonstrations, and a policy distillation phase to distill the policies into a single generalist agent. We explore different policy designs to represent the agent and evaluate the influence of the amount of training data on the generalization capability of the agent to novel songs not available in the dataset. We show that we are able to learn a policy with up to a 57% F1 score on unseen songs.
PianoMime: Learning a Generalist, Dexterous Piano Player from Internet Demonstrations
[ "Cheng Qian", "Julen Urain", "Kevin Zakka", "Jan Peters" ]
Conference
Poster
2407.18178
[ "https://github.com/sNiper-Qian/pianomime" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
pianomime.github.io
null
https://openreview.net/forum?id=s31IWg2kN5
@inproceedings{ agha2024exploring, title={Exploring Under Constraints with Model-Based Actor-Critic and Safety Filters}, author={Ahmed Agha and Baris Kayalibay and Atanas Mirchev and Patrick van der Smagt and Justin Bayer}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=s31IWg2kN5} }
Applying reinforcement learning (RL) to learn effective policies on physical robots without supervision remains challenging when it comes to tasks where safe exploration is critical. Constrained model-based RL (CMBRL) presents a promising approach to this problem. These methods are designed to learn constraint-adhering policies through constrained optimization approaches. Yet, such policies often fail to meet stringent safety requirements during learning and exploration. Our solution ``CASE'' aims to reduce the instances where constraints are breached during the learning phase. Specifically, CASE integrates techniques for optimizing constrained policies and employs planning-based safety filters as backup policies, effectively lowering constraint violations during learning and making it a more reliable option than other recent constrained model-based policy optimization methods.
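The following hedged sketch shows the general pattern CASE builds on: execute a constrained learned policy, but route its action through a planning-based safety filter that falls back to a backup policy whenever a short model rollout predicts a constraint violation. The toy dynamics, constraint, and controllers below are assumptions for illustration, not the paper's implementation.

```python
# Toy safety-filter pattern: accept the nominal action only if the backup
# policy can keep the predicted rollout inside the constraint set.
import numpy as np

def dynamics(x, u):                  # assumed (learned) model: x' = x + u
    return x + u

def constraint_ok(x):                # stay inside the box |x_i| <= 1
    return np.all(np.abs(x) <= 1.0)

def nominal_policy(x):               # possibly unsafe learned policy
    return np.clip(-0.5 * x + np.random.normal(0, 0.3, size=x.shape), -1, 1)

def backup_policy(x):                # conservative controller: contract toward 0
    return -0.2 * x

def safety_filtered_action(x, horizon=5):
    u0 = nominal_policy(x)
    x_pred = dynamics(x, u0)                              # effect of the candidate action
    for _ in range(horizon):
        if not constraint_ok(x_pred):
            return backup_policy(x)                       # predicted violation: use the backup now
        x_pred = dynamics(x_pred, backup_policy(x_pred))  # backup takes over afterwards
    return u0 if constraint_ok(x_pred) else backup_policy(x)

x = np.array([0.9, -0.8])
print(safety_filtered_action(x))
```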
Exploring Under Constraints with Model-Based Actor-Critic and Safety Filters
[ "Ahmed Agha", "Baris Kayalibay", "Atanas Mirchev", "Patrick van der Smagt", "Justin Bayer" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=s0vHSq5QEv
@inproceedings{ dong2024generalizing, title={Generalizing End-To-End Autonomous Driving In Real-World Environments Using Zero-Shot {LLM}s}, author={Zeyu Dong and Yimin Zhu and Yansong Li and Kevin Mahon and Yu Sun}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=s0vHSq5QEv} }
Traditional autonomous driving methods adopt a modular design, decomposing tasks into sub-tasks, including perception, prediction, planning, and control. In contrast, end-to-end autonomous driving directly outputs actions from raw sensor data, avoiding error accumulation. However, training an end-to-end model requires a comprehensive dataset. Without adequate data, the end-to-end model exhibits poor generalization capabilities. Recently, large language models (LLMs) have been applied to enhance the generalization property of end-to-end driving models. Most studies explore LLMs in an open-loop manner, where the output actions are compared to those of experts without direct execution in the real world. Other studies in closed-loop settings examine their results in simulated environments. In comparison, this paper proposes an efficient architecture that integrates multimodal LLMs into end-to-end real-world driving models in a closed-loop setting. The LLM periodically takes raw sensor data to generate high-level driving instructions. In our architecture, LLMs can effectively guide the end-to-end model, even at a slower rate than the raw sensor data, because instruction updates are not needed at every frame. This architecture relaxes the trade-off between the latency and inference quality of the LLM. It also allows us to choose a wide variety of LLMs to improve high-level driving instructions and minimize fine-tuning costs. Consequently, our architecture reduces the data collection requirements because the LLMs do not directly output actions, and we only need to train a simple imitation learning model to output actions. In our experiments, the training data for the end-to-end model in a real-world environment consists of only simple obstacle configurations with one traffic cone, while the test environment is more complex and contains different types of obstacles. Experiments show that the proposed architecture enhances the generalization capabilities of the end-to-end model even without fine-tuning the LLM.
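A minimal sketch of the two-rate loop this abstract describes: the low-level imitation policy consumes sensor data every frame, while the slower LLM only refreshes the high-level instruction every `llm_period` frames. The `query_llm` and `il_policy` functions below are placeholders and not a real model API.

```python
# Two-rate control loop: fast imitation-learned controller, slow LLM guidance.
import numpy as np

def query_llm(frame) -> str:
    # placeholder for a multimodal LLM call returning a driving instruction
    return "steer slightly left around the cone" if frame.mean() > 0 else "go straight"

def il_policy(frame, instruction: str) -> np.ndarray:
    # placeholder imitation-learned controller: (steering, throttle)
    bias = -0.1 if "left" in instruction else 0.0
    return np.array([bias, 0.3])

llm_period = 30                                  # refresh instruction ~1 Hz at 30 FPS
instruction = "go straight"
for t in range(90):
    frame = np.random.randn(64, 64)              # stand-in camera frame
    if t % llm_period == 0:
        instruction = query_llm(frame)           # slow; asynchronous in practice
    action = il_policy(frame, instruction)       # fast; runs every frame
```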
Generalizing End-To-End Autonomous Driving In Real-World Environments Using Zero-Shot LLMs
[ "Zeyu Dong", "Yimin Zhu", "Yansong Li", "Kevin Mahon", "Yu Sun" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=s0VNSnPeoA
@inproceedings{ thumm2024textinteraction, title={Text2Interaction: Establishing Safe and Preferable Human-Robot Interaction}, author={Jakob Thumm and Christopher Agia and Marco Pavone and Matthias Althoff}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=s0VNSnPeoA} }
Adjusting robot behavior to human preferences can require intensive human feedback, preventing quick adaptation to new users and changing circumstances. Moreover, current approaches typically treat user preferences as a reward, which requires a manual balance between task success and user satisfaction. To integrate new user preferences in a zero-shot manner, our proposed Text2Interaction framework invokes large language models to generate a task plan, motion preferences as Python code, and parameters of a safety controller. By maximizing the combined probability of task completion and user satisfaction instead of a weighted sum of rewards, we can reliably find plans that fulfill both requirements. We find that 83% of users working with Text2Interaction agree that it integrates their preferences into the plan of the robot, and 94% prefer Text2Interaction over the baseline. Our ablation study shows that Text2Interaction aligns better with unseen preferences than other baselines while maintaining a high success rate. Real-world demonstrations and code are made available at [sites.google.com/view/text2interaction](sites.google.com/view/text2interaction).
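A small sketch of the plan-selection idea in this abstract: rank candidate plans by the joint probability of task completion and user satisfaction rather than a weighted sum of rewards. The candidate plans and probabilities are made-up numbers for illustration.

```python
# Compare plan selection by joint probability vs. by a weighted sum of terms.
candidates = {
    "hand over the mug handle-first":       {"p_success": 0.75, "p_satisfied": 0.75},
    "hand over the mug along fastest path": {"p_success": 0.99, "p_satisfied": 0.45},
    "ask the user to take the mug":         {"p_success": 0.55, "p_satisfied": 0.90},
}

def joint_score(c):                        # the product Text2Interaction maximizes
    return c["p_success"] * c["p_satisfied"]

def weighted_sum(c, w=0.7):                # the baseline trade-off it avoids
    return w * c["p_success"] + (1 - w) * c["p_satisfied"]

print("joint-probability choice:", max(candidates, key=lambda k: joint_score(candidates[k])))
print("weighted-sum choice:     ", max(candidates, key=lambda k: weighted_sum(candidates[k])))
```

With these illustrative numbers the two criteria pick different plans, which is the point of the product formulation: it needs no hand-tuned trade-off weight between task success and satisfaction.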
Text2Interaction: Establishing Safe and Preferable Human-Robot Interaction
[ "Jakob Thumm", "Christopher Agia", "Marco Pavone", "Matthias Althoff" ]
Conference
Poster
2408.06105
[ "https://github.com/JakobThumm/text2interaction" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/text2interaction/
null
https://openreview.net/forum?id=rvKWXxIvj0
@inproceedings{ cai2024nonrigid, title={Non-rigid Relative Placement through 3D Dense Diffusion}, author={Eric Cai and Octavian Donca and Ben Eisner and David Held}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=rvKWXxIvj0} }
The task of "relative placement" is to predict the placement of one object in relation to another, e.g. placing a mug on a mug rack. Recent methods for relative placement have made tremendous progress towards data-efficient learning for robot manipulation; using explicit object-centric geometric reasoning, these approaches enable generalization to unseen task variations from a small number of demonstrations. State-of-the-art works in this area, however, have yet to represent deformable transformations, despite the ubiquity of non-rigid bodies in real world settings. As a first step towards bridging this gap, we propose "cross-displacement" - an extension of the principles of relative placement to geometric relationships between deformable objects - and present a novel vision-based method to learn cross-displacement for a non-rigid task through dense diffusion. To this end, we demonstrate our method's ability to generalize to unseen object instances, out-of-distribution scene configurations, and multimodal goals on a highly deformable cloth-hanging task beyond the scope of prior works.
Non-rigid Relative Placement through 3D Dense Diffusion
[ "Eric Cai", "Octavian Donca", "Ben Eisner", "David Held" ]
Conference
Poster
2410.19247
[ "" ]
https://huggingface.co/papers/2401.09048
3
8
2
5
[]
[]
[]
[]
[]
[]
1
https://sites.google.com/view/tax3d-corl-2024
null
https://openreview.net/forum?id=rY5T2aIjPZ
@inproceedings{ xie2024deligrasp, title={DeliGrasp: Inferring Object Properties with {LLM}s for Adaptive Grasp Policies}, author={William Xie and Maria Valentini and Jensen Lavering and Nikolaus Correll}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=rY5T2aIjPZ} }
Large language models (LLMs) can provide rich physical descriptions of most worldly objects, allowing robots to achieve more informed and capable grasping. We leverage LLMs' common sense physical reasoning and code-writing abilities to infer an object's physical characteristics (mass $m$, friction coefficient $\mu$, and spring constant $k$) from a semantic description, and then translate those characteristics into an executable adaptive grasp policy. Using a two-finger gripper with a built-in depth camera that can control its torque by limiting motor current, we demonstrate that LLM-parameterized but first-principles grasp policies outperform both traditional adaptive grasp policies and direct LLM-as-code policies on a custom benchmark of 12 delicate and deformable items including food, produce, toys, and other everyday items, spanning two orders of magnitude in mass and required pick-up force. We then improve property estimation and grasp performance on variable size objects with model finetuning on property-based comparisons and eliciting such comparisons via chain-of-thought prompting. We also demonstrate how compliance feedback from DeliGrasp policies can aid in downstream tasks such as measuring produce ripeness. Our code and videos are available at: https://deligrasp.github.io
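A hedged sketch of the kind of first-principles mapping this abstract describes: take LLM-inferred properties (mass $m$, friction coefficient $\mu$, spring constant $k$) and turn them into a grip-force target and an expected squeeze depth. The safety factor and the example numbers are assumptions, not values from the paper.

```python
# Map inferred object properties to a two-finger grasp force and squeeze depth.
G = 9.81  # gravitational acceleration, m/s^2

def grasp_parameters(m_kg: float, mu: float, k_n_per_m: float, safety: float = 1.5):
    """Antipodal grasp: friction at both finger contacts must support m*g."""
    min_normal_force = m_kg * G / (2.0 * mu)   # per-finger normal force to avoid slip
    target_force = safety * min_normal_force   # add a margin against slip
    squeeze_depth = target_force / k_n_per_m   # Hooke's law: expected compression
    return target_force, squeeze_depth

# e.g. hypothetical LLM output for a ripe tomato (illustrative values only)
force, depth = grasp_parameters(m_kg=0.12, mu=0.5, k_n_per_m=800.0)
print(f"target grip force: {force:.2f} N, expected compression: {depth * 1000:.1f} mm")
```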
DeliGrasp: Inferring Object Properties with LLMs for Adaptive Grasp Policies
[ "William Xie", "Maria Valentini", "Jensen Lavering", "Nikolaus Correll" ]
Conference
Poster
2403.07832
[ "https://github.com/deligrasp/deligrasp" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
deligrasp.github.io
null
https://openreview.net/forum?id=rThtgkXuvZ
@inproceedings{ cheng2024nodtamp, title={{NOD}-{TAMP}: Generalizable Long-Horizon Planning with Neural Object Descriptors}, author={Shuo Cheng and Caelan Reed Garrett and Ajay Mandlekar and Danfei Xu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=rThtgkXuvZ} }
Solving complex manipulation tasks in household and factory settings remains challenging due to long-horizon reasoning, fine-grained interactions, and broad object and scene diversity. Learning skills from demonstrations can be an effective strategy, but such methods often have limited generalizability beyond training data and struggle to solve long-horizon tasks. To overcome this, we propose to synergistically combine two paradigms: Neural Object Descriptors (NODs) that produce generalizable object-centric features and Task and Motion Planning (TAMP) frameworks that chain short-horizon skills to solve multi-step tasks. We introduce NOD-TAMP, a TAMP-based framework that extracts short manipulation trajectories from a handful of human demonstrations, adapts these trajectories using NOD features, and composes them to solve broad long-horizon, contact-rich tasks. NOD-TAMP solves existing manipulation benchmarks with a handful of demonstrations and significantly outperforms prior NOD-based approaches on new tabletop manipulation tasks that require diverse generalization. Finally, we deploy NOD-TAMP on a number of real-world tasks, including tool-use and high-precision insertion. For more details, please visit https://nodtamp.github.io/.
NOD-TAMP: Generalizable Long-Horizon Planning with Neural Object Descriptors
[ "Shuo Cheng", "Caelan Reed Garrett", "Ajay Mandlekar", "Danfei Xu" ]
Conference
Poster
2311.01530
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://nodtamp.github.io/
null
https://openreview.net/forum?id=rRpmVq6yHv
@inproceedings{ hirose2024selfi, title={{SELFI}: Autonomous Self-Improvement with {RL} for Vision-Based Navigation around People}, author={Noriaki Hirose and Dhruv Shah and Kyle Stachowicz and Ajay Sridhar and Sergey Levine}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=rRpmVq6yHv} }
Autonomous self-improving robots that interact and improve with experience are key to the real-world deployment of robotic systems. In this paper, we propose an online learning method, SELFI, that leverages online robot experience to rapidly and efficiently fine-tune pre-trained control policies. SELFI applies online model-free reinforcement learning on top of offline model-based learning to bring out the best parts of both learning paradigms. Specifically, SELFI stabilizes the online learning process by incorporating the same model-based learning objective from offline pre-training into the Q-values learned with online model-free reinforcement learning. We evaluate SELFI in multiple real-world environments and report improvements in terms of collision avoidance, as well as more socially compliant behavior, measured by a human user study. SELFI enables us to quickly learn useful robotic behaviors with fewer human interventions, such as pre-emptive behavior around pedestrians, collision avoidance for small and transparent objects, and avoiding travel on uneven floor surfaces. We provide supplementary videos to demonstrate the performance of our fine-tuned policy.
SELFI: Autonomous Self-Improvement with RL for Vision-Based Navigation around People
[ "Noriaki Hirose", "Dhruv Shah", "Kyle Stachowicz", "Ajay Sridhar", "Sergey Levine" ]
Conference
Oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/selfi-rl/
null
https://openreview.net/forum?id=rEteJcq61j
@inproceedings{ liao2024toward, title={Toward General Object-level Mapping from Sparse Views with 3D Diffusion Priors}, author={Ziwei Liao and Binbin Xu and Steven L. Waslander}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=rEteJcq61j} }
Object-level mapping builds a 3D map of objects in a scene with detailed shapes and poses from multi-view sensor observations. Conventional methods struggle to build complete shapes and estimate accurate poses due to partial occlusions and sensor noise. They require dense observations to cover all objects, which is challenging to achieve in robotics trajectories. Recent work introduces generative shape priors for object-level mapping from sparse views, but is limited to single-category objects. In this work, we propose a General Object-level Mapping system, GOM, which leverages a 3D diffusion model as shape prior with multi-category support and outputs Neural Radiance Fields (NeRFs) for both texture and geometry for all objects in a scene. GOM includes an effective formulation to guide a pre-trained diffusion model with extra nonlinear constraints from sensor measurements without finetuning. We also develop a probabilistic optimization formulation to fuse multi-view sensor observations and diffusion priors for joint 3D object pose and shape estimation. Our GOM system demonstrates superior multi-category mapping performance from sparse views, and achieves more accurate mapping results compared to state-of-the-art methods on the real-world benchmarks. We will release our code and model upon publication.
Toward General Object-level Mapping from Sparse Views with 3D Diffusion Priors
[ "Ziwei Liao", "Binbin Xu", "Steven L. Waslander" ]
Conference
Poster
2410.05514
[ "https://github.com/TRAILab/GeneralObjectMapping" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=r6ZhiVYriY
@inproceedings{ curtis2024trust, title={Trust the {PR}oC3S: Solving Long-Horizon Robotics Problems with {LLM}s and Constraint Satisfaction}, author={Aidan Curtis and Nishanth Kumar and Jing Cao and Tom{\'a}s Lozano-P{\'e}rez and Leslie Pack Kaelbling}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=r6ZhiVYriY} }
Recent developments in pretrained large language models (LLMs) applied to robotics have demonstrated their capacity for sequencing a set of discrete skills to achieve open-ended goals in simple robotic tasks. In this paper, we examine the topic of LLM planning for a set of *continuously parameterized* skills whose execution must avoid violations of a set of kinematic, geometric, and physical constraints. We prompt the LLM to output code for a function with open parameters, which, together with environmental constraints, can be viewed as a Continuous Constraint Satisfaction Problem (CCSP). This CCSP can be solved through sampling or optimization to find a skill sequence and continuous parameter settings that achieve the goal while avoiding constraint violations. Additionally, we consider cases where the LLM proposes unsatisfiable CCSPs, such as those that are kinematically infeasible, dynamically unstable, or lead to collisions, and re-prompt the LLM to form a new CCSP accordingly. Experiments across simulated and real-world domains demonstrate that our proposed strategy, PRoC3S, is capable of solving a wide range of complex manipulation tasks with realistic constraints much more efficiently and effectively than existing baselines.
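A sketch of the sampling side of this pipeline: assume the LLM has emitted a plan function with open continuous parameters, and a simple rejection sampler searches for parameter values that satisfy all constraints. The `place_blocks` plan and the constraints below are toy stand-ins, not the paper's domains.

```python
# Rejection-sampling solver for a toy Continuous Constraint Satisfaction Problem.
import random

def place_blocks(x1, x2):
    """Assumed LLM-written plan skeleton: place two blocks at x1 and x2 on a table."""
    return [("place", "block_a", x1), ("place", "block_b", x2)]

constraints = [
    lambda x1, x2: 0.0 <= x1 <= 1.0,        # stay on the table
    lambda x1, x2: 0.0 <= x2 <= 1.0,
    lambda x1, x2: abs(x1 - x2) >= 0.15,    # no collision between the blocks
]

def solve_ccsp(plan_fn, constraints, n_samples=10_000):
    for _ in range(n_samples):
        x1, x2 = random.uniform(0, 1), random.uniform(0, 1)
        if all(c(x1, x2) for c in constraints):
            return plan_fn(x1, x2)
    return None  # treated as unsatisfiable; the paper re-prompts the LLM in this case

plan = solve_ccsp(place_blocks, constraints)
print(plan if plan else "no feasible parameters found; re-prompt the LLM")
```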
Trust the PRoC3S: Solving Long-Horizon Robotics Problems with LLMs and Constraint Satisfaction
[ "Aidan Curtis", "Nishanth Kumar", "Jing Cao", "Tomás Lozano-Pérez", "Leslie Pack Kaelbling" ]
Conference
Poster
2406.05572
[ "https://github.com/Learning-and-Intelligent-Systems/proc3s" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://proc3s.csail.mit.edu
null
https://openreview.net/forum?id=qoebyrnF36
@inproceedings{ quan2024control, title={Control with Patterns: A D-learning Method}, author={Quan Quan and Kai-Yuan Cai and Chenyu Wang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=qoebyrnF36} }
Learning-based control policies are widely used in various tasks in the field of robotics and control. However, formal (Lyapunov) stability guarantees for learning-based controllers with nonlinear dynamical systems are challenging to obtain. We propose a novel control approach, namely Control with Patterns (CWP), to address the stability issue over data sets corresponding to nonlinear dynamical systems. For data sets of this kind, we introduce a new definition, namely exponential attraction on data sets, to describe nonlinear dynamical systems under consideration. The problem of exponential attraction on data sets is converted to a pattern classification one based on the data sets and parameterized Lyapunov functions. Furthermore, D-learning is proposed as a method for performing CWP without knowledge of the system dynamics. Finally, the effectiveness of CWP based on D-learning is demonstrated through simulations and real flight experiments. In these experiments, the position of the multicopter is stabilized using only real-time images as feedback, which can be considered as an Image-Based Visual Servoing (IBVS) problem.
Control with Patterns: A D-learning Method
[ "Quan Quan", "Kai-Yuan Cai", "Chenyu Wang" ]
Conference
Poster
2206.03809
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qUSa3F79am
@inproceedings{ myers2024policy, title={Policy Adaptation via Language Optimization: Decomposing Tasks for Few-Shot Imitation}, author={Vivek Myers and Chunyuan Zheng and Oier Mees and Kuan Fang and Sergey Levine}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=qUSa3F79am} }
Learned language-conditioned robot policies often struggle to effectively adapt to new real-world tasks even when pre-trained across a diverse set of instructions. We propose a novel approach for few-shot adaptation to unseen tasks that exploits the semantic understanding of task decomposition provided by vision-language models (VLMs). Our method, Policy Adaptation via Language Optimization (PALO), combines a handful of demonstrations of a task with proposed language decompositions sampled from a VLM to enable rapid nonparametric adaptation, avoiding the need for a larger fine-tuning dataset. We evaluate PALO on extensive real-world experiments consisting of challenging unseen, long-horizon robot manipulation tasks. We find that PALO is able to consistently complete long-horizon, multi-tier tasks in the real world, outperforming state-of-the-art pre-trained generalist policies, as well as methods that have access to the same demonstrations.
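A simplified sketch of the selection step implied by this abstract: sample several language decompositions of a task (hard-coded here instead of VLM-sampled), condition a frozen language-conditioned policy on each, and keep the decomposition whose predicted actions best match a handful of demonstrations. The policy, demos, and candidate decompositions are placeholders.

```python
# Nonparametric selection of a language decomposition against a few demos.
import numpy as np

rng = np.random.default_rng(0)

def policy(obs, instruction):
    # placeholder for a frozen, pre-trained language-conditioned policy
    return np.tanh(obs[:2] * (1.0 + 0.1 * len(instruction) % 3))

demos = [(rng.normal(size=(20, 4)), rng.normal(size=(20, 2)))]  # (observations, actions)

candidate_decompositions = [
    ["reach the drawer", "pull the handle", "place the cup inside"],
    ["move to the cup", "lift the cup", "put it in the drawer"],
]

def decomposition_loss(subtasks, demos):
    """How well the policy, conditioned on each subtask segment, matches demo actions."""
    loss = 0.0
    for obs_seq, act_seq in demos:
        segments = np.array_split(np.arange(len(obs_seq)), len(subtasks))
        for subtask, idx in zip(subtasks, segments):
            pred = np.stack([policy(obs_seq[t], subtask) for t in idx])
            loss += float(np.mean((pred - act_seq[idx]) ** 2))
    return loss

best = min(candidate_decompositions, key=lambda d: decomposition_loss(d, demos))
print("selected decomposition:", best)
```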
Policy Adaptation via Language Optimization: Decomposing Tasks for Few-Shot Imitation
[ "Vivek Myers", "Chunyuan Zheng", "Oier Mees", "Kuan Fang", "Sergey Levine" ]
Conference
Poster
2408.16228
[ "https://github.com/vivekmyers/palo-robot" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://palo-website.github.io
null
https://openreview.net/forum?id=pcPSGZFaCH
@inproceedings{ whitney2024modeling, title={Modeling the Real World with High-Density Visual Particle Dynamics}, author={William F Whitney and Jake Varley and Deepali Jain and Krzysztof Marcin Choromanski and Sumeet Singh and Vikas Sindhwani}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=pcPSGZFaCH} }
We present High-Density Visual Particle Dynamics (HD-VPD), a learned world model that can emulate the physical dynamics of real scenes by processing massive latent point clouds containing 100K+ particles. To enable efficiency at this scale, we introduce a novel family of Point Cloud Transformers (PCTs) called Interlacers, leveraging intertwined linear-attention Performer layers and graph-based neighbour attention layers. We demonstrate the capabilities of HD-VPD by modeling the dynamics of high degree-of-freedom bi-manual robots with two RGB-D cameras. Compared to the previous graph neural network approach, our Interlacer dynamics model is twice as fast with the same prediction quality, and can achieve higher quality using 4x as many particles. We illustrate how HD-VPD can evaluate motion plan quality with robotic box-pushing and can-grasping tasks. See videos and particle dynamics rendered by HD-VPD at https://sites.google.com/view/hd-vpd.
Modeling the Real World with High-Density Visual Particle Dynamics
[ "William F Whitney", "Jake Varley", "Deepali Jain", "Krzysztof Marcin Choromanski", "Sumeet Singh", "Vikas Sindhwani" ]
Conference
Poster
2406.19800
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/hd-vpd
null
https://openreview.net/forum?id=pPhTsonbXq
@inproceedings{ ji2024graspsplats, title={GraspSplats: Efficient Manipulation with 3D Feature Splatting}, author={Mazeyu Ji and Ri-Zhao Qiu and Xueyan Zou and Xiaolong Wang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=pPhTsonbXq} }
The ability for robots to perform efficient and zero-shot grasping of object parts is crucial for practical applications and is becoming prevalent with recent advances in Vision-Language Models (VLMs). To bridge the 2D-to-3D gap for representations to support such a capability, existing methods rely on neural fields (NeRFs) via differentiable rendering or point-based projection methods. However, we demonstrate that NeRFs are inappropriate for scene changes due to their implicitness, and that point-based methods are inaccurate for part localization without rendering-based optimization. To amend these issues, we propose GraspSplats. Using depth supervision and a novel reference feature computation method, GraspSplats can generate high-quality scene representations in under 60 seconds. We further validate the advantages of Gaussian-based representation by showing that the explicit and optimized geometry in GraspSplats is sufficient to natively support (1) real-time grasp sampling and (2) dynamic and articulated object manipulation with point trackers. With extensive experiments on a Franka robot, we demonstrate that GraspSplats significantly outperforms existing methods under diverse task settings. In particular, GraspSplats outperforms NeRF-based methods like F3RM and LERF-TOGO, and 2D detection methods. The code will be released.
GraspSplats: Efficient Manipulation with 3D Feature Splatting
[ "Mazeyu Ji", "Ri-Zhao Qiu", "Xueyan Zou", "Xiaolong Wang" ]
Conference
Poster
2409.02084
[ "https://github.com/jimazeyu/GraspSplats" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://graspsplats.github.io/
null
https://openreview.net/forum?id=p6Wq6TjjHH
@inproceedings{ mishra2024generative, title={Generative Factor Chaining: Coordinated Manipulation with Diffusion-based Factor Graph}, author={Utkarsh Aashu Mishra and Yongxin Chen and Danfei Xu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=p6Wq6TjjHH} }
Learning to plan for multi-step, multi-manipulator tasks is notoriously difficult because of the large search space and the complex constraint satisfaction problems. We present Generative Factor Chaining (GFC), a composable generative model for planning. GFC represents a planning problem as a spatial-temporal factor graph, where nodes represent objects and robots in the scene, spatial factors capture the distributions of valid relationships among nodes, and temporal factors represent the distributions of skill transitions. Each factor is implemented as a modular diffusion model; these factors are composed during inference to generate feasible long-horizon plans through bi-directional message passing. We show that GFC can solve complex bimanual manipulation tasks and exhibits strong generalization to unseen planning tasks with novel combinations of objects and constraints. More details can be found at: https://sites.google.com/view/generative-factor-chaining
Generative Factor Chaining: Coordinated Manipulation with Diffusion-based Factor Graph
[ "Utkarsh Aashu Mishra", "Yongxin Chen", "Danfei Xu" ]
Conference
Poster
2409.16275
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://generative-fc.github.io/
null
https://openreview.net/forum?id=ovjxugn9Q2
@inproceedings{ kasaei2024softmanisim, title={SoftManiSim: A Fast Simulation Framework for Multi-Segment Continuum Manipulators Tailored for Robot Learning}, author={Mohammadreza Kasaei and Hamidreza Kasaei and Mohsen Khadem}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=ovjxugn9Q2} }
This paper introduces SoftManiSim, a novel simulation framework for multi-segment continuum manipulators. Existing continuum robot simulators often rely on simplifying assumptions, such as constant curvature bending or ignoring contact forces, to meet real-time simulation and training demands. To bridge this gap, we propose a robust and rapid mathematical model for continuum robots at the core of SoftManiSim, ensuring precise and adaptable simulations. The framework can integrate with various rigid-body robots, increasing its utility across different robotic platforms. SoftManiSim supports parallel operations for simultaneous simulations of multiple robots and generates synthetic data essential for training deep reinforcement learning models. This capability enhances the development and optimization of control strategies in dynamic environments. Extensive simulations validate the framework's effectiveness, demonstrating its capabilities in handling complex robotic interactions and tasks. We also present real robot validation to showcase the simulator's practical applicability and accuracy in real-world settings. To our knowledge, SoftManiSim is the first open-source real-time simulator capable of modeling continuum robot behavior under dynamic point/distributed loading. It enables rapid deployment in reinforcement learning and machine learning applications. This simulation framework can be downloaded from https://github.com/MohammadKasaei/SoftManiSim.
SoftManiSim: A Fast Simulation Framework for Multi-Segment Continuum Manipulators Tailored for Robot Learning
[ "Mohammadreza Kasaei", "Hamidreza Kasaei", "Mohsen Khadem" ]
Conference
Poster
[ "https://github.com/MohammadKasaei/SoftManiSim" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=oSU7M7MK6B
@inproceedings{ ferrandis2024learning, title={Learning Visuotactile Estimation and Control for Non-prehensile Manipulation under Occlusions}, author={Juan Del Aguila Ferrandis and Joao Moura and Sethu Vijayakumar}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=oSU7M7MK6B} }
Manipulation without grasping, known as non-prehensile manipulation, is essential for dexterous robots in contact-rich environments, but presents many challenges relating to underactuation, hybrid dynamics, and frictional uncertainty. Additionally, object occlusion becomes a critical problem when contact is uncertain and the motion of the object evolves independently from the robot, a setting that previous literature fails to address. We present a method for learning visuotactile state estimators and uncertainty-aware control policies for non-prehensile manipulation under occlusions, by leveraging diverse interaction data from privileged policies trained in simulation. We formulate the estimator within a Bayesian deep learning framework, to model its uncertainty, and then train uncertainty-aware control policies by incorporating the pre-learned estimator into the reinforcement learning (RL) loop, both of which lead to significantly improved estimator and policy performance. Therefore, unlike prior non-prehensile research that relies on complex external perception set-ups, our method successfully handles occlusions after sim-to-real transfer to robotic hardware with a simple onboard camera.
Learning Visuotactile Estimation and Control for Non-prehensile Manipulation under Occlusions
[ "Juan Del Aguila Ferrandis", "Joao Moura", "Sethu Vijayakumar" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=oL1WEZQal8
@inproceedings{ he2024omniho, title={OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning}, author={Tairan He and Zhengyi Luo and Xialin He and Wenli Xiao and Chong Zhang and Weinan Zhang and Kris M. Kitani and Changliu Liu and Guanya Shi}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=oL1WEZQal8} }
We present OmniH2O (Omni Human-to-Humanoid), a learning-based system for whole-body humanoid teleoperation and autonomy. Using kinematic pose as a universal control interface, OmniH2O enables various ways for a human to control a full-sized humanoid with dexterous hands, including using real-time teleoperation through VR headset, verbal instruction, and RGB camera. OmniH2O also enables full autonomy by learning from teleoperated demonstrations or integrating with frontier models such as GPT-4. OmniH2O demonstrates versatility and dexterity in various real-world whole-body tasks through teleoperation or autonomy, such as playing multiple sports, moving and manipulating objects, and interacting with humans. We develop an RL-based sim-to-real pipeline, which involves large-scale retargeting and augmentation of human motion datasets, learning a real-world deployable policy with sparse sensor input by imitating a privileged teacher policy, and reward designs to enhance robustness and stability. We release the first humanoid whole-body control dataset, OmniH2O-6, containing six everyday tasks, and demonstrate humanoid whole-body skill learning from teleoperated datasets. Videos at the anonymous website [https://anonymous-omni-h2o.github.io/](https://anonymous-omni-h2o.github.io/)
OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning
[ "Tairan He", "Zhengyi Luo", "Xialin He", "Wenli Xiao", "Chong Zhang", "Weinan Zhang", "Kris M. Kitani", "Changliu Liu", "Guanya Shi" ]
Conference
Poster
2406.08858
[ "https://github.com/LeCAR-Lab/human2humanoid" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://omni.human2humanoid.com/
null
https://openreview.net/forum?id=nmEt0ci8hi
@inproceedings{ yuan2024general, title={General Flow as Foundation Affordance for Scalable Robot Learning}, author={Chengbo Yuan and Chuan Wen and Tong Zhang and Yang Gao}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=nmEt0ci8hi} }
We address the challenge of acquiring real-world manipulation skills with a scalable framework. We hold the belief that identifying an appropriate prediction target capable of leveraging large-scale datasets is crucial for achieving efficient and universal learning. Therefore, we propose to utilize 3D flow, which represents the future trajectories of 3D points on objects of interest, as an ideal prediction target. To exploit scalable data resources, we turn our attention to human videos. We develop, for the first time, a language-conditioned 3D flow prediction model directly from large-scale RGBD human video datasets. Our predicted flow offers actionable guidance, thus facilitating zero-shot skill transfer in real-world scenarios. We deploy our method with a policy based on closed-loop flow prediction. Remarkably, without any in-domain finetuning, our method achieves an impressive 81\% success rate in zero-shot human-to-robot skill transfer, covering 18 tasks in 6 scenes. Our framework features the following benefits: (1) scalability: leveraging cross-embodiment data resources; (2) wide application: multiple object categories, including rigid, articulated, and soft bodies; (3) stable skill transfer: providing actionable guidance with a small inference domain-gap.
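As a sketch of one way predicted 3D flow can provide actionable guidance (this is a standard construction, not necessarily the paper's exact policy), one can fit a rigid transform from the current 3D points of the manipulated object to their predicted future positions using the Kabsch/SVD method, then command the end-effector with that transform. The random points below stand in for tracked points and a model's flow prediction.

```python
# Fit a rigid transform from current points to flow-predicted future points.
import numpy as np

def rigid_transform_from_flow(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t such that dst ~= src @ R.T + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # fix an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

points_now = np.random.randn(50, 3)                # tracked object points
true_t = np.array([0.02, 0.0, 0.01])               # pretend flow: pure translation
points_future = points_now + true_t                # "predicted" flow endpoints

R, t = rigid_transform_from_flow(points_now, points_future)
print("commanded end-effector translation:", t.round(3))
```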
General Flow as Foundation Affordance for Scalable Robot Learning
[ "Chengbo Yuan", "Chuan Wen", "Tong Zhang", "Yang Gao" ]
Conference
Poster
2401.11439
[ "https://github.com/michaelyuancb/general_flow" ]
https://huggingface.co/papers/2409.01083
1
18
2
2
[]
[]
[]
[]
[]
[]
1
https://general-flow.github.io/
null
https://openreview.net/forum?id=nVJm2RdPDu
@inproceedings{ huang2024diffuseloco, title={DiffuseLoco: Real-Time Legged Locomotion Control with Diffusion from Offline Datasets}, author={Xiaoyu Huang and Yufeng Chi and Ruofeng Wang and Zhongyu Li and Xue Bin Peng and Sophia Shao and Borivoje Nikolic and Koushil Sreenath}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=nVJm2RdPDu} }
Offline learning at scale has led to breakthroughs in computer vision, natural language processing, and robotic manipulation domains. However, scaling up learning for legged robot locomotion, especially with multiple skills in a single policy, presents significant challenges for prior online reinforcement learning (RL) methods. To address this challenge, we propose DiffuseLoco, a novel, scalable framework that leverages diffusion models to directly learn from offline multimodal datasets with a diverse set of locomotion skills. With design choices tailored for real-time control in dynamical systems, including receding horizon control and delayed inputs, DiffuseLoco is capable of reproducing multimodality in performing various locomotion skills, transfers zero-shot to real quadruped robots, and can be deployed on edge compute devices. Through extensive real-world benchmarking, DiffuseLoco exhibits better stability and velocity tracking performance compared to prior RL and non-diffusion-based behavior cloning baselines. This work opens new possibilities for scaling up learning-based legged locomotion control through the scaling of large, expressive models and diverse offline datasets.
DiffuseLoco: Real-Time Legged Locomotion Control with Diffusion from Offline Datasets
[ "Xiaoyu Huang", "Yufeng Chi", "Ruofeng Wang", "Zhongyu Li", "Xue Bin Peng", "Sophia Shao", "Borivoje Nikolic", "Koushil Sreenath" ]
Conference
Poster
2404.19264
[ "https://github.com/HybridRobotics/DiffuseLoco" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://diffuselo.co/
null
https://openreview.net/forum?id=nQslM6f7dW
@inproceedings{ wang2024apricot, title={{APRICOT}: Active Preference Learning and Constraint-Aware Task Planning with {LLM}s}, author={Huaxiaoyue Wang and Nathaniel Chin and Gonzalo Gonzalez-Pumariega and Xiangwan Sun and Neha Sunkara and Maximus Adrian Pace and Jeannette Bohg and Sanjiban Choudhury}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=nQslM6f7dW} }
Home robots performing personalized tasks must adeptly balance user preferences with environmental affordances. We focus on organization tasks within constrained spaces, such as arranging items into a refrigerator, where preferences for placement collide with physical limitations. The robot must infer user preferences based on a small set of demonstrations, which is easier for users to provide than extensively defining all their requirements. While recent works use Large Language Models (LLMs) to learn preferences from user demonstrations, they encounter two fundamental challenges. First, there is inherent ambiguity in interpreting user actions, as multiple preferences can often explain a single observed behavior. Second, not all user preferences are practically feasible due to geometric constraints in the environment. To address these challenges, we introduce APRICOT, a novel approach that merges LLM-based Bayesian active preference learning with constraint-aware task planning. APRICOT refines its generated preferences by actively querying the user and dynamically adapts its plan to respect environmental constraints. We evaluate APRICOT on a dataset of diverse organization tasks and demonstrate its effectiveness in real-world scenarios, showing significant improvements in both preference satisfaction and plan feasibility.
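A sketch of the active-querying loop this abstract suggests: keep a posterior over LLM-generated candidate preferences, ask the user a clarifying question only while the posterior is still ambiguous, and update with Bayes' rule. The candidate preferences, likelihood model, and the simulated user below are illustrative assumptions.

```python
# Bayesian active preference learning over a small set of candidate preferences.
import numpy as np

candidates = [
    "dairy on the top shelf, vegetables in the drawer",
    "tall items in the back, dairy in the door",
    "group items by expiry date",
]
posterior = np.ones(len(candidates)) / len(candidates)

def answer_likelihood(candidate_idx, answer_yes, true_idx=0):
    # toy likelihood: answers are consistent with the true preference 90% of the time
    p_yes = 0.9 if candidate_idx == true_idx else 0.3
    return p_yes if answer_yes else 1.0 - p_yes

def simulated_user(question):      # stands in for a real user reply
    return True                     # always consistent with candidates[0]

queries = ["Should dairy go on the top shelf?", "Do you sort by expiry date?"]
for q in queries:
    if posterior.max() > 0.8:       # confident enough: stop asking
        break
    ans = simulated_user(q)
    likelihood = np.array([answer_likelihood(i, ans) for i in range(len(candidates))])
    posterior = likelihood * posterior
    posterior /= posterior.sum()    # Bayes update

print("inferred preference:", candidates[int(posterior.argmax())], posterior.round(2))
```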
APRICOT: Active Preference Learning and Constraint-Aware Task Planning with LLMs
[ "Huaxiaoyue Wang", "Nathaniel Chin", "Gonzalo Gonzalez-Pumariega", "Xiangwan Sun", "Neha Sunkara", "Maximus Adrian Pace", "Jeannette Bohg", "Sanjiban Choudhury" ]
Conference
Poster
2410.19656
[ "https://github.com/portal-cornell/apricot" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://portal-cornell.github.io/apricot/
null
https://openreview.net/forum?id=ma7McOiCZY
@inproceedings{ wang2024hypermotion, title={{HYPER}motion: Learning Hybrid Behavior Planning for Autonomous Loco-manipulation}, author={Jin Wang and Rui Dai and Weijie Wang and Luca Rossini and Francesco Ruscelli and Nikos Tsagarakis}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=ma7McOiCZY} }
Enabling robots to autonomously perform hybrid motions in diverse environments can be beneficial for long-horizon tasks such as material handling, household chores, and work assistance. This requires extensive exploitation of intrinsic motion capabilities, extraction of affordances from rich environmental information, and planning of physical interaction behaviors. Although recent progress has demonstrated impressive humanoid whole-body control abilities, existing approaches struggle to achieve versatility and adaptability for new tasks. In this work, we propose HYPERmotion, a framework that learns, selects and plans behaviors based on tasks in different scenarios. We combine reinforcement learning with whole-body optimization to generate motion for 38 actuated joints and create a motion library to store the learned skills. We apply the planning and reasoning features of large language models (LLMs) to complex loco-manipulation tasks, constructing a hierarchical task graph that comprises a series of primitive behaviors to bridge lower-level execution with higher-level planning. We leverage the interaction of distilled spatial geometry and 2D observations with a visual language model (VLM) to ground knowledge into a robotic morphology selector that chooses appropriate actions for single- or dual-arm, legged, or wheeled locomotion. Experiments in simulation and the real world show that learned motions can efficiently adapt to new tasks, demonstrating high autonomy from free-text commands in unstructured scenes. Videos and website: hy-motion.github.io/
HYPERmotion: Learning Hybrid Behavior Planning for Autonomous Loco-manipulation
[ "Jin Wang", "Rui Dai", "Weijie Wang", "Luca Rossini", "Francesco Ruscelli", "Nikos Tsagarakis" ]
Conference
Poster
2406.14655
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://hy-motion.github.io/
null
https://openreview.net/forum?id=lyhS75loxe
@inproceedings{ huang2024avlm, title={A3{VLM}: Actionable Articulation-Aware Vision Language Model}, author={Siyuan Huang and Haonan Chang and Yuhan Liu and Yimeng Zhu and Hao Dong and Peng Gao and Abdeslam Boularias and Hongsheng Li}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=lyhS75loxe} }
Vision Language Models (VLMs) for robotics have received significant attention in recent years. As a VLM can understand robot observations and perform complex visual reasoning, it is regarded as a potential universal solution for general robotics challenges such as manipulation and navigation. However, previous robotics VLMs such as RT-1, RT-2, and ManipLLM have focused on directly learning robot actions. Such approaches require collecting a significant amount of robot interaction data, which is extremely costly in the real world. Thus, we propose A3VLM, an object-centric, actionable, articulation-aware vision language model. A3VLM focuses on the articulation structure and action affordances of objects. Its representation is robot-agnostic and can be translated into robot actions using simple action primitives. Extensive experiments in both simulation benchmarks and real-world settings demonstrate the effectiveness and stability of A3VLM.
A3VLM: Actionable Articulation-Aware Vision Language Model
[ "Siyuan Huang", "Haonan Chang", "Yuhan Liu", "Yimeng Zhu", "Hao Dong", "Abdeslam Boularias", "Peng Gao", "Hongsheng Li" ]
Conference
Poster
2406.07549
[ "https://github.com/changhaonan/A3VLM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=lt0Yf8Wh5O
@inproceedings{ liu2024differentiable, title={Differentiable Robot Rendering}, author={Ruoshi Liu and Alper Canberk and Shuran Song and Carl Vondrick}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=lt0Yf8Wh5O} }
Vision foundation models trained on massive amounts of visual data have shown unprecedented reasoning and planning skills in open-world settings. A key challenge in applying them to robotic tasks is the modality gap between visual data and action data. We introduce differentiable robot rendering, a method allowing the visual appearance of a robot body to be directly differentiable with respect to its control parameters. Our model integrates a kinematics-aware deformable model and Gaussian Splatting, and is compatible with any robot form factor and degrees of freedom. We demonstrate its capability and usage in applications including reconstruction of robot poses from images and controlling robots through vision language models. Quantitative and qualitative results show that our differentiable rendering model provides effective gradients for robotic control directly from pixels, setting the foundation for future applications of vision foundation models in robotics.
Differentiable Robot Rendering
[ "Ruoshi Liu", "Alper Canberk", "Shuran Song", "Carl Vondrick" ]
Conference
Oral
2410.13851
[ "https://github.com/cvlab-columbia/drrobot" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://drrobot.cs.columbia.edu/
null
https://openreview.net/forum?id=lpjPft4RQT
@inproceedings{ jiang2024transic, title={{TRANSIC}: Sim-to-Real Policy Transfer by Learning from Online Correction}, author={Yunfan Jiang and Chen Wang and Ruohan Zhang and Jiajun Wu and Li Fei-Fei}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=lpjPft4RQT} }
Learning in simulation and transferring the learned policy to the real world has the potential to enable generalist robots. The key challenge of this approach is to address simulation-to-reality (sim-to-real) gaps. Previous methods often require domain-specific knowledge *a priori*. We argue that a straightforward way to obtain such knowledge is by asking humans to observe and assist robot policy execution in the real world. The robots can then learn from humans to close various sim-to-real gaps. We propose TRANSIC, a data-driven approach to enable successful sim-to-real transfer based on a human-in-the-loop framework. TRANSIC allows humans to augment simulation policies to overcome various unmodeled sim-to-real gaps holistically through intervention and online correction. Residual policies can be learned from human corrections and integrated with simulation policies for autonomous execution. We show that our approach can achieve successful sim-to-real transfer in complex and contact-rich manipulation tasks such as furniture assembly. Through synergistic integration of policies learned in simulation and from humans, TRANSIC is effective as a holistic approach to addressing various, often coexisting sim-to-real gaps. It displays attractive properties such as scaling with human effort. Videos and code are available at https://transic-robot.github.io/.
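A sketch of the execution-time composition this abstract describes: a base policy trained in simulation plus a residual policy trained from human online corrections, combined into one deployed action. Both networks below are stand-ins; only the `a = a_base + a_residual` structure is the point.

```python
# Combine a sim-trained base policy with a learned residual correction.
import numpy as np

def base_policy(obs):                      # stand-in for the simulation-trained policy
    return np.clip(obs[:3] * 0.5, -1, 1)

def residual_policy(obs, base_action):     # stand-in for the correction model
    features = np.concatenate([obs, base_action])
    return 0.05 * np.tanh(features[:3])    # small learned correction

def deployed_policy(obs):
    a_base = base_policy(obs)
    a_res = residual_policy(obs, a_base)
    return np.clip(a_base + a_res, -1, 1)  # residual refines, base policy leads

obs = np.random.randn(6)
print(deployed_policy(obs))
```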
TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction
[ "Yunfan Jiang", "Chen Wang", "Ruohan Zhang", "Jiajun Wu", "Li Fei-Fei" ]
Conference
Poster
2405.10315
[ "https://github.com/transic-robot/transic" ]
https://huggingface.co/papers/2405.10315
3
10
0
5
[ "transic-robot/models" ]
[ "transic-robot/data" ]
[]
[ "transic-robot/models" ]
[ "transic-robot/data" ]
[]
1
https://transic-robot.github.io/
null
https://openreview.net/forum?id=lKGRPJFPCM
@inproceedings{ lee2024interact, title={Inter{ACT}: Inter-dependency Aware Action Chunking with Hierarchical Attention Transformers for Bimanual Manipulation}, author={Andrew Choong-Won Lee and Ian Chuang and Ling-Yuan Chen and Iman Soltani}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=lKGRPJFPCM} }
We present InterACT: Inter-dependency aware Action Chunking with Hierarchical Attention Transformers, a novel imitation learning framework for bimanual manipulation that integrates hierarchical attention to capture inter-dependencies between dual-arm joint states and visual inputs. InterACT consists of a Hierarchical Attention Encoder and a Multi-arm Decoder, both designed to enhance information aggregation and coordination. The encoder processes multi-modal inputs through segment-wise and cross-segment attention mechanisms, while the decoder leverages synchronization blocks to refine individual action predictions, providing the counterpart's prediction as context. Our experiments on a variety of simulated and real-world bimanual manipulation tasks demonstrate that InterACT significantly outperforms existing methods. Detailed ablation studies validate the contributions of key components of our work, including the impact of CLS tokens, cross-segment encoders, and synchronization blocks.
InterACT: Inter-dependency Aware Action Chunking with Hierarchical Attention Transformers for Bimanual Manipulation
[ "Andrew Choong-Won Lee", "Ian Chuang", "Ling-Yuan Chen", "Iman Soltani" ]
Conference
Poster
2409.07914
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://soltanilara.github.io/interact/
null
https://openreview.net/forum?id=kEZXeaMrkD
@inproceedings{ huang2024goalreaching, title={Goal-Reaching Policy Learning from Non-Expert Observations via Effective Subgoal Guidance}, author={RenMing Huang and Shaochong Liu and Yunqiang Pei and Peng Wang and Guoqing Wang and Yang Yang and Heng Tao Shen}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=kEZXeaMrkD} }
In this work, we address the challenging problem of long-horizon goal-reaching policy learning from non-expert, action-free observation data. Unlike fully labeled expert data, our data is more accessible and avoids the costly process of action labeling. Additionally, compared to online learning, which often involves aimless exploration, our data provides useful guidance for more efficient exploration. To achieve our goal, we propose a novel subgoal guidance learning strategy. The motivation behind this strategy is that long-horizon goals offer limited guidance for efficient exploration and accurate state transition. We develop a diffusion strategy-based high-level policy to generate reasonable subgoals as waypoints, preferring states that more easily lead to the final goal. Additionally, we learn state-goal value functions to encourage efficient subgoal reaching. These two components naturally integrate into the off-policy actor-critic framework, enabling efficient goal attainment through informative exploration. We evaluate our method on complex robotic navigation and manipulation tasks, demonstrating a significant performance advantage over existing methods. Our ablation study further shows that our method is robust to observation data with various corruptions.
Goal-Reaching Policy Learning from Non-Expert Observations via Effective Subgoal Guidance
[ "RenMing Huang", "Shaochong Liu", "Yunqiang Pei", "Peng Wang", "Guoqing Wang", "Yang Yang", "Heng Tao Shen" ]
Conference
Poster
2409.03996
[ "https://github.com/RenMing-Huang/EGR-PO" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=k4Nnxqcwt8
@inproceedings{ peng2024qslam, title={Q-{SLAM}: Quadric Representations for Monocular {SLAM}}, author={Chensheng Peng and Chenfeng Xu and Yue Wang and Mingyu Ding and Heng Yang and Masayoshi Tomizuka and Kurt Keutzer and Marco Pavone and Wei Zhan}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=k4Nnxqcwt8} }
In this paper, we reimagine volumetric representations through the lens of quadrics. We posit that rigid scene components can be effectively decomposed into quadric surfaces. Leveraging this assumption, we reshape volumetric representations built from millions of cubes into several quadric planes, which results in more accurate and efficient modeling of 3D scenes in SLAM contexts. First, we use the quadric assumption to rectify noisy depth estimations from RGB inputs. This step significantly improves depth estimation accuracy, and allows us to efficiently sample ray points around quadric planes instead of the entire volume space as in previous NeRF-SLAM systems. Second, we introduce a novel quadric-decomposed transformer to aggregate information across quadrics. The quadric semantics are not only explicitly used for depth correction and scene decomposition, but also serve as an implicit supervision signal for the mapping network. Through rigorous experimental evaluation, our method exhibits superior performance over other approaches relying on estimated depth, and achieves comparable accuracy to methods utilizing ground-truth depth on both synthetic and real-world datasets.
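A toy illustration of the quadric assumption (not the authors' pipeline): fit a quadric surface z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to noisy depth points from one rigid scene component, and use the fit to rectify the noise. The synthetic surface and noise level below are assumptions.

```python
# Least-squares quadric fit used as a depth-rectification toy example.
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z_true = 0.3 * x**2 + 0.1 * y**2 - 0.2 * x * y + 0.05 * x + 1.0
z_noisy = z_true + rng.normal(0, 0.05, size=z_true.shape)     # noisy monocular depth

A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
coeffs, *_ = np.linalg.lstsq(A, z_noisy, rcond=None)          # fit the quadric
z_rectified = A @ coeffs                                      # project depth onto the fit

print("noisy depth RMSE    :", float(np.sqrt(np.mean((z_noisy - z_true) ** 2))))
print("rectified depth RMSE:", float(np.sqrt(np.mean((z_rectified - z_true) ** 2))))
```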
Q-SLAM: Quadric Representations for Monocular SLAM
[ "Chensheng Peng", "Chenfeng Xu", "Yue Wang", "Mingyu Ding", "Heng Yang", "Masayoshi Tomizuka", "Kurt Keutzer", "Marco Pavone", "Wei Zhan" ]
Conference
Poster
2403.08125
[ "https://github.com/PholyPeng/Q-SLAM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=k0ogr4dnhG
@inproceedings{ jia2024cluttergen, title={ClutterGen: A Cluttered Scene Generator for Robot Learning}, author={Yinsen Jia and Boyuan Chen}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=k0ogr4dnhG} }
We introduce ClutterGen, a physically compliant simulation scene generator capable of producing highly diverse, cluttered, and stable scenes for robot learning. Generating such scenes is challenging as each object must adhere to physical laws like gravity and collision. As the number of objects increases, finding valid poses becomes more difficult, necessitating significant human engineering effort, which limits the diversity of the scenes. To overcome these challenges, we propose a reinforcement learning method that can be trained with physics-based reward signals provided by the simulator. Our experiments demonstrate that ClutterGen can generate cluttered object layouts with up to ten objects on confined table surfaces. Additionally, our policy design explicitly encourages the diversity of the generated scenes for open-ended generation. Our real-world robot results show that ClutterGen can be directly used for clutter rearrangement and stable placement policy training.
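A sketch of the physics-compliance check that a reward signal like ClutterGen's can be built on, assuming PyBullet as the simulator: propose a placement, step the simulation, and treat small post-settling displacement as stable. The paper replaces the random proposal below with a learned RL policy; the object asset and thresholds are assumptions.

```python
# Rejection-style stability check for a proposed object placement in PyBullet.
import numpy as np
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")                      # flat support surface

def placement_is_stable(xy, drop_height=0.1, settle_steps=240, tol=0.02):
    body = p.loadURDF("cube_small.urdf", basePosition=[xy[0], xy[1], drop_height])
    start, _ = p.getBasePositionAndOrientation(body)
    for _ in range(settle_steps):             # let physics settle (~1 s at 240 Hz)
        p.stepSimulation()
    end, _ = p.getBasePositionAndOrientation(body)
    moved = np.linalg.norm(np.array(end[:2]) - np.array(start[:2]))
    p.removeBody(body)
    return moved < tol                        # barely slid in the plane: accept

proposal = np.random.uniform(-0.2, 0.2, size=2)   # random proposal (an RL policy in the paper)
print("stable placement" if placement_is_stable(proposal) else "rejected placement")
```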
ClutterGen: A Cluttered Scene Generator for Robot Learning
[ "Yinsen Jia", "Boyuan Chen" ]
Conference
Poster
2407.05425
[ "https://github.com/generalroboticslab/ClutterGen" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
http://generalroboticslab.com/ClutterGen
null
https://openreview.net/forum?id=jnubz7wB2w
@inproceedings{ hu2024verification, title={Verification of Neural Control Barrier Functions with Symbolic Derivative Bounds Propagation}, author={Hanjiang Hu and Yujie Yang and Tianhao Wei and Changliu Liu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=jnubz7wB2w} }
Control barrier functions (CBFs) are important in safety-critical systems and robot control applications. Neural networks have been used to parameterize and synthesize CBFs with bounded control input for complex systems. However, it is still challenging to verify pre-trained neural network CBFs (neural CBFs) in an efficient symbolic manner. To this end, we propose a new efficient verification framework for ReLU-based neural CBFs through symbolic derivative bound propagation by combining the linearly bounded nonlinear dynamic system and the gradient bounds of neural CBFs. Specifically, with Heaviside step function form for derivatives of activation functions, we show that the symbolic bounds can be propagated through the inner product of neural CBF Jacobian and nonlinear system dynamics. Through extensive experiments on different robot dynamics, our results outperform the interval arithmetic-based baselines in verified rate and verification time along the CBF boundary, validating the effectiveness and efficiency of the proposed method with different model complexity. The code can be found at https://github.com/intelligent-control-lab/verify-neural-CBF.
Verification of Neural Control Barrier Functions with Symbolic Derivative Bounds Propagation
[ "Hanjiang Hu", "Yujie Yang", "Tianhao Wei", "Changliu Liu" ]
Conference
Poster
[ "https://github.com/intelligent-control-lab/verify-neural-CBF" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=jart4nhCQr
@inproceedings{ yuan2024learning, title={Learning to Manipulate Anywhere: A Visual Generalizable Framework For Reinforcement Learning}, author={Zhecheng Yuan and Tianming Wei and Shuiqi Cheng and Gu Zhang and Yuanpei Chen and Huazhe Xu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=jart4nhCQr} }
Can we endow visuomotor robots with generalization capabilities to operate in diverse open-world scenarios? In this paper, we propose Maniwhere, a generalizable framework tailored for visual reinforcement learning, enabling the trained robot policies to generalize across a combination of multiple visual disturbance types. Specifically, we introduce a multi-view representation learning approach fused with a Spatial Transformer Network (STN) module to capture shared semantic information and correspondences among different viewpoints. In addition, we employ a curriculum-based randomization and augmentation approach to stabilize the RL training process and strengthen the visual generalization ability. To exhibit the effectiveness of Maniwhere, we meticulously design **8** tasks encompassing articulated objects, bi-manual, and dexterous hand manipulation tasks, demonstrating Maniwhere's strong visual generalization and sim2real transfer abilities across **3** hardware platforms. Our experiments show that Maniwhere significantly outperforms existing state-of-the-art methods. Videos are provided at https://maniwhere.github.io.
Learning to Manipulate Anywhere: A Visual Generalizable Framework For Reinforcement Learning
[ "Zhecheng Yuan", "Tianming Wei", "Shuiqi Cheng", "Gu Zhang", "Yuanpei Chen", "Huazhe Xu" ]
Conference
Poster
2407.15815
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://gemcollector.github.io/maniwhere/
null
https://openreview.net/forum?id=jPkOFAiOzf
@inproceedings{ chen2024regionaware, title={Region-aware Grasp Framework with Normalized Grasp Space for Efficient 6-DoF Grasping}, author={Siang Chen and Pengwei Xie and Wei Tang and Dingchang Hu and Yixiang Dai and Guijin Wang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=jPkOFAiOzf} }
A series of region-based methods succeed in extracting regional features and enhancing grasp detection quality. However, faced with a cluttered scene with potential collision, the definition of the grasp-relevant region stays inconsistent. In this paper, we propose Normalized Grasp Space (NGS) from a novel region-aware viewpoint, unifying the grasp representation within a normalized regional space and benefiting the generalizability of methods. Leveraging the NGS, we find that CNNs are underestimated for 3D feature extraction and 6-DoF grasp detection in cluttered scenes and build a highly efficient Region-aware Normalized Grasp Network (RNGNet). Experiments on the public benchmark show that our method achieves significant >20% performance gains while attaining a real-time inference speed of approximately 50 FPS. Real-world cluttered scene clearance experiments underscore the effectiveness of our method. Further, human-to-robot handover and dynamic object grasping experiments demonstrate the potential of our proposed method for closed-loop grasping in dynamic scenarios.
Region-aware Grasp Framework with Normalized Grasp Space for Efficient 6-DoF Grasping
[ "Siang Chen", "Pengwei Xie", "Wei Tang", "Dingchang Hu", "Yixiang Dai", "Guijin Wang" ]
Conference
Poster
2406.01767
[ "https://github.com/THU-VCLab/RegionNormalizedGrasp" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://github.com/THU-VCLab/RegionNormalizedGrasp
null
https://openreview.net/forum?id=itKJ5uu1gW
@inproceedings{ zhang2024dynamic, title={Dynamic 3D Gaussian Tracking for Graph-Based Neural Dynamics Modeling}, author={Mingtong Zhang and Kaifeng Zhang and Yunzhu Li}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=itKJ5uu1gW} }
Videos of robots interacting with objects encode rich information about the objects' dynamics. However, existing video prediction approaches typically do not explicitly account for the 3D information from videos, such as robot actions and objects' 3D states, limiting their use in real-world robotic applications. In this work, we introduce a framework to learn object dynamics directly from multi-view RGB videos by explicitly considering the robot's action trajectories and their effects on scene dynamics. We utilize the 3D Gaussian representation of 3D Gaussian Splatting (3DGS) to train a particle-based dynamics model using Graph Neural Networks. This model operates on sparse control particles downsampled from the densely tracked 3D Gaussian reconstructions. By learning the neural dynamics model on offline robot interaction data, our method can predict object motions under varying initial configurations and unseen robot actions. The 3D transformations of Gaussians can be interpolated from the motions of control particles, enabling the rendering of predicted future object states and achieving action-conditioned video prediction. The dynamics model can also be applied to model-based planning frameworks for object manipulation tasks. We conduct experiments on various kinds of deformable materials, including ropes, clothes, and stuffed animals, demonstrating our framework's ability to model complex shapes and dynamics. Our project page is available at \url{https://gaussian-gbnd.github.io/}.
Dynamic 3D Gaussian Tracking for Graph-Based Neural Dynamics Modeling
[ "Mingtong Zhang", "Kaifeng Zhang", "Yunzhu Li" ]
Conference
Poster
2410.18912
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://gs-dynamics.github.io
null
https://openreview.net/forum?id=iZF0FRPgfq
@inproceedings{ wang2024i, title={I Can Tell What I am Doing: Toward Real-World Natural Language Grounding of Robot Experiences}, author={Zihan Wang and Brian Liang and Varad Dhat and Nick Walker and Zander Brumbaugh and Ranjay Krishna and Maya Cakmak}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=iZF0FRPgfq} }
Understanding robot behaviors and experiences through natural language is crucial for developing intelligent and transparent robotic systems. Recent advancement in large language models (LLMs) makes it possible to translate complex, multi-modal robotic experiences into coherent, human-readable narratives. However, grounding real-world robot experiences into natural language is challenging due to many reasons, such as multi-modal nature of data, differing sample rates, and data volume. We introduce RONAR, an LLM-based system that generates natural language narrations from robot experiences, aiding in behavior announcement, failure analysis, and human interaction to recover failure. Evaluated across various scenarios, RONAR outperforms state-of-the-art methods and improves failure recovery efficiency. Our contributions include a multi-modal framework for robot experience narration, a comprehensive real-robot dataset, and empirical evidence of RONAR's effectiveness in enhancing user experience in system transparency and failure analysis.
I Can Tell What I am Doing: Toward Real-World Natural Language Grounding of Robot Experiences
[ "Zihan Wang", "Brian Liang", "Varad Dhat", "Zander Brumbaugh", "Nick Walker", "Ranjay Krishna", "Maya Cakmak" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/real-world-robot-narration/home
null
https://openreview.net/forum?id=hV97HJm7Ag
@inproceedings{ qian2024taskoriented, title={Task-Oriented Hierarchical Object Decomposition for Visuomotor Control}, author={Jianing Qian and Yunshuang Li and Bernadette Bucher and Dinesh Jayaraman}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=hV97HJm7Ag} }
Good pre-trained visual representations could enable robots to learn visuomotor policy efficiently. Still, existing representations take a one-size-fits-all-tasks approach that comes with two important drawbacks: (1) Being completely task-agnostic, these representations cannot effectively ignore any task-irrelevant information in the scene, and (2) They often lack the representational capacity to handle unconstrained/complex real-world scenes. Instead, we propose to train a large combinatorial family of representations organized by scene entities: objects and object parts. This hierarchical object decomposition for task-oriented representations (HODOR) permits selectively assembling different representations specific to each task while scaling in representational capacity with the complexity of the scene and the task. In our experiments, we find that HODOR outperforms prior pre-trained representations, both scene vector representations and object-centric representations, for sample-efficient imitation learning across 5 simulated and 5 real-world manipulation tasks. We further find that the invariances captured in HODOR are inherited into downstream policies, which can robustly generalize to out-of-distribution test conditions, permitting zero-shot skill chaining. Appendix and videos: https://sites.google.com/view/hodor-corl24
Task-Oriented Hierarchical Object Decomposition for Visuomotor Control
[ "Jianing Qian", "Yunshuang Li", "Bernadette Bucher", "Dinesh Jayaraman" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/hodor-corl24
null
https://openreview.net/forum?id=gvdXE7ikHI
@inproceedings{ zhao2024aloha, title={{ALOHA} Unleashed: A Simple Recipe for Robot Dexterity}, author={Tony Z. Zhao and Jonathan Tompson and Danny Driess and Pete Florence and Seyed Kamyar Seyed Ghasemipour and Chelsea Finn and Ayzaan Wahid}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=gvdXE7ikHI} }
Recent work has shown promising results for learning end-to-end robot policies using imitation learning. In this work, we address the question of how far we can push imitation learning for challenging dexterous manipulation tasks. We show that a simple recipe of large scale data collection on the ALOHA 2 platform, combined with expressive models such as Diffusion Policies, can be effective in learning challenging bimanual manipulation tasks involving deformable objects and complex contact rich dynamics. We demonstrate our recipe on 5 challenging real-world and 3 simulated tasks and demonstrate improved performance over state-of-the-art baselines.
ALOHA Unleashed: A Simple Recipe for Robot Dexterity
[ "Tony Z. Zhao", "Jonathan Tompson", "Danny Driess", "Pete Florence", "Seyed Kamyar Seyed Ghasemipour", "Chelsea Finn", "Ayzaan Wahid" ]
Conference
Poster
2410.13126
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aloha-unleashed.github.io
null
https://openreview.net/forum?id=gqFIybpsLX
@inproceedings{ fishman2024avoid, title={Avoid Everything: Model-Free Collision Avoidance with Expert-Guided Fine-Tuning}, author={Adam Fishman and Aaron Walsman and Mohak Bhardwaj and Wentao Yuan and Balakumar Sundaralingam and Byron Boots and Dieter Fox}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=gqFIybpsLX} }
The world is full of clutter. In order to operate effectively in uncontrolled, real world spaces, robots must navigate safely by executing tasks around obstacles while in proximity to hazards. Creating safe movement for robotic manipulators remains a long-standing challenge in robotics, particularly in environments with partial observability. In partially observed settings, classical techniques often fail. Learned end-to-end motion policies can infer correct solutions in these settings, but are as-yet unable to produce reliably safe movement when close to obstacles. In this work, we introduce Avoid Everything, a novel end-to-end system for generating collision-free motion toward a target, even targets close to obstacles. Avoid Everything consists of two parts: 1) Motion Policy Transformer (M$\pi$Former), a transformer architecture for end-to-end joint space control from point clouds, trained on over 1,000,000 expert trajectories and 2) a fine-tuning procedure we call Refining on Optimized Policy Experts (ROPE), which uses optimization to provide demonstrations of safe behavior in challenging states. With these techniques, we are able to successfully solve over 63% of reaching problems that caused the previous state-of-the-art method to fail, resulting in an overall success rate of over 91% in challenging manipulation settings.
Avoid Everything: Model-Free Collision Avoidance with Expert-Guided Fine-Tuning
[ "Adam Fishman", "Aaron Walsman", "Mohak Bhardwaj", "Wentao Yuan", "Balakumar Sundaralingam", "Byron Boots", "Dieter Fox" ]
Conference
Poster
[ "https://github.com/fishbotics/avoid-everything" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://avoid-everything.github.io
null
https://openreview.net/forum?id=gqCQxObVz2
@inproceedings{ ke2024d, title={3D Diffuser Actor: Policy Diffusion with 3D Scene Representations}, author={Tsung-Wei Ke and Nikolaos Gkanatsios and Katerina Fragkiadaki}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=gqCQxObVz2} }
Diffusion policies are conditional diffusion models that learn robot action distributions conditioned on the robot and environment state. They have recently been shown to outperform both deterministic and alternative action distribution learning formulations. 3D robot policies use 3D scene feature representations aggregated from a single or multiple camera views using sensed depth. They have been shown to generalize better than their 2D counterparts across camera viewpoints. We unify these two lines of work and present 3D Diffuser Actor, a neural policy equipped with a novel 3D denoising transformer that fuses information from the 3D visual scene, a language instruction and proprioception to predict the noise in noised 3D robot pose trajectories. 3D Diffuser Actor sets a new state-of-the-art on RLBench with an absolute performance gain of 18.1% over the current SOTA on a multi-view setup and an absolute gain of 13.1% on a single-view setup. On the CALVIN benchmark, it improves over the current SOTA by a 9% relative increase. It also learns to control a robot manipulator in the real world from a handful of demonstrations. Through thorough comparisons with the current SOTA policies and ablations of our model, we show 3D Diffuser Actor's design choices dramatically outperform 2D representations, regression and classification objectives, absolute attentions, and holistic non-tokenized 3D scene embeddings.
3D Diffuser Actor: Policy Diffusion with 3D Scene Representations
[ "Tsung-Wei Ke", "Nikolaos Gkanatsios", "Katerina Fragkiadaki" ]
Conference
Poster
[ "https://github.com/nickgkan/3d_diffuser_actor" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://3d-diffuser-actor.github.io/
null
https://openreview.net/forum?id=fs7ia3FqUM
@inproceedings{ zhuang2024humanoid, title={Humanoid Parkour Learning}, author={Ziwen Zhuang and Shenzhe Yao and Hang Zhao}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=fs7ia3FqUM} }
Parkour is a grand challenge for legged locomotion, even for quadruped robots, requiring active perception and various maneuvers to overcome multiple challenging obstacles. Existing methods for humanoid locomotion either optimize a trajectory for a single parkour track or train a reinforcement learning policy only to walk with a significant amount of motion references. In this work, we propose a framework for learning an end-to-end vision-based whole-body-control parkour policy for humanoid robots that overcomes multiple parkour skills without any motion prior. Using the parkour policy, the humanoid robot can jump on a 0.42m platform, leap over hurdles, 0.8m gaps, and much more. It can also run at 1.8m/s in the wild and walk robustly on different terrains. We test our policy in indoor and outdoor environments to demonstrate that it can autonomously select parkour skills while following the rotation command of the joystick. We override the arm actions and show that this framework can easily transfer to humanoid mobile manipulation tasks. Videos can be found at https://humanoid4parkour.github.io
Humanoid Parkour Learning
[ "Ziwen Zhuang", "Shenzhe Yao", "Hang Zhao" ]
Conference
Poster
2406.10759
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://humanoid4parkour.github.io/
null
https://openreview.net/forum?id=fR1rCXjCQX
@inproceedings{ liu2024learning, title={Learning Compositional Behaviors from Demonstration and Language}, author={Weiyu Liu and Neil Nie and Jiayuan Mao and Ruohan Zhang and Jiajun Wu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=fR1rCXjCQX} }
We introduce Behavior from Language and Demonstration (BLADE), a framework for long-horizon robotic manipulation by integrating imitation learning and model-based planning. BLADE leverages language-annotated demonstrations, extracts abstract action knowledge from large language models (LLMs), and constructs a library of structured, high-level action representations. These representations include preconditions and effects grounded in visual perception for each high-level action, along with corresponding controllers implemented as neural network-based policies. BLADE can recover such structured representations automatically, without manually labeled states or symbolic definitions. BLADE shows significant capabilities in generalizing to novel situations, including novel initial states, external state perturbations, and novel goals. We validate the effectiveness of our approach both in simulation and on real robots with a diverse set of objects with articulated parts, partial observability, and geometric constraints.
Learning Compositional Behaviors from Demonstration and Language
[ "Weiyu Liu", "Neil Nie", "Ruohan Zhang", "Jiayuan Mao", "Jiajun Wu" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://blade-bot.github.io/
null
https://openreview.net/forum?id=fNBbEgcfwO
@inproceedings{ kim2024surgical, title={Surgical Robot Transformer ({SRT}): Imitation Learning for Surgical Subtasks}, author={Ji Woong Kim and Tony Z. Zhao and Samuel Schmidgall and Anton Deguet and Marin Kobilarov and Chelsea Finn and Axel Krieger}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=fNBbEgcfwO} }
We explore whether surgical manipulation tasks can be learned on the da Vinci robot via imitation learning. However, the da Vinci system presents unique challenges which hinder straightforward implementation of imitation learning. Notably, its forward kinematics is inconsistent due to imprecise joint measurements, and naively training a policy using such approximate kinematics data often leads to task failure. To overcome this limitation, we introduce a relative action formulation which enables successful policy training and deployment using its approximate kinematics data. A promising outcome of this approach is that the large repository of clinical data, which contains approximate kinematics, may be directly utilized for robot learning without further corrections. We demonstrate our findings through successful execution of three fundamental surgical tasks, including tissue manipulation, needle handling, and knot-tying.
Surgical Robot Transformer (SRT): Imitation Learning for Surgical Tasks
[ "Ji Woong Kim", "Tony Z. Zhao", "Samuel Schmidgall", "Anton Deguet", "Marin Kobilarov", "Chelsea Finn", "Axel Krieger" ]
Conference
Oral
2407.12998
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://surgical-robot-transformer.github.io/
null
https://openreview.net/forum?id=fIj88Tn3fc
@inproceedings{ hejna2024remix, title={ReMix: Optimizing Data Mixtures for Large Scale Imitation Learning}, author={Joey Hejna and Chethan Anand Bhateja and Yichen Jiang and Karl Pertsch and Dorsa Sadigh}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=fIj88Tn3fc} }
Increasingly large robotics datasets are being collected to train larger foundation models in robotics. However, despite the fact that data selection has been of utmost importance to scaling in vision and natural language processing (NLP), little work in robotics has questioned what data such models should actually be trained on. In this work we investigate how to weigh different subsets or ``domains'' of robotics datasets during pre-training to maximize worst-case performance across all possible downstream domains using distributionally robust optimization (DRO). Unlike in NLP, we find that these methods are hard to apply out of the box due to varying action spaces and dynamics across robots. Our method, ReMix, employs early stopping and action normalization and discretization to counteract these issues. Through extensive experimentation on both the Bridge and OpenX datasets, we demonstrate that data curation can have an outsized impact on downstream performance. Specifically, domain weights learned by ReMix outperform uniform weights by over 40\% on average and human-selected weights by over 20\% on datasets used to train the RT-X models.
ReMix: Optimizing Data Mixtures for Large Scale Imitation Learning
[ "Joey Hejna", "Chethan Anand Bhateja", "Yichen Jiang", "Karl Pertsch", "Dorsa Sadigh" ]
Conference
Oral
[ "https://github.com/jhejna/remix" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=fDRO4NHEwZ
@inproceedings{ huang2024virl, title={{VIRL}: Self-Supervised Visual Graph Inverse Reinforcement Learning}, author={Lei Huang and Weijia Cai and Zihan Zhu and Chen Feng and Helge Rhodin and Zhengbo Zou}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=fDRO4NHEwZ} }
Learning dense reward functions from unlabeled videos for reinforcement learning exhibits scalability due to the vast diversity and quantity of video resources. Recent works use visual features or graph abstractions in videos to measure task progress as rewards, which either deteriorate in unseen domains or capture spatial information while overlooking visual details. We propose $\textbf{V}$isual-Graph $\textbf{I}$nverse $\textbf{R}$einforcement $\textbf{L}$earning (VIRL), a self-supervised method that synergizes low-level visual features and high-level graph abstractions from frames to graph representations for reward learning. VIRL utilizes a visual encoder that extracts object-wise features for graph nodes and a graph encoder that derives properties from graphs constructed from detected objects in each frame. The encoded representations are enforced to align videos temporally and reconstruct in-scene objects. The pretrained visual graph encoder is then utilized to construct a dense reward function for policy learning by measuring latent distances between current frames and the goal frame. Our empirical evaluation on the X-MAGICAL and Robot Visual Pusher benchmark demonstrates that VIRL effectively handles tasks necessitating both granular visual attention and broader global feature consideration, and exhibits robust generalization to $\textit{extrapolation}$ tasks and domains not seen in demonstrations. Our policy for the robotic task also achieves the highest success rate in real-world robot experiments.
VIRL: Self-Supervised Visual Graph Inverse Reinforcement Learning
[ "Lei Huang", "Weijia Cai", "Zihan Zhu", "Chen Feng", "Helge Rhodin", "Zhengbo Zou" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://leihhhuang.github.io/VIRL/
null
https://openreview.net/forum?id=fCDOfpTCzZ
@inproceedings{ long2024instructnav, title={InstructNav: Zero-shot System for Generic Instruction Navigation in Unexplored Environment}, author={Yuxing Long and Wenzhe Cai and Hongcheng Wang and Guanqi Zhan and Hao Dong}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=fCDOfpTCzZ} }
Enabling robots to navigate following diverse language instructions in unexplored environments is an attractive goal for human-robot interaction. However, this goal is challenging because different navigation tasks require different strategies. The scarcity of instruction navigation data hinders training an instruction navigation model with varied strategies. Therefore, previous methods are all constrained to one specific type of navigation instruction. In this work, we propose InstructNav, a generic instruction navigation system. InstructNav makes the first endeavor to handle various instruction navigation tasks without any navigation training or pre-built maps. To reach this goal, we introduce Dynamic Chain-of-Navigation (DCoN) to unify the planning process for different types of navigation instructions. Furthermore, we propose Multi-sourced Value Maps to model key elements in instruction navigation so that linguistic DCoN planning can be converted into robot actionable trajectories. With InstructNav, we complete the R2R-CE task in a zero-shot way for the first time and outperform many task-training methods. Besides, InstructNav also surpasses the previous SOTA method by 10.48% on the zero-shot Habitat ObjNav and by 86.34% on demand-driven navigation DDN. Real robot experiments on diverse indoor scenes further demonstrate our method's robustness in coping with the environment and instruction variations.
InstructNav: Zero-shot System for Generic Instruction Navigation in Unexplored Environment
[ "Yuxing Long", "Wenzhe Cai", "Hongcheng Wang", "Guanqi Zhan", "Hao Dong" ]
Conference
Poster
2406.04882
[ "https://github.com/LYX0501/InstructNav" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/instructnav
null
https://openreview.net/forum?id=fC0wWeXsVm
@inproceedings{ tirumala2024learning, title={Learning Robot Soccer from Egocentric Vision with Deep Reinforcement Learning}, author={Dhruva Tirumala and Markus Wulfmeier and Ben Moran and Sandy Huang and Jan Humplik and Guy Lever and Tuomas Haarnoja and Leonard Hasenclever and Arunkumar Byravan and Nathan Batchelor and Neil sreendra and Kushal Patel and Marlon Gwira and Francesco Nori and Martin Riedmiller and Nicolas Heess}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=fC0wWeXsVm} }
We apply multi-agent deep reinforcement learning (RL) to train end-to-end robot soccer policies with fully onboard computation and sensing via egocentric RGB vision. This setting reflects many challenges of real-world robotics, including active perception, agile full-body control, and long-horizon planning in a dynamic, partially-observable, multi-agent domain. We rely on large-scale, simulation-based data generation to obtain complex behaviors from egocentric vision which can be successfully transferred to physical robots using low-cost sensors. To achieve adequate visual realism, our simulation combines rigid-body physics with learned, realistic rendering via multiple Neural Radiance Fields (NeRFs). We combine teacher-based multi-agent RL and cross-experiment data reuse to enable the discovery of sophisticated soccer strategies. We analyze active-perception behaviors including object tracking and ball seeking that emerge when simply optimizing perception-agnostic soccer play. The agents display equivalent levels of performance and agility as policies with access to privileged, ground-truth state. To our knowledge, this paper constitutes a first demonstration of end-to-end training for multi-agent robot soccer, mapping raw pixel observations to joint-level actions that can be deployed in the real world.
Learning Robot Soccer from Egocentric Vision with Deep Reinforcement Learning
[ "Dhruva Tirumala", "Markus Wulfmeier", "Ben Moran", "Sandy Huang", "Jan Humplik", "Guy Lever", "Tuomas Haarnoja", "Leonard Hasenclever", "Arunkumar Byravan", "Nathan Batchelor", "Neil sreendra", "Kushal Patel", "Marlon Gwira", "Francesco Nori", "Martin Riedmiller", "Nicolas Heess" ]
Conference
Oral
2405.02425
[ "" ]
https://huggingface.co/papers/2405.02425
0
0
1
16
[]
[]
[]
[]
[]
[]
1
https://sites.google.com/view/vision-soccer
null
https://openreview.net/forum?id=evCXwlCMIi
@inproceedings{ levy2024learning, title={Learning to Walk from Three Minutes of Data with Semi-structured Dynamics Models}, author={Jacob Levy and Tyler Westenbroek and David Fridovich-Keil}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=evCXwlCMIi} }
Traditionally, model-based reinforcement learning (MBRL) methods exploit neural networks as flexible function approximators to represent $\textit{a priori}$ unknown environment dynamics. However, training data are typically scarce in practice, and these black-box models often fail to generalize. Modeling architectures that leverage known physics can substantially reduce the complexity of system-identification, but break down in the face of complex phenomena such as contact. We introduce a novel framework for learning semi-structured dynamics models for contact-rich systems which seamlessly integrates structured first principles modeling techniques with black-box auto-regressive models. Specifically, we develop an ensemble of probabilistic models to estimate external forces, conditioned on historical observations and actions, and integrate these predictions using known Lagrangian dynamics. With this semi-structured approach, we can make accurate long-horizon predictions with substantially less data than prior methods. We leverage this capability and propose Semi-Structured Reinforcement Learning ($\texttt{SSRL}$), a simple model-based learning framework which pushes the sample complexity boundary for real-world learning. We validate our approach on a real-world Unitree Go1 quadruped robot, learning dynamic gaits -- from scratch -- on both hard and soft surfaces with just a few minutes of real-world data. Video and code are available at: https://sites.google.com/utexas.edu/ssrl
Learning to Walk from Three Minutes of Real-World Data with Semi-structured Dynamics Models
[ "Jacob Levy", "Tyler Westenbroek", "David Fridovich-Keil" ]
Conference
Poster
2410.09163
[ "https://github.com/CLeARoboticsLab/ssrl" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/utexas.edu/ssrl
null
https://openreview.net/forum?id=eeoX7tCoK2
@inproceedings{ khurana2024shelfsupervised, title={Shelf-Supervised Multi-Modal Pre-Training for 3D Object Detection}, author={Mehar Khurana and Neehar Peri and James Hays and Deva Ramanan}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=eeoX7tCoK2} }
State-of-the-art 3D object detectors are often trained on massive labeled datasets. However, annotating 3D bounding boxes remains prohibitively expensive and time-consuming, particularly for LiDAR. Instead, recent works demonstrate that self-supervised pre-training with unlabeled data can improve detection accuracy with limited labels. Contemporary methods adapt best-practices for self-supervised learning from the image domain to point clouds (such as contrastive learning). However, publicly available 3D datasets are considerably smaller and less diverse than those used for image-based self-supervised learning, limiting their effectiveness. We do note, however, that such data is naturally collected in a multimodal fashion, often paired with images. Rather than pre-training with only self-supervised objectives, we argue that it is better to bootstrap point cloud representations using image-based foundation models trained on internet-scale image data. Specifically, we propose a shelf-supervised approach (e.g. supervised with off-the-shelf image foundation models) for generating zero-shot 3D bounding boxes from paired RGB and LiDAR data. Pre-training 3D detectors with such pseudo-labels yields significantly better semi-supervised detection accuracy than prior self-supervised pretext tasks. Importantly, we show that image-based shelf-supervision is helpful for training LiDAR-only and multi-modal (RGB + LiDAR) detectors. We demonstrate the effectiveness of our approach on nuScenes and WOD, significantly improving over prior work in limited data settings.
Shelf-Supervised Cross-Modal Pre-Training for 3D Object Detection
[ "Mehar Khurana", "Neehar Peri", "James Hays", "Deva Ramanan" ]
Conference
Poster
2406.10115
[ "https://github.com/meharkhurana03/cm3d" ]
https://huggingface.co/papers/2111.08276
0
0
0
3
[ "VDebugger/xvlm_retrieval_mscoco" ]
[]
[]
[ "VDebugger/xvlm_retrieval_mscoco" ]
[]
[]
1
https://meharkhurana03.github.io/cm3d/
null
https://openreview.net/forum?id=edP2dmingV
@inproceedings{ abdul-raouf2024large, title={Large Scale Mapping of Indoor Magnetic Field by Local and Sparse Gaussian Processes}, author={Iad ABDUL-RAOUF and Vincent Gay-Bellile and Cyril JOLY and Steve Bourgeois and Alexis Paljic}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=edP2dmingV} }
Magnetometer-based indoor navigation uses variations in the magnetic field to determine the robot's location. For that, a magnetic map of the environment has to be built beforehand from a collection of localized magnetic measurements. Existing solutions built on sparse Gaussian Process (GP) regression do not scale well to large environments, being either slow or resulting in discontinuous prediction. In this paper, we propose to model the magnetic field of large environments based on GP regression. We first modify a deterministic training conditional sparse GP by accounting for magnetic field physics to map small environments efficiently. We then scale the model on larger scenes by introducing a local expert aggregation framework. It splits the scene into subdomains, fits a local expert on each, and then aggregates expert predictions in a differentiable and probabilistic way. We evaluate our model on real and simulated data and show that we can smoothly map a three-story building in a few hundred milliseconds.
Large Scale Mapping of Indoor Magnetic Field by Local and Sparse Gaussian Processes
[ "Iad ABDUL-RAOUF", "Vincent Gay-Bellile", "Cyril JOLY", "Steve Bourgeois", "Alexis Paljic" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://github.com/CEA-LIST/large-scale-magnetic-mapping
null
https://openreview.net/forum?id=eU5E0oTtpS
@inproceedings{ zhang2024tag, title={Tag Map: A Text-Based Map for Spatial Reasoning and Navigation with Large Language Models}, author={Mike Zhang and Kaixian Qu and Vaishakh Patil and Cesar Cadena and Marco Hutter}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=eU5E0oTtpS} }
Large Language Models (LLM) have emerged as a tool for robots to generate task plans using common sense reasoning. For the LLM to generate actionable plans, scene context must be provided, often through a map. Recent works have shifted from explicit maps with fixed semantic classes to implicit open vocabulary maps based on queryable embeddings capable of representing any semantic class. However, embeddings cannot directly report the scene context as they are implicit, requiring further processing for LLM integration. To address this, we propose an explicit text-based map that can represent thousands of semantic classes while easily integrating with LLMs due to their text-based nature by building upon large-scale image recognition models. We study how entities in our map can be localized and show through evaluations that our text-based map localizations perform comparably to those from open vocabulary maps while using two to four orders of magnitude less memory. Real-robot experiments demonstrate the grounding of an LLM with the text-based map to solve user tasks.
Tag Map: A Text-Based Map for Spatial Reasoning and Navigation with Large Language Models
[ "Mike Zhang", "Kaixian Qu", "Vaishakh Patil", "Cesar Cadena", "Marco Hutter" ]
Conference
Poster
2409.15451
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://tag-mapping.github.io
null
https://openreview.net/forum?id=eTRncsYYdv
@inproceedings{ koirala2024solving, title={Solving Offline Reinforcement Learning with Decision Tree Regression}, author={Prajwal Koirala and Cody Fleming}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=eTRncsYYdv} }
This study presents a novel approach to addressing offline reinforcement learning (RL) problems by reframing them as regression tasks that can be effectively solved using Decision Trees. Mainly, we introduce two distinct frameworks: return-conditioned and return-weighted decision tree policies (RCDTP and RWDTP), both of which achieve notable speed in agent training as well as inference, with training typically lasting less than a few minutes. Despite the simplification inherent in this reformulated approach to offline RL, our agents demonstrate performance that is at least on par with the established methods. We evaluate our methods on D4RL datasets for locomotion and manipulation, as well as other robotic tasks involving wheeled and flying robots. Additionally, we assess performance in delayed/sparse reward scenarios and highlight the explainability of these policies through action distribution and feature importance.
Solving Offline Reinforcement Learning with Decision Tree Regression
[ "Prajwal Koirala", "Cody Fleming" ]
Conference
Poster
2401.11630
[ "https://github.com/PrajwalKoirala/Offline-Reinforcement-Learning-with-Decision-Tree-Regression/tree/main" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=eJHy0AF5TO
@inproceedings{ gao2024riemann, title={Ri{EM}ann: Near Real-Time {SE}(3)-Equivariant Robot Manipulation without Point Cloud Segmentation}, author={Chongkai Gao and Zhengrong Xue and Shuying Deng and Tianhai Liang and Siqi Yang and Lin Shao and Huazhe Xu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=eJHy0AF5TO} }
We present RiEMann, an end-to-end near Real-time SE(3)-Equivariant Robot Manipulation imitation learning framework from scene point cloud input. Compared to previous methods that rely on descriptor field matching, RiEMann directly predicts the target actions for manipulation without any object segmentation. RiEMann can efficiently train the visuomotor policy from scratch with 5 to 10 demonstrations for a manipulation task, generalizes to unseen SE(3) transformations and instances of target objects, resists visual interference of distracting objects, and follows the near real-time pose change of the target object. The scalable SE(3)-equivariant action space of RiEMann supports both pick-and-place tasks and articulated object manipulation tasks. In simulation and real-world 6-DOF robot manipulation experiments, we test RiEMann on 5 categories of manipulation tasks with a total of 25 variants and show that RiEMann outperforms baselines in both task success rates and SE(3) geodesic distance errors (reduced by 68.6%), and achieves 5.4 frames per second (fps) network inference speed.
RiEMann: Near Real-Time SE(3)-Equivariant Robot Manipulation without Point Cloud Segmentation
[ "Chongkai Gao", "Zhengrong Xue", "Shuying Deng", "Tianhai Liang", "Siqi Yang", "Lin Shao", "Huazhe Xu" ]
Conference
Poster
2403.19460
[ "https://github.com/HeegerGao/RiEMann" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://riemann-web.github.io/
null
https://openreview.net/forum?id=dsxmR6lYlg
@inproceedings{ ye2024reinforcement, title={Reinforcement Learning with Foundation Priors: Let Embodied Agent Efficiently Learn on Its Own}, author={Weirui Ye and Yunsheng Zhang and Haoyang Weng and Xianfan Gu and Shengjie Wang and Tong Zhang and Mengchen Wang and Pieter Abbeel and Yang Gao}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=dsxmR6lYlg} }
Reinforcement learning (RL) is a promising approach for solving robotic manipulation tasks. However, it is challenging to apply the RL algorithms directly in the real world. For one thing, RL is data-intensive and typically requires millions of interactions with environments, which are impractical in real scenarios. For another, it is necessary to make heavy engineering efforts to design reward functions manually. To address these issues, we leverage foundation models in this paper. We propose Reinforcement Learning with Foundation Priors (RLFP) to utilize guidance and feedback from policy, value, and success-reward foundation models. Within this framework, we introduce the Foundation-guided Actor-Critic (FAC) algorithm, which enables embodied agents to explore more efficiently with automatic reward functions. The benefits of our framework are threefold: (1) \textit{sample efficient}; (2) \textit{minimal and effective reward engineering}; (3) \textit{agnostic to foundation model forms and robust to noisy priors}. Our method achieves remarkable performances in various manipulation tasks on both real robots and in simulation. Across 5 dexterous tasks with real robots, FAC achieves an average success rate of 86\% after one hour of real-time learning. Across 8 tasks in the simulated Meta-world, FAC achieves 100\% success rates in 7/8 tasks under less than 100k frames (about 1-hour training), outperforming baseline methods with manual-designed rewards in 1M frames. We believe the RLFP framework can enable future robots to explore and learn autonomously in the physical world for more tasks.
Reinforcement Learning with Foundation Priors: Let Embodied Agent Efficiently Learn on Its Own
[ "Weirui Ye", "Yunsheng Zhang", "Haoyang Weng", "Xianfan Gu", "Shengjie Wang", "Tong Zhang", "Mengchen Wang", "Pieter Abbeel", "Yang Gao" ]
Conference
Oral
[ "https://github.com/YeWR/RLFP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://yewr.github.io/rlfp
null
https://openreview.net/forum?id=deywgeWmL5
@inproceedings{ bae2024tldr, title={{TLDR}: Unsupervised Goal-Conditioned {RL} via Temporal Distance-Aware Representations}, author={Junik Bae and Kwanyoung Park and Youngwoon Lee}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=deywgeWmL5} }
Unsupervised goal-conditioned reinforcement learning (GCRL) is a promising paradigm for developing diverse robotic skills without external supervision. However, existing unsupervised GCRL methods often struggle to cover a wide range of states in complex environments due to their limited exploration and sparse or noisy rewards for GCRL. To overcome these challenges, we propose a novel unsupervised GCRL method that leverages TemporaL Distance-aware Representations (TLDR). Based on temporal distance, TLDR selects faraway goals to initiate exploration and computes intrinsic exploration rewards and goal-reaching rewards. Specifically, our exploration policy seeks states with large temporal distances (i.e. covering a large state space), while the goal-conditioned policy learns to minimize the temporal distance to the goal (i.e. reaching the goal). Our results in six simulated locomotion environments demonstrate that TLDR significantly outperforms prior unsupervised GCRL methods in achieving a wide range of states.
TLDR: Unsupervised Goal-Conditioned RL via Temporal Distance-Aware Representations
[ "Junik Bae", "Kwanyoung Park", "Youngwoon Lee" ]
Conference
Poster
2407.08464
[ "https://github.com/heatz123/tldr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://heatz123.github.io/tldr/
null
https://openreview.net/forum?id=dXSGw7Cy55
@inproceedings{ anwar2024contrast, title={Contrast Sets for Evaluating Language-Guided Robot Policies}, author={Abrar Anwar and Rohan Gupta and Jesse Thomason}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=dXSGw7Cy55} }
Robot evaluations in language-guided, real world settings are time-consuming and often sample only a small space of potential instructions across complex scenes. In this work, we introduce contrast sets for robotics as an approach to make small, but specific, perturbations to otherwise independent, identically distributed (i.i.d.) test instances. We investigate the relationship between experimenter effort to carry out an evaluation and the resulting estimated test performance as well as the insights that can be drawn from performance on perturbed instances. We use contrast sets to characterize policies at reduced experimenter effort in both a simulated manipulation task and a physical robot vision-and-language navigation task. We encourage the use of contrast set evaluations as a more informative alternative to small scale, i.i.d. demonstrations on physical robots, and as a scalable alternative to industry-scale real world evaluations.
Contrast Sets for Evaluating Language-Guided Robot Policies
[ "Abrar Anwar", "Rohan Gupta", "Jesse Thomason" ]
Conference
Poster
2406.13636
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=dUo6j3YURS
@inproceedings{ wang2024mosaic, title={{MOSAIC}: Modular Foundation Models for Assistive and Interactive Cooking}, author={Huaxiaoyue Wang and Kushal Kedia and Juntao Ren and Rahma Abdullah and Atiksh Bhardwaj and Angela Chao and Kelly Y Chen and Nathaniel Chin and Prithwish Dan and Xinyi Fan and Gonzalo Gonzalez-Pumariega and Aditya Kompella and Maximus Adrian Pace and Yash Sharma and Xiangwan Sun and Neha Sunkara and Sanjiban Choudhury}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=dUo6j3YURS} }
We present MOSAIC, a modular architecture for coordinating multiple robots to (a) interact with users using natural language and (b) manipulate an open vocabulary of everyday objects. At several levels, MOSAIC employs modularity: it leverages multiple large-scale pre-trained models for high-level tasks like language and image recognition, while using streamlined modules designed for low-level task-specific control. This decomposition allows us to reap the complementary benefits of foundation models and precise, more specialized models, enabling our system to scale to complex tasks that involve coordinating multiple robots and humans. First, we unit-test individual modules with 180 episodes of visuomotor picking, 60 episodes of human motion forecasting, and 46 online user evaluations of the task planner. We then extensively evaluate MOSAIC with 60 end-to-end trials. We discuss crucial design decisions, limitations of the current system, and open challenges in this domain.
MOSAIC: Modular Foundation Models for Assistive and Interactive Cooking
[ "Huaxiaoyue Wang", "Kushal Kedia", "Juntao Ren", "Rahma Abdullah", "Atiksh Bhardwaj", "Angela Chao", "Kelly Y Chen", "Nathaniel Chin", "Prithwish Dan", "Xinyi Fan", "Gonzalo Gonzalez-Pumariega", "Aditya Kompella", "Maximus Adrian Pace", "Yash Sharma", "Xiangwan Sun", "Neha Sunkara", "Sanjiban Choudhury" ]
Conference
Poster
[ "https://github.com/portal-cornell/MOSAIC/" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://portal-cornell.github.io/MOSAIC/
null
https://openreview.net/forum?id=cvVEkS5yij
@inproceedings{ wei2024metacontrol, title={Meta-Control: Automatic Model-based Control Synthesis for Heterogeneous Robot Skills}, author={Tianhao Wei and Liqian Ma and Rui Chen and Weiye Zhao and Changliu Liu}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=cvVEkS5yij} }
The requirements for real-world manipulation tasks are diverse and often conflicting; some tasks require precise motion while others require force compliance; some tasks require avoidance of certain regions while others require convergence to certain states. Satisfying these varied requirements with a fixed state-action representation and control strategy is challenging, impeding the development of a universal robotic foundation model. In this work, we propose Meta-Control, the first LLM-enabled automatic control synthesis approach that creates customized state representations and control strategies tailored to specific tasks. Our core insight is that a meta-control system can be built to automate the thought process that human experts use to design control systems. Specifically, human experts heavily use a model-based, hierarchical (from abstract to concrete) thought model, then compose various dynamic models and controllers together to form a control system. Meta-Control mimics the thought model and harnesses LLM's extensive control knowledge with Socrates' "art of midwifery" to automate the thought process. Meta-Control stands out for its fully model-based nature, allowing rigorous analysis, generalizability, robustness, efficient parameter tuning, and reliable real-time execution.
Meta-Control: Automatic Model-based Control Synthesis for Heterogeneous Robot Skills
[ "Tianhao Wei", "Liqian Ma", "Rui Chen", "Weiye Zhao", "Changliu Liu" ]
Conference
Poster
2405.11380
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
meta-control-paper.github.io
null
https://openreview.net/forum?id=cvUXoou8iz
@inproceedings{ zhou2024spire, title={{SPIRE}: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation}, author={Zihan Zhou and Animesh Garg and Dieter Fox and Caelan Reed Garrett and Ajay Mandlekar}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=cvUXoou8iz} }
Robot learning has proven to be a general and effective technique for programming manipulators. Imitation learning is able to teach robots solely from human demonstrations but is bottlenecked by the capabilities of the demonstrations. Reinforcement learning uses exploration to discover better behaviors; however, the space of possible improvements can be too large to start from scratch. And for both techniques, the learning difficulty increases proportional to the length of the manipulation task. Accounting for this, we propose SPIRE, a system that first uses Task and Motion Planning (TAMP) to decompose tasks into smaller learning subproblems and second combines imitation and reinforcement learning to maximize their strengths. We develop novel strategies to train learning agents when deployed in the context of a planning system. We evaluate SPIRE on a suite of long-horizon and contact-rich robot manipulation problems. We find that SPIRE outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance, is 6 times more data efficient in the number of human demonstrations needed to train proficient agents, and learns to complete tasks nearly twice as efficiently. View https://sites.google.com/view/spire-corl-2024 for more details.
SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation
[ "Zihan Zhou", "Animesh Garg", "Dieter Fox", "Caelan Reed Garrett", "Ajay Mandlekar" ]
Conference
Poster
2410.18065
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://sites.google.com/view/spire-corl-2024
null
https://openreview.net/forum?id=cvAIaS6V2I
@inproceedings{ iyer2024open, title={{OPEN} {TEACH}: A Versatile Teleoperation System for Robotic Manipulation}, author={Aadhithya Iyer and Zhuoran Peng and Yinlong Dai and Irmak Guzey and Siddhant Haldar and Soumith Chintala and Lerrel Pinto}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=cvAIaS6V2I} }
Open-sourced, user-friendly tools form the bedrock of scientific advancement across disciplines. The widespread adoption of data-driven learning has led to remarkable progress in multi-fingered dexterity, bimanual manipulation, and applications ranging from logistics to home robotics. However, existing data collection platforms are often proprietary, costly, or tailored to specific robotic morphologies. We present OPEN TEACH, a new teleoperation system leveraging VR headsets to immerse users in mixed reality for intuitive robot control. Built on the affordable Meta Quest 3, which costs $500, OPEN TEACH enables real-time control of various robots, including multi-fingered hands, bimanual arms, and mobile manipulators, through an easy-to-use app. Using natural hand gestures and movements, users can manipulate robots at up to 90Hz with smooth visual feedback and interface widgets offering closeup environment views. We demonstrate the versatility of OPEN TEACH across 38 tasks on different robots. A comprehensive user study indicates significant improvement in teleoperation capability over the AnyTeleop framework. Further experiments exhibit that the collected data is compatible with policy learning on 10 dexterous and contact-rich manipulation tasks. Currently supporting Franka, xArm, Jaco, Allegro, and Hello Stretch platforms, OPEN TEACH is fully open-sourced to promote broader adoption. Videos are available at https://anon-open-teach.github.io/.
OPEN TEACH: A Versatile Teleoperation System for Robotic Manipulation
[ "Aadhithya Iyer", "Zhuoran Peng", "Yinlong Dai", "Irmak Guzey", "Siddhant Haldar", "Soumith Chintala", "Lerrel Pinto" ]
Conference
Poster
2403.07870
[ "https://github.com/aadhithya14/Open-Teach" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://open-teach.github.io/
null
https://openreview.net/forum?id=ctzBccpolr
@inproceedings{ chen2024roviaug, title={RoVi-Aug: Robot and Viewpoint Augmentation for Cross-Embodiment Robot Learning}, author={Lawrence Yunliang Chen and Chenfeng Xu and Karthik Dharmarajan and Kurt Keutzer and Masayoshi Tomizuka and Quan Vuong and Ken Goldberg}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=ctzBccpolr} }
Scaling up robot learning requires large and diverse datasets, and how to efficiently reuse collected data and transfer policies to new embodiments remains an open question. Emerging research such as the Open-X Embodiment (OXE) project has shown promise in leveraging skills by combining datasets including different robots. However, imbalances in the distribution of robot types and camera angles in many datasets make policies prone to overfit. To mitigate this issue, we propose RoVi-Aug, which leverages state-of-the-art image-to-image generative models to augment robot data by synthesizing demonstrations with different robots and camera views. Through extensive physical experiments, we show that, by training on robot- and viewpoint-augmented data, RoVi-Aug can zero-shot deploy on an unseen robot with significantly different camera angles. Compared to test-time adaptation algorithms such as Mirage, RoVi-Aug requires no extra processing at test time, does not assume known camera angles, and allows policy fine-tuning. Moreover, by co-training on both the original and augmented robot datasets, RoVi-Aug can learn multi-robot and multi-task policies, enabling more efficient transfer between robots and skills and improving success rates by up to 30%.
RoVi-Aug: Robot and Viewpoint Augmentation for Cross-Embodiment Robot Learning
[ "Lawrence Yunliang Chen", "Chenfeng Xu", "Karthik Dharmarajan", "Richard Cheng", "Kurt Keutzer", "Masayoshi Tomizuka", "Quan Vuong", "Ken Goldberg" ]
Conference
Oral
2409.03403
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://rovi-aug.github.io/
null
https://openreview.net/forum?id=cq2uB30uBM
@inproceedings{ kim2024preemptive, title={Pre-emptive Action Revision by Environmental Feedback for Embodied Instruction Following Agents}, author={Jinyeon Kim and Cheolhong Min and Byeonghwi Kim and Jonghyun Choi}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=cq2uB30uBM} }
When we, humans, perform a task, we account for changes in the environment, such as the rearrangement of objects caused by our interactions or other factors; e.g., when we find a mug to clean, if it is already clean, we skip cleaning it. But even state-of-the-art embodied agents often ignore changed environments when performing a task, leading to failure to complete the task, execution of unnecessary actions, or fixing mistakes only after they are made. Here, we propose Pre-emptive Action Revision by Environmental feeDback (PRED) that allows an embodied agent to revise its actions in response to the perceived environmental status before it makes mistakes. We empirically validate PRED and observe that it outperforms the prior art on two challenging benchmarks in the virtual environment, TEACh and ALFRED, by noticeable margins in most metrics, including unseen success rates, with shorter execution time, implying an efficiently behaved agent. Furthermore, we demonstrate the effectiveness of the proposed method with real robot experiments.
Pre-emptive Action Revision by Environmental Feedback for Embodied Instruction Following Agents
[ "Jinyeon Kim", "Cheolhong Min", "Byeonghwi Kim", "Jonghyun Choi" ]
Conference
Poster
[ "https://github.com/snumprlab/pred" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://pred-agent.github.io/
null
https://openreview.net/forum?id=cocHfT7CEs
@inproceedings{ shridhar2024generative, title={Generative Image as Action Models}, author={Mohit Shridhar and Yat Long Lo and Stephen James}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=cocHfT7CEs} }
Image-generation diffusion models have been fine-tuned to unlock new capabilities such as image-editing and novel view synthesis. Can we similarly unlock image-generation models for visuomotor control? We present GENIMA, a behavior-cloning agent that fine-tunes Stable Diffusion to “draw joint-actions” as targets on RGB images. These images are fed into a controller that maps the visual targets into a sequence of joint-positions. We study GENIMA on 25 RLBench and 9 real-world manipulation tasks. We find that, by lifting actions into image-space, internet pre-trained diffusion models can generate policies that outperform state-of-the-art visuomotor approaches, especially in robustness to scene perturbations and generalizing to novel objects. Our method is also competitive with 3D agents, despite lacking priors such as depth, keypoints, or motion-planners.
Generative Image as Action Models
[ "Mohit Shridhar", "Yat Long Lo", "Stephen James" ]
Conference
Poster
2407.07875
[ "https://github.com/MohitShridhar/genima" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://genima-robot.github.io/
null
https://openreview.net/forum?id=clqzoCrulY
@inproceedings{ hu2024orbitgrasp, title={OrbitGrasp: {SE}(3)-Equivariant Grasp Learning}, author={Boce Hu and Xupeng Zhu and Dian Wang and Zihao Dong and Haojie Huang and Chenghao Wang and Robin Walters and Robert Platt}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=clqzoCrulY} }
While grasp detection is an important part of any robotic manipulation pipeline, reliable and accurate grasp detection in $\mathrm{SE}(3)$ remains a research challenge. Many robotics applications in unstructured environments such as the home or warehouse would benefit significantly from improved grasp performance. This paper proposes a novel framework for detecting $\mathrm{SE}(3)$ grasp poses based on point cloud input. Our main contribution is to propose an $\mathrm{SE}(3)$-equivariant model that maps each point in the cloud to a continuous grasp quality function over the 2-sphere $S^2$ using a spherical harmonic basis. Compared with reasoning about a finite set of samples, this formulation improves the accuracy and efficiency of our model when a large number of samples would otherwise be needed. In order to accomplish this, we propose a novel variation on EquiFormerV2 that leverages a UNet-style backbone to enlarge the number of points the model can handle. Our resulting method, which we name OrbitGrasp, significantly outperforms baselines in both simulation and physical experiments.
OrbitGrasp: SE(3)-Equivariant Grasp Learning
[ "Boce Hu", "Xupeng Zhu", "Dian Wang", "Zihao Dong", "Haojie Huang", "Chenghao Wang", "Robin Walters", "Robert Platt" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://orbitgrasp.github.io/
null
https://openreview.net/forum?id=cT2N3p1AcE
@inproceedings{ liu2024visual, title={Visual Whole-Body Control for Legged Loco-Manipulation}, author={Minghuan Liu and Zixuan Chen and Xuxin Cheng and Yandong Ji and Ri-Zhao Qiu and Ruihan Yang and Xiaolong Wang}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=cT2N3p1AcE} }
We study the problem of mobile manipulation using legged robots equipped with an arm, namely legged loco-manipulation. The robot legs, while usually utilized for mobility, offer an opportunity to amplify the robot's manipulation capabilities by conducting whole-body control. That is, the robot can control the legs and the arm at the same time to extend its workspace. We propose a framework that conducts whole-body control autonomously with visual observations. Our approach, namely Visual Whole-Body Control (VBC), is composed of a low-level policy using all degrees of freedom to track the body velocities along with the end-effector position, and a high-level policy proposing the velocities and end-effector position based on visual inputs. We train both levels of policies in simulation and perform Sim2Real transfer for real robot deployment. We perform extensive experiments and show significant improvements over baselines in picking up diverse objects in different configurations (heights, locations, orientations) and environments.
Visual Whole-Body Control for Legged Loco-Manipulation
[ "Minghuan Liu", "Zixuan Chen", "Xuxin Cheng", "Yandong Ji", "Ri-Zhao Qiu", "Ruihan Yang", "Xiaolong Wang" ]
Conference
Oral
2403.16967
[ "https://github.com/Ericonaldo/visual_wholebody" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://wholebody-b1.github.io/
null
https://openreview.net/forum?id=cNI0ZkK1yC
@inproceedings{ xu2024flow, title={Flow as the Cross-domain Manipulation Interface}, author={Mengda Xu and Zhenjia Xu and Yinghao Xu and Cheng Chi and Gordon Wetzstein and Manuela Veloso and Shuran Song}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=cNI0ZkK1yC} }
We present Im2Flow2Act, a scalable learning framework that enables robots to acquire real-world manipulation skills without the need for real-world robot training data. The key idea behind Im2Flow2Act is to use object flow as the manipulation interface, bridging domain gaps between different embodiments (i.e., human and robot) and training environments (i.e., real-world and simulated). Im2Flow2Act comprises two components: a flow generation network and a flow-conditioned policy. The flow generation network, trained on human demonstration videos, generates object flow from the initial scene image, conditioned on the task description. The flow-conditioned policy, trained on simulated robot play data, maps the generated object flow to robot actions to realize the desired object movements. By using flow as input, this policy can be directly deployed in the real world with a minimal sim-to-real gap. By leveraging real-world human videos and simulated robot play data, we bypass the challenges of teleoperating physical robots in the real world, resulting in a scalable system for diverse tasks. We demonstrate Im2Flow2Act's capabilities in a variety of real-world tasks, including the manipulation of rigid, articulated, and deformable objects.
Flow as the Cross-domain Manipulation Interface
[ "Mengda Xu", "Zhenjia Xu", "Yinghao Xu", "Cheng Chi", "Gordon Wetzstein", "Manuela Veloso", "Shuran Song" ]
Conference
Poster
2407.15208
[ "https://github.com/real-stanford/im2Flow2Act" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://im-flow-act.github.io
null
https://openreview.net/forum?id=cGswIOxHcN
@inproceedings{ yu2024lucidsim, title={LucidSim: Learning Agile Visual Locomotion from Generated Images}, author={Alan Yu and Ge Yang and Ran Choi and Yajvan Ravan and John Leonard and Phillip Isola}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=cGswIOxHcN} }
Fast and accurate physics simulation is an essential component of robot learning, where robots can explore failure scenarios that are difficult to produce in the real world and learn from unlimited on-policy data. Yet, it remains challenging to incorporate RGB-color perception into the sim-to-real pipeline that matches the real world in its richness and realism. In this work, we train a robot dog in simulation for visual parkour. We propose a way to use generative models to synthesize diverse and physically accurate image sequences of the scene from the robot's ego-centric perspective. We present demonstrations of zero-shot transfer to the RGB-only observations of the real world on a robot equipped with a low-cost, off-the-shelf color camera.
Learning Visual Parkour from Generated Images
[ "Alan Yu", "Ge Yang", "Ran Choi", "Yajvan Ravan", "John Leonard", "Phillip Isola" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://lucidsim.github.io
null
https://openreview.net/forum?id=cDXnnOhNrF
@inproceedings{ dixit2024perceive, title={Perceive With Confidence: Statistical Safety Assurances for Navigation with Learning-Based Perception}, author={Anushri Dixit and Zhiting Mei and Meghan Booker and Mariko Storey-Matsutani and Allen Z. Ren and Anirudha Majumdar}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=cDXnnOhNrF} }
Rapid advances in perception have enabled large pre-trained models to be used out of the box for transforming high-dimensional, noisy, and partial observations of the world into rich occupancy representations. However, the reliability of these models and consequently their safe integration onto robots remains unknown when deployed in environments unseen during training. In this work, we address this challenge by rigorously quantifying the uncertainty of pre-trained perception systems for object detection via a novel calibration technique based on conformal prediction. Crucially, this procedure guarantees robustness to distribution shifts in states when perceptual outputs are used in conjunction with a planner. As a result, the calibrated perception system can be used in combination with any safe planner to provide an end-to-end statistical assurance on safety in unseen environments. We evaluate the resulting approach, Perceive with Confidence (PwC), with experiments in simulation and on hardware where a quadruped robot navigates through previously unseen indoor, static environments. These experiments validate the safety assurances for obstacle avoidance provided by PwC and demonstrate up to 40% improvements in empirical safety compared to baselines.
Perceive With Confidence: Statistical Safety Assurances for Navigation with Learning-Based Perception
[ "Anushri Dixit", "Zhiting Mei", "Meghan Booker", "Mariko Storey-Matsutani", "Allen Z. Ren", "Anirudha Majumdar" ]
Conference
Poster
2403.08185
[ "https://github.com/irom-lab/perception-guarantees" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://perceive-with-confidence.github.io/
null
https://openreview.net/forum?id=bt0PX0e4rE
@inproceedings{ xing2024bootstrapping, title={Bootstrapping Reinforcement Learning with Imitation for Vision-Based Agile Flight}, author={Jiaxu Xing and Angel Romero and Leonard Bauersfeld and Davide Scaramuzza}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=bt0PX0e4rE} }
Learning visuomotor policies for agile quadrotor flight presents significant difficulties, primarily due to inefficient policy exploration caused by high-dimensional visual inputs and the need for precise and low-latency control. To address these challenges, we propose a novel approach that combines the performance of Reinforcement Learning (RL) and the sample efficiency of Imitation Learning (IL) in the task of vision-based autonomous drone racing. While RL provides a framework for learning high-performance controllers through trial and error, it faces challenges with sample efficiency and computational demands due to the high dimensionality of visual inputs. Conversely, IL efficiently learns from visual expert demonstrations, but it remains limited by the expert's performance and state distribution. To overcome these limitations, our policy learning framework integrates the strengths of both approaches. Our framework contains three phases: training a teacher policy using RL with privileged state information, distilling it into a student policy via IL, and adaptive fine-tuning via RL. Testing in both simulated and real-world scenarios shows our approach can not only learn in scenarios where RL from scratch fails but also outperform existing IL methods in both robustness and performance, successfully navigating a quadrotor through a race course using only visual information.
Bootstrapping Reinforcement Learning with Imitation for Vision-Based Agile Flight
[ "Jiaxu Xing", "Angel Romero", "Leonard Bauersfeld", "Davide Scaramuzza" ]
Conference
Poster
2403.12203
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://bootstrap-rl-with-il.github.io/
null
https://openreview.net/forum?id=bk28WlkqZn
@inproceedings{ huang2024dvitac, title={3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing}, author={Binghao Huang and Yixuan Wang and Xinyi Yang and Yiyue Luo and Yunzhu Li}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=bk28WlkqZn} }
Tactile and visual perception are both crucial for humans to perform fine-grained interactions with their environment. Developing similar multi-modal sensing capabilities for robots can significantly enhance and expand their manipulation skills. This paper introduces **3D-ViTac**, a multi-modal sensing and learning system designed for dexterous bimanual manipulation. Our system features tactile sensors equipped with dense sensing units, each covering an area of $3\,mm^2$. These sensors are low-cost and flexible, providing detailed and extensive coverage of physical contacts, effectively complementing visual information. To integrate tactile and visual data, we fuse them into a unified 3D representation space that preserves their 3D structures and spatial relationships. The multi-modal representation can then be coupled with diffusion policies for imitation learning. Through concrete hardware experiments, we demonstrate that even low-cost robots can perform precise manipulations and significantly outperform vision-only policies, particularly in safe interactions with fragile items and executing long-horizon tasks involving in-hand manipulation. Our project page is available at https://binghao-huang.github.io/3D-ViTac/.
3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing
[ "Binghao Huang", "Yixuan Wang", "Xinyi Yang", "Yiyue Luo", "Yunzhu Li" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://binghao-huang.github.io/3D-ViTac/
null
https://openreview.net/forum?id=bftFwjSJxk
@inproceedings{ sinha2024rateinformed, title={Rate-Informed Discovery via Bayesian Adaptive Multifidelity Sampling}, author={Aman Sinha and Payam Nikdel and Supratik Paul and Shimon Whiteson}, booktitle={8th Annual Conference on Robot Learning}, year={2024}, url={https://openreview.net/forum?id=bftFwjSJxk} }
Ensuring the safety of autonomous vehicles (AVs) requires both accurate estimation of their performance and efficient discovery of potential failure cases. This paper introduces Bayesian adaptive multifidelity sampling (BAMS), which leverages the power of adaptive Bayesian sampling to achieve efficient discovery while simultaneously estimating the rate of adverse events. BAMS prioritizes exploration of regions with potentially low performance, leading to the identification of novel and critical scenarios that traditional methods might miss. Using real-world AV data, we demonstrate that BAMS discovers 10 times as many issues as Monte Carlo (MC) and importance sampling (IS) baselines, while at the same time generating rate estimates with variances 15 and 6 times narrower than the MC and IS baselines, respectively.
Rate-Informed Discovery via Bayesian Adaptive Multifidelity Sampling
[ "Aman Sinha", "Payam Nikdel", "Supratik Paul", "Shimon Whiteson" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0