Towards Reliable & Trustworthy Embodied AI in Everyday Scenarios
May 19th, 9:00-17:40 ET
Georgia World Congress Center, Meeting Room 311, Atlanta, GA
This workshop explores the reliability and trustworthiness of embodied AI systems deployed in real-world, everyday scenarios. It brings together tools and methodologies from fields such as computer vision, machine learning, planning, control, and formal methods, all aimed at improving the robustness and dependability of AI systems. The workshop will also highlight current challenges, identifying tasks where robots still struggle to perform consistently, with the goal of advancing AI and robotic systems toward greater versatility and trustworthiness. Its scope covers a broad spectrum of embodied AI applications, from aerial and ground vehicles to legged robots and beyond. To date, the workshop features ten speakers whose expertise spans fields central to its theme, bringing diverse backgrounds and valuable perspectives to the discussion.
9:10–9:50 — Alfred Chen: How Much Do We Need to Worry about Embodied AI Security Problems in Practice? A Systems Security Perspective for Embodied AI Technologies
Abstract: TBD
9:50–10:30 — Sebastian Scherer: The Safety Case for Embodied Intelligence
10:30–11:10 — Hao Su: Autonomous Wearable Robots for Everyone and Everywhere via Learning-in-Simulation and High-Torque Motors
Abstract: Can we design wearable robots for everyone, everywhere? This talk presents our work, recently published in Nature, on a data-driven, physics-informed reinforcement learning framework that accelerates control policy development through learning in simulation, significantly reducing wearable robot development time. Our learning-in-simulation controllers bridge the sim-to-real gap, enabling autonomous control of portable exoskeletons for walking, running, and stair climbing—leading to significant energy savings for users. Additionally, this talk introduces a new design paradigm that leverages high-torque-density motors to electrify robotic actuation, enabling synergistic human-robot interaction. Our advancements in bionic limbs enhance mobility and manipulation for individuals with musculoskeletal and neurological impairments. We envision these innovations driving a paradigm shift in wearable robotics, transforming them from lab-bound rehabilitation devices into ubiquitous personal robots for everyone, everywhere, in applications such as workplace injury prevention, pediatric and elderly rehabilitation, home care, and sports performance enhancement.
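For readers unfamiliar with the learning-in-simulation idea, the sketch below illustrates domain randomization, one common ingredient of sim-to-real reinforcement learning pipelines: physical parameters are resampled each episode so the learned controller is robust to the sim-to-real gap. This is a generic, illustrative toy and does not reproduce the speaker's framework; all names in it (RandomizedSimEnv, the hand-tuned stand-in policy) are hypothetical.

```python
import numpy as np

# Generic domain-randomization sketch (illustrative only, not the
# speaker's code): randomize physical parameters each episode so a
# policy trained in simulation transfers better to real hardware.

class RandomizedSimEnv:
    """Toy 1-D actuator simulation with randomized dynamics parameters."""

    def __init__(self, rng):
        self.rng = rng

    def reset(self):
        # Resample physical parameters each episode (domain randomization).
        self.mass = self.rng.uniform(0.8, 1.2)       # kg, +/-20% of nominal
        self.friction = self.rng.uniform(0.05, 0.2)  # viscous friction coeff.
        self.pos, self.vel = 0.0, 0.0
        return np.array([self.pos, self.vel])

    def step(self, torque, dt=0.01):
        accel = (torque - self.friction * self.vel) / self.mass
        self.vel += accel * dt
        self.pos += self.vel * dt
        # Reward: track a reference position of 1.0 with low effort.
        reward = -((self.pos - 1.0) ** 2) - 1e-3 * torque ** 2
        return np.array([self.pos, self.vel]), reward

rng = np.random.default_rng(0)
env = RandomizedSimEnv(rng)
obs = env.reset()
for _ in range(100):
    torque = 5.0 * (1.0 - obs[0]) - 1.0 * obs[1]  # stand-in for a learned policy
    obs, reward = env.step(torque)
print(f"final position: {obs[0]:.3f}")
```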
Abstract: For robots to operate safely in everyday scenarios, avoiding collisions — i.e., ensuring geometric safety — is not enough. Robots must also understand their surroundings and reason about the implications of their actions. We call this broader notion semantic safety. For example, fulfilling the command “please bring me a glass of water” safely means not only retrieving the glass but also avoiding spilling the water. In this talk, I outline a progression:
1. Geometry-constrained control and learning, where constraints are predefined offline (e.g., from maps);
2. Online constraint learning, where geometry is inferred from perception during operation;
3. Semantic safety, where safe actions are derived from high-level semantic understanding using vision and language.
I will conclude by highlighting the open challenges in achieving semantic safety in dynamic, unpredictable environments — the next frontier in robot learning.
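As background for the first stage of this progression, the sketch below shows a minimal geometric safety filter: a nominal velocity command is projected onto the set of commands whose one-step prediction respects a keep-out half-space known offline. This is a generic illustration, not the speaker's method; the function and constraint names are hypothetical.

```python
import numpy as np

def safety_filter(pos, v_nominal, a, b, dt=0.1, margin=0.05):
    """Return the velocity closest to v_nominal whose one-step prediction
    stays in the safe half-space {x : a @ x >= b + margin}."""
    x_next = pos + dt * v_nominal
    if a @ x_next >= b + margin:
        return v_nominal                      # nominal command already safe
    # Project the predicted state onto the constraint boundary, then
    # recover the corresponding velocity command.
    correction = (b + margin - a @ x_next) / (a @ a)
    x_safe = x_next + correction * a
    return (x_safe - pos) / dt

# Constraint known offline (e.g., from a map): stay in the region x >= 0.
a, b = np.array([1.0, 0.0]), 0.0
pos = np.array([0.2, 0.0])
v_nominal = np.array([-3.0, 0.0])             # drives straight at the boundary
print(safety_filter(pos, v_nominal, a, b))    # braked command: [-1.5  0. ]
```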
Abstract: Large-scale robot learning models, known colloquially as Large Behavior Models (LBMs), Embodied Foundation Models (EFMs), or Vision-Language-Action (VLA) models, have increasingly become the norm in the robot learning literature since the success of ChatGPT. Nevertheless, many questions remain about their development in the context of real-world, embodied systems. For example, we need rigorous statistical methodologies for evaluating and comparing existing robot learning models. To deploy these models in human environments, they should be equipped with reliable failure detection systems, despite the challenge that failure types and conditions cannot be fully enumerated before deployment. Lastly, these models should have the capacity to explore and adapt to new environments, preferences, and tasks. In this talk, I will overview our work as part of the Trustworthy Learning under Uncertainty (TLU) effort at TRI along a few of these research directions, focusing on evaluation and failure detection.
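As a concrete example of the kind of statistical rigor this abstract calls for, the sketch below compares two policies' success rates via a bootstrap confidence interval on their difference. It is a generic illustration on simulated trial data, not TRI's methodology.

```python
import numpy as np

# Illustrative only: a standard way to compare two robot policies on
# binary success/failure trials, via a bootstrap confidence interval
# on the difference in success rates. Trial data here is simulated.

rng = np.random.default_rng(0)
policy_a = rng.binomial(1, 0.80, size=50)  # 50 trials, true success rate 0.80
policy_b = rng.binomial(1, 0.65, size=50)  # 50 trials, true success rate 0.65

def bootstrap_diff_ci(a, b, n_boot=10_000, alpha=0.05):
    """(1 - alpha) bootstrap CI for mean(a) - mean(b)."""
    diffs = [
        rng.choice(a, size=len(a)).mean() - rng.choice(b, size=len(b)).mean()
        for _ in range(n_boot)
    ]
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

lo, hi = bootstrap_diff_ci(policy_a, policy_b)
print(f"difference in success rate: [{lo:.2f}, {hi:.2f}]")
# If the interval excludes 0, the data support a real performance gap
# at the chosen confidence level (modulo the usual bootstrap caveats).
```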
12:30–14:00 — Lunch Break
Abstract: TBD
Abstract: TBD
Abstract: TBD
16:00–16:40 — Soon-Jo Chung: Monte Carlo Tree Search with Spectral Expansion for Planning with Dynamical Systems
Abstract: Autonomous robots require effective decision-making processes to adapt to complex new environments. Monte Carlo tree search (MCTS) is a planning algorithm that uses real-time computation to strategically explore future decisions, but it cannot be applied directly to generate physical motions for dynamical systems or robots. We have developed Spectral Expansion Tree Search, which enables real-time MCTS-based planning by computing an efficient discrete representation of the physical world. The framework was deployed on a variety of robots, which autonomously discovered optimal trajectories to avoid dynamic obstacles, traverse wind gusts, support a human driver in shared-control tasks, and capture and redirect an uncontrolled agent.
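For context, the sketch below is a vanilla MCTS skeleton over a toy, pre-discretized dynamical system. It is not Spectral Expansion Tree Search; it illustrates the baseline algorithm and why a good discrete representation of continuous dynamics is the crux: plain MCTS requires a finite action set up front.

```python
import math
import random

# Vanilla MCTS over a toy double-integrator with a hand-discretized
# action set (illustrative only, not the speaker's algorithm).

ACTIONS = [-1.0, 0.0, 1.0]   # pre-discretized control inputs
HORIZON = 10                 # planning depth

def step(state, action):
    """Toy double-integrator dynamics."""
    pos, vel = state
    vel += 0.1 * action
    pos += 0.1 * vel
    return (pos, vel)

def rollout_return(state, depth):
    """Random rollout to the horizon; reward penalizes distance from origin."""
    total = 0.0
    for _ in range(depth):
        state = step(state, random.choice(ACTIONS))
        total -= state[0] ** 2
    return total

class Node:
    def __init__(self, state):
        self.state = state
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0     # sum of sampled returns

def ucb(parent, action, c):
    child = parent.children[action]
    return (child.value / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def mcts(root_state, iterations=2000, c=1.4):
    root = Node(root_state)
    for _ in range(iterations):
        node, path, ret, depth = root, [root], 0.0, 0
        # Selection: descend by UCB while nodes are fully expanded.
        while depth < HORIZON and len(node.children) == len(ACTIONS):
            a = max(ACTIONS, key=lambda a: ucb(node, a, c))
            node = node.children[a]
            path.append(node)
            ret -= node.state[0] ** 2
            depth += 1
        # Expansion + rollout: try one untried action, then simulate.
        if depth < HORIZON:
            a = random.choice([a for a in ACTIONS if a not in node.children])
            child = Node(step(node.state, a))
            node.children[a] = child
            path.append(child)
            ret -= child.state[0] ** 2
            depth += 1
            ret += rollout_return(child.state, HORIZON - depth)
        # Backpropagation.
        for n in path:
            n.visits += 1
            n.value += ret
    return max(root.children, key=lambda a: root.children[a].visits)

print("best first action:", mcts((1.0, 0.0)))
```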
Abstract: TBD
University of Michigan
UC Irvine
New York University
Technical University of Munich
Toyota Research Institute
MIT
Stanford University
Caltech
Stanford University
Iowa State University
National University of Singapore
Technical University of Munich
Stanford University
Stanford University