

Workshop

5th Robot Learning Workshop: Trustworthy Robotics

Alex Bewley · Roberto Calandra · Anca Dragan · Igor Gilitschenski · Emily Hannigan · Masha Itkina · Hamidreza Kasaei · Jens Kober · Danica Kragic · Nathan Lambert · Julien Perez · Fabio Ramos · Ransalu Senanayake · Jonathan Tompson · Vincent Vanhoucke · Markus Wulfmeier

Virtual

Fri 9 Dec, 7 a.m. PST

Machine learning (ML) has been one of the premier drivers of recent advances in robotics research and has begun to impact several real-world robotic applications in unstructured and human-centric environments, such as transportation, healthcare, and manufacturing. At the same time, robotics has motivated numerous research problems in artificial intelligence, from efficient algorithms to robust generalization of decision models. However, considerable obstacles remain to fully leveraging state-of-the-art ML in real-world robotics applications. Before capable robots equipped with ML models can be deployed in real-world domains that interface with humans (e.g., autonomous vehicles and tele-operated or assistive robots), guarantees on the robustness of these models, and analysis of their social implications, are required.

To support the development of robots that can be safely deployed among humans, the field must treat trustworthiness as a central concern in building real-world robot learning systems. Unlike many other applications of ML, the combined complexity of physical robotic platforms and learning-based perception-action loops presents unique technical challenges. These include concrete problems such as very high performance requirements, explainability, predictability, verification, uncertainty quantification, and robust operation in dynamic, open-set domains. Because robots are developed for use in human environments, we must also consider social aspects of robotics such as privacy, transparency, fairness, and algorithmic bias alongside these technical challenges. Both kinds of challenges present opportunities for robotics and ML researchers alike. Advances in the sub-fields above promise to have an important impact on real-world robot deployment in human environments, building towards robots that use human feedback, indicate when their models are uncertain, and operate safely and autonomously in safety-critical settings such as healthcare and transportation.

This year’s robot learning workshop aims to discuss these unique research challenges through the lens of trustworthy robotics. We adopt a broad definition of trustworthiness that spans different application domains and highlights the responsibility of the robotics and ML research communities to develop “robots for social good.” By bringing together experts with diverse backgrounds from the ML and robotics communities, the workshop will offer new perspectives on trust in the context of ML-driven robot systems.

Scope of contributions:

Specific areas of interest include but are not limited to:

* epistemic uncertainty estimation in robotics;
* explainable robot learning;
* domain adaptation and distribution shift in robot learning;
* multi-modal trustworthy sensing and sensor fusion;
* safe deployment for applications such as agriculture, space, science, and healthcare;
* privacy-aware robotic perception;
* information system security in robot learning;
* learning from offline data and safe online learning;
* simulation-to-reality transfer for safe deployment;
* robustness and safety evaluation;
* certifiability and performance guarantees;
* robotics for social good;
* safe robot learning with humans in the loop;
* algorithmic bias in robot learning;
* ethical robotics.
