Oral in Workshop: 5th Robot Learning Workshop: Trustworthy Robotics

Imitating careful experts to avoid catastrophic events

Jack Hanslope · Laurence Aitchison


Abstract:

Reinforcement learning (RL) is increasingly being used to control robotic systems that interact closely with humans. This interaction raises the problem of safe RL: how to ensure that an RL-controlled robotic system never, for instance, injures a human. This problem is especially challenging in rich, realistic settings where it is not even possible to write down a clear reward function that incorporates such outcomes. In these circumstances, perhaps the only viable approach is based on inverse reinforcement learning (IRL), which infers rewards from human demonstrations. However, IRL is massively underdetermined, as many different reward functions can lead to the same optimal policy; we show that this makes it difficult to distinguish catastrophic outcomes (such as injuring a human) from merely undesirable ones. Our key insight is that humans display different behaviour when catastrophic outcomes are possible: they become much more careful. We incorporate carefulness signals into IRL and find that they do indeed allow IRL to disambiguate undesirable from catastrophic outcomes, which is critical to ensuring safety in future real-world human-robot interactions.
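To make the underdetermination point concrete, here is a minimal sketch (an illustration under assumed settings, not code from the paper): tabular value iteration on a hypothetical five-state chain MDP in which state 0 is penalised either mildly (-1) or catastrophically (-100). Both reward functions induce the same optimal policy, so demonstrations of optimal behaviour alone cannot tell them apart.

```python
import numpy as np

def value_iteration(R, gamma=0.9, n_iters=200):
    """Greedy policy from tabular value iteration on a chain MDP.

    States 0..len(R)-1; actions 0 = left, 1 = right (deterministic,
    clipped at the ends). R[s] is the reward for landing in state s.
    """
    n_states = len(R)

    def step(s, a):
        # Deterministic transition, clipped at the chain's ends.
        return max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)

    V = np.zeros(n_states)
    for _ in range(n_iters):
        # Q[s, a] = immediate reward of the next state + discounted value.
        Q = np.array([[R[step(s, a)] + gamma * V[step(s, a)]
                       for a in (0, 1)] for s in range(n_states)])
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# Hypothetical rewards: state 4 is the goal (+1) in both; state 0 is
# merely undesirable (-1) in one and catastrophic (-100) in the other.
R_mild = np.array([-1.0, 0.0, 0.0, 0.0, 1.0])
R_catastrophic = np.array([-100.0, 0.0, 0.0, 0.0, 1.0])

print(value_iteration(R_mild))          # [1 1 1 1 1]
print(value_iteration(R_catastrophic))  # [1 1 1 1 1]
```

Both calls print the identical policy (always move right, toward the goal): an observer watching only optimal behaviour cannot tell whether state 0 is merely undesirable or catastrophic, which is exactly the ambiguity the carefulness signal is meant to resolve.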
