Abstract
Reinforcement learning (RL) is increasingly used to control robotic systems that interact closely with humans. This interaction raises the problem of safe RL: how to ensure that an RL-controlled robotic system never, for instance, injures a human. The problem is especially challenging in rich, realistic settings, where it may not even be possible to write down a reward function that captures these outcomes. In such circumstances, perhaps the only viable approach is inverse reinforcement learning (IRL), which infers rewards from human demonstrations. However, IRL is massively underdetermined: many different rewards can lead to the same optimal policies, and we show that this makes it difficult to distinguish catastrophic outcomes (such as injuring a human) from merely undesirable ones. Our key insight is that humans do behave differently when catastrophic outcomes are possible: they become much more careful. We incorporate carefulness signals into IRL, and find that they do indeed allow IRL to disambiguate undesirable from catastrophic outcomes, which is critical to ensuring safety in future real-world human-robot interactions.
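To make the underdetermination point concrete, here is a minimal sketch, not the paper's implementation: a toy feature-matching IRL problem over a handful of states, abstracting away dynamics so the learner's policy is just a softmax over per-state rewards. All state labels, features, and visitation numbers are hypothetical. The "carefulness" column is the assumption of interest: it flags states where the demonstrator was observed to be careful.

```python
# Toy sketch (hypothetical, not the authors' code): feature-matching IRL with
# an extra "carefulness" feature. Without that feature, a merely undesirable
# state and a catastrophic state have identical features, so IRL cannot tell
# them apart; with it, the catastrophic state gets a much lower reward.

import numpy as np

n_states, n_features = 5, 3

# Feature map: col 0 = goal indicator, col 1 = "undesirable" indicator,
# col 2 = carefulness signal (e.g. the expert slowed down near this state).
phi = np.array([
    [1.0, 0.0, 0.0],   # state 0: goal
    [0.0, 1.0, 0.0],   # state 1: undesirable but recoverable
    [0.0, 1.0, 1.0],   # state 2: same "undesirable" feature, but the expert
                       #          is careful here -> actually catastrophic
    [0.0, 0.0, 0.0],   # state 3: neutral
    [0.0, 0.0, 0.0],   # state 4: neutral
])

def softmax_policy(w, temp=1.0):
    """State-visitation proxy: softmax over per-state linear rewards."""
    r = phi @ w
    p = np.exp((r - r.max()) / temp)
    return p / p.sum()

# Expert visitation: never enters state 2 (catastrophic), but does visit
# state 1 (merely undesirable) occasionally.
expert_visits = np.array([0.55, 0.15, 0.0, 0.15, 0.15])
expert_features = expert_visits @ phi

# Gradient ascent on the feature-matching objective: push the learner's
# expected features toward the expert's.
w = np.zeros(n_features)
for _ in range(2000):
    learner_features = softmax_policy(w) @ phi
    w += 0.1 * (expert_features - learner_features)

print("learned weights (goal, undesirable, careful):", w.round(2))
# The carefulness weight is driven strongly negative, so state 2 ends up with
# a far lower reward than state 1, despite sharing the "undesirable" feature.
```

Dropping the carefulness column leaves states 1 and 2 with identical features, so any feature-based IRL method must assign them the same reward; the carefulness signal is what breaks the tie between "undesirable" and "catastrophic".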
Author Information
Jack Hanslope (University of Bristol)
Laurence Aitchison (University of Bristol)
Related Events (a corresponding poster, oral, or spotlight)
- 2022: Imitating careful experts to avoid catastrophic events
More from the Same Authors
- 2022: Gaussian Process parameterized Covariance Kernels for Non-stationary Regression
  Vidhi Lalchand · Talay Cheema · Laurence Aitchison · Carl Edward Rasmussen
- 2022: Deep learning for downscaling tropical cyclone rainfall
  Emily Vosper · Lucy Harris · Andrew McRae · Laurence Aitchison · Peter Watson · Raul Santos-Rodriguez · Dann Mitchell
- 2022: Machine learning emulation of a local-scale UK climate model
  Henry Addison · Elizabeth Kendon · Suman Ravuri · Peter Watson · Laurence Aitchison