
Poster in Workshop: Machine Learning for Autonomous Driving

Reinforcement Learning as an Alternative to Reachability Analysis for Falsification of AD Functions

Angel Molina Acosta · Alexander Schliep


Abstract:

Reachability analysis (RA) is a classical approach to studying the safety of autonomous systems, for example through falsification: the identification of initial system states which, under the right disturbances, can lead to unsafe or undesirable outcome states. The exact answers RA provides come at a cost: it requires analytical system models that are often unavailable for the simulation environments used for AD systems, its computational cost rises rapidly with dimensionality, and it handles nonlinearities such as saturation poorly. Here we present an alternative in the form of a reinforcement learning (RL) approach which empirically shows good agreement with RA falsification for an Adaptive Cruise Controller, can deal with saturation, and, in preliminary experiments, compares favorably to RA in computational effort. Due to the choice of reward function, the RL agent's estimated value function provides insight into how easily unsafe outcomes can be reached and allows for direct comparison with the RA falsification results.
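
To make the setup concrete, the sketch below shows one way falsification can be framed as an RL problem: the agent controls the disturbance (the lead vehicle's acceleration) and receives a reward when it drives a simple Adaptive Cruise Controller into an unsafe state, so that the learned value of an initial state estimates how easily that state can be falsified. This is an illustrative assumption, not the authors' implementation; the ACC gains, dynamics, thresholds, and reward values are hypothetical.

```python
# Minimal sketch of RL-based falsification for an ACC scenario.
# All parameters below (gains, time gap, unsafe-gap threshold) are
# illustrative assumptions, not values from the paper.

import numpy as np

class ACCFalsificationEnv:
    """Two-car longitudinal scenario: state = (gap, ego speed, lead speed)."""

    DT = 0.1          # simulation step [s]
    A_MAX = 3.0       # acceleration saturation [m/s^2]
    GAP_UNSAFE = 2.0  # gap below which the outcome counts as unsafe [m]

    def __init__(self, rng=None):
        self.rng = rng or np.random.default_rng()

    def reset(self):
        # Random initial condition; the value function over these initial
        # states plays the role of the RA falsification map.
        self.gap = self.rng.uniform(5.0, 50.0)
        self.v_ego = self.rng.uniform(10.0, 30.0)
        self.v_lead = self.rng.uniform(10.0, 30.0)
        self.t = 0.0
        return np.array([self.gap, self.v_ego, self.v_lead])

    def _acc_controller(self):
        # Illustrative proportional ACC tracking a constant time gap,
        # with saturation -- the nonlinearity RA struggles with.
        desired_gap = 1.5 * self.v_ego + 2.0
        a = 0.5 * (self.gap - desired_gap) + 0.8 * (self.v_lead - self.v_ego)
        return np.clip(a, -self.A_MAX, self.A_MAX)

    def step(self, a_lead):
        # The RL agent's action is the lead-vehicle acceleration (disturbance).
        a_lead = float(np.clip(a_lead, -self.A_MAX, self.A_MAX))
        a_ego = self._acc_controller()
        self.v_lead = max(0.0, self.v_lead + a_lead * self.DT)
        self.v_ego = max(0.0, self.v_ego + a_ego * self.DT)
        self.gap += (self.v_lead - self.v_ego) * self.DT
        self.t += self.DT

        unsafe = self.gap < self.GAP_UNSAFE
        done = unsafe or self.t > 20.0
        # Sparse terminal reward: with this choice, the estimated value of an
        # initial state reflects how easily a disturbance forces an unsafe outcome.
        reward = 1.0 if unsafe else 0.0
        return np.array([self.gap, self.v_ego, self.v_lead]), reward, done, {}
```

Any standard value-based RL algorithm can be trained against such an environment; comparing its learned value function over initial states with the RA falsification set is the kind of direct comparison the abstract refers to.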
