

Poster in Workshop: Workshop on Machine Learning Safety

System III: Learning with Domain Knowledge for Safety Constraints

Fazl Barez · Hosein Hasanbeig · Alessandro Abate


Abstract:

Reinforcement learning agents naturally learn from extensive exploration, but exploration is costly and can be unsafe in safety-critical domains. This paper proposes a novel framework for incorporating domain knowledge to guide safe exploration and boost sample efficiency. Previous approaches impose constraints, such as regularisation parameters in neural networks, that rely on large sample sets and are often unsuitable for safety-critical domains where agents should almost always avoid unsafe actions. Our approach, called System III, is inspired by psychologists' notions of the brain's System I and System II: we represent domain expert knowledge of safety in the form of first-order logic and evaluate the satisfaction of these constraints via p-norms in the state vector space. In our formulation, constraints are analogous to hazards, objects, and regions of the state space that have to be avoided during exploration. We evaluated the effectiveness of the proposed method on OpenAI's Gym and Safety-Gym environments. In all tasks, including classic Control and Safety games, we show that our approach results in safer exploration and improved sample efficiency.
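The abstract describes checking safety constraints by measuring p-norm distances in the state vector space. Below is a minimal illustrative sketch of what such a check might look like; the hazard representation (a centre point plus a safety margin), the function names, and the thresholds are assumptions for illustration only, not the paper's actual formulation or implementation.

```python
import numpy as np

# Hypothetical sketch: a hazard is modelled as a centre point in state space,
# and a state satisfies the "avoid hazard" constraint when its p-norm distance
# from that centre exceeds a safety margin. Names/margins here are assumptions.

def constraint_satisfied(state, hazard_centre, margin, p=2):
    """Return True if `state` is at least `margin` away from the hazard (p-norm)."""
    distance = np.linalg.norm(np.asarray(state) - np.asarray(hazard_centre), ord=p)
    return distance >= margin

def satisfaction_degree(state, hazard_centre, margin, p=2):
    """Continuous signal in [0, 1]: 0 at the hazard centre, 1 at or beyond the margin."""
    distance = np.linalg.norm(np.asarray(state) - np.asarray(hazard_centre), ord=p)
    return float(min(distance / margin, 1.0))

# Example: a 2-D agent state near a hazard region.
state = [0.4, 0.1]
hazard = [0.5, 0.0]
print(constraint_satisfied(state, hazard, margin=0.3))  # False: too close to the hazard
print(satisfaction_degree(state, hazard, margin=0.3))   # ~0.47: partial satisfaction
```

A continuous satisfaction degree like the one above could be fed back as a penalty or guidance signal during exploration, which is one plausible way to discourage visits to hazardous regions without requiring large numbers of unsafe samples.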
