

Poster

Correlation Priors for Reinforcement Learning

Bastian Alt · Adrian Šošić · Heinz Koeppl

East Exhibition Hall B, C #186

Keywords: [ Reinforcement Learning and Planning ] [ Probabilistic Methods -> Variational Inference ] [ Decision and Cont ]


Abstract:

Many decision-making problems naturally exhibit pronounced structures inherited from the characteristics of the underlying environment. In a Markov decision process model, for example, two distinct states can have inherently related semantics or encode similar physical state configurations, which often implies locally correlated transition dynamics among the states. To complete a task in such environments, the operating agent usually needs to execute a series of temporally and spatially correlated actions. While a variety of approaches exist to capture these correlations in continuous state-action domains, a principled solution for discrete environments has been missing. In this work, we present a Bayesian learning framework based on Pólya-Gamma augmentation that enables analogous reasoning in such cases. We demonstrate the framework on a number of common decision-making problems, such as imitation learning, subgoal extraction, system identification, and Bayesian reinforcement learning. By explicitly modeling the underlying correlation structures of these problems, the proposed approach yields superior predictive performance compared to correlation-agnostic models, even when trained on data sets that are an order of magnitude smaller in size.
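To illustrate the core idea of the abstract, here is a minimal sketch (not the authors' code) of Pólya-Gamma-augmented Gibbs sampling for a discrete environment with a correlated prior. The assumptions are ours: states live on a 1-D index, correlation is encoded by a squared-exponential kernel over that index, the per-state observation is a binomial count (e.g., how often a transition succeeds), and the PG variables are drawn with a simple truncated-sum sampler from the representation of Polson, Scott & Windle (2013).

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_pg(b, c, trunc=200):
    """Approximate draw from PG(b, c) via its truncated infinite-sum
    representation (sum of scaled Gamma(b, 1) variables)."""
    k = np.arange(1, trunc + 1)
    g = rng.gamma(b, 1.0, size=trunc)
    return np.sum(g / ((k - 0.5) ** 2 + c ** 2 / (4.0 * np.pi ** 2))) / (2.0 * np.pi ** 2)


# Hypothetical setup: S states on a line, nearby states behave alike.
S = 20
x = np.arange(S, dtype=float)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 3.0 ** 2) + 1e-6 * np.eye(S)

# Ground-truth logits drawn from the correlated prior; n trials per state.
true_psi = rng.multivariate_normal(np.zeros(S), K)
n = 10
y = rng.binomial(n, 1.0 / (1.0 + np.exp(-true_psi)))

# Gibbs sampling: alternate omega | psi (Polya-Gamma) and psi | omega, y (Gaussian).
K_inv = np.linalg.inv(K)
kappa = y - n / 2.0          # PG-augmented pseudo-observations
psi = np.zeros(S)
samples = []
for it in range(300):
    omega = np.array([sample_pg(n, abs(p)) for p in psi])
    cov = np.linalg.inv(K_inv + np.diag(omega))
    mean = cov @ kappa
    psi = rng.multivariate_normal(mean, cov)
    if it >= 100:            # discard burn-in
        samples.append(psi)

post_mean = np.mean(samples, axis=0)
```

Because the kernel couples neighboring states, observations at one state sharpen the posterior at its neighbors, which is the mechanism behind the sample-efficiency claim; a correlation-agnostic model (diagonal `K`) would estimate each state independently.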
