Poster

Multi-View Reinforcement Learning

Minne Li · Lisheng Wu · Jun Wang · Haitham Bou Ammar

East Exhibition Hall B + C #208

Keywords: [ Reinforcement Learning ] [ Reinforcement Learning and Planning -> Model-Based RL ]


Abstract:

This paper is concerned with multi-view reinforcement learning (MVRL), which allows for decision making when agents share common dynamics but adhere to different observation models. We define the MVRL framework by extending partially observable Markov decision processes (POMDPs) to support more than one observation model, and we propose two solution methods: observation augmentation and cross-view policy transfer. We empirically evaluate our methods and demonstrate their effectiveness in a variety of environments. Specifically, we show reductions in sample complexity and computation time for acquiring policies that handle multi-view environments.
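As a rough illustration of the setting the abstract describes (and not the authors' implementation), the sketch below mocks it up in Python: a single shared latent dynamics process observed through several view-specific observation models, plus the observation-augmentation idea of concatenating all views so a standard single-view RL algorithm can consume the joint observation. All names (`MultiViewEnv`, `augmented_observation`) and the toy linear-Gaussian dynamics are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical sketch: a multi-view POMDP with shared latent dynamics
# observed through several view-specific emission models. Names and
# dynamics are illustrative only, not from the paper's code.

class MultiViewEnv:
    """Toy linear-Gaussian system with n_views observation models sharing dynamics."""

    def __init__(self, n_views=2, state_dim=4, obs_dim=3, seed=0):
        self.rng = np.random.default_rng(seed)
        self.A = 0.9 * np.eye(state_dim)              # shared latent dynamics
        self.views = [self.rng.normal(size=(obs_dim, state_dim))
                      for _ in range(n_views)]        # one emission matrix per view
        self.state = self.rng.normal(size=state_dim)

    def step(self, action, view):
        # Shared transition: identical regardless of which view observes it.
        self.state = self.A @ self.state + action
        # View-specific observation model with Gaussian noise.
        W = self.views[view]
        obs = W @ self.state + 0.1 * self.rng.normal(size=W.shape[0])
        reward = -float(np.linalg.norm(self.state))   # shared reward on latent state
        return obs, reward


def augmented_observation(env, action):
    """Observation augmentation: step the shared dynamics once, then
    concatenate the observations from every view into a single vector."""
    env.state = env.A @ env.state + action
    obs_per_view = [W @ env.state + 0.1 * env.rng.normal(size=W.shape[0])
                    for W in env.views]
    return np.concatenate(obs_per_view)


# Usage: the augmented observation stacks all views of one transition.
env = MultiViewEnv()
obs = augmented_observation(env, action=np.zeros(4))
print(obs.shape)  # (n_views * obs_dim,) = (6,)
```

In a real multi-view environment each step would typically yield one view's observation; the sketch emits all views at once only to make the augmentation explicit. Cross-view policy transfer, the paper's second method, would instead reuse a policy learned under one observation model to speed up learning under another.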
