
Enhancing Transfer of Reinforcement Learning Agents with Abstract Contextual Embeddings
Guy Azran · Mohamad Hosein Danesh · Stefano Albrecht · Sarah Keren

Deep reinforcement learning (DRL) algorithms have seen great success in performing a plethora of tasks, but often have trouble adapting to changes in the environment. We address this issue by using {\em reward machines} (RMs), a graph-based abstraction of the underlying task, to represent the current setting or {\em context}. Using a graph neural network (GNN), we embed the RMs into deep latent vector representations and provide them to the agent to enhance its ability to adapt to new contexts. To the best of our knowledge, this is the first work to embed contextual abstractions and let the agent decide how to use them. Our preliminary empirical evaluation demonstrates that our approach improves sample efficiency upon context transfer on a set of grid navigation tasks.
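The core idea above — treating the reward machine as a graph and embedding it into a fixed-size context vector via message passing — can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the mean-aggregation update, the one-hot node features, the mixing weights, and the example RM are all assumptions introduced here for clarity.

```python
# Hypothetical sketch: embed a reward machine (RM), given as a directed
# graph over task states, into a fixed-size vector via simple message
# passing, then hand that vector to the agent as context. The update
# rule, features, and toy RM below are illustrative, not from the paper.

def embed_reward_machine(num_nodes, edges, rounds=2):
    """Mean-aggregation message passing over the RM graph.

    Returns a graph-level embedding (mean of final node states).
    """
    # Initial node features: one-hot on node index (toy choice).
    h = [[1.0 if i == j else 0.0 for j in range(num_nodes)]
         for i in range(num_nodes)]
    # Messages flow along directed edges (src -> dst).
    neighbors = {i: [] for i in range(num_nodes)}
    for src, dst in edges:
        neighbors[dst].append(src)

    for _ in range(rounds):
        new_h = []
        for i in range(num_nodes):
            msgs = [h[j] for j in neighbors[i]] or [[0.0] * num_nodes]
            agg = [sum(col) / len(msgs) for col in zip(*msgs)]
            # Combine self state with aggregated incoming messages.
            new_h.append([0.5 * a + 0.5 * b for a, b in zip(h[i], agg)])
        h = new_h

    # Mean pooling over nodes yields the context embedding.
    return [sum(col) / num_nodes for col in zip(*h)]

# Toy RM: three task states with progress transitions u0 -> u1 -> u2.
context = embed_reward_machine(3, [(0, 1), (1, 2)])
print(len(context))  # embedding dimension equals num_nodes here -> 3
```

In the full approach, a learned GNN would replace the fixed averaging above, and the resulting embedding would be concatenated with the agent's observation so the policy can condition on the current context.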