
EAGER: Asking and Answering Questions for Automatic Reward Shaping in Language-guided RL
Thomas Carta · Pierre-Yves Oudeyer · Olivier Sigaud · Sylvain Lamprier

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #916

Reinforcement learning (RL) in long-horizon, sparse-reward tasks is notoriously difficult and requires many training steps. A standard solution to speed up learning is to leverage additional reward signals, shaping the reward to better guide the learning process. In the context of language-conditioned RL, the abstraction and generalisation properties of the language input provide opportunities for more efficient ways of shaping the reward. In this paper, we leverage this idea and propose an automated reward-shaping method in which the agent extracts auxiliary objectives from the general language goal. These auxiliary objectives rely on a question generation (QG) and a question answering (QA) system: they consist of questions leading the agent to try to reconstruct partial information about the global goal using its own trajectory. When it succeeds, the agent receives an intrinsic reward proportional to its confidence in its answer. This incentivizes the agent to generate trajectories that unambiguously explain various aspects of the general language goal. Our experimental study on several BabyAI environments shows that this approach, which requires no engineer intervention to design the auxiliary objectives, improves sample efficiency by effectively directing exploration.
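The QG/QA reward-shaping loop described in the abstract can be sketched in a few lines. This is a minimal, hedged illustration, not the paper's implementation: `simple_qg` masks each goal word to form a fill-in-the-blank question, and `simple_qa` is a toy frequency-based answerer standing in for a learned QA model; the function names and the confidence heuristic are assumptions for illustration only.

```python
def simple_qg(goal_words):
    """Toy QG: one fill-in-the-blank question per goal word (illustrative only)."""
    questions = []
    for i, answer in enumerate(goal_words):
        masked = goal_words[:i] + ["<mask>"] + goal_words[i + 1:]
        questions.append((" ".join(masked), answer))
    return questions


def simple_qa(question, trajectory_words):
    """Toy QA standing in for a learned model: guess the most frequent
    trajectory word not already present in the question context, and
    return (answer, confidence) with confidence as its relative frequency."""
    context = set(question.split()) - {"<mask>"}
    counts = {}
    for w in trajectory_words:
        if w not in context:
            counts[w] = counts.get(w, 0) + 1
    if not counts:
        return None, 0.0
    total = sum(counts.values())
    best = max(counts, key=counts.get)
    return best, counts[best] / total


def intrinsic_reward(goal, trajectory):
    """Sum the QA confidence over every question answered correctly,
    mirroring the 'reward proportional to confidence' idea in the abstract."""
    reward = 0.0
    for question, answer in simple_qg(goal.split()):
        pred, conf = simple_qa(question, trajectory.split())
        if pred == answer:
            reward += conf
    return reward
```

A trajectory whose descriptions make the goal words easy to recover earns a higher intrinsic reward, which is the incentive the abstract describes; in the paper this role is played by trained QG/QA models rather than word counts.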

Author Information

Thomas Carta (INRIA)

Ph.D. candidate in the Inria Flowers team, working on language-guided autonomous deep reinforcement learning agents. Supervisors: Pierre-Yves Oudeyer (Inria Bordeaux, Flowers team), Olivier Sigaud (ISIR, AMAC team), and Sylvain Lamprier (Université d'Angers).

Pierre-Yves Oudeyer (INRIA)
Olivier Sigaud (Sorbonne University)
Sylvain Lamprier (LIP6-UPMC)