Poster
When Is Generalizable Reinforcement Learning Tractable?
Dhruv Malik · Yuanzhi Li · Pradeep Ravikumar

Wed Dec 08 04:30 PM -- 06:00 PM (PST)

Agents trained by reinforcement learning (RL) often fail to generalize beyond the environment they were trained in, even when presented with new scenarios that seem similar to the training environment. We study the query complexity required to train RL agents that generalize to multiple environments. Intuitively, tractable generalization is only possible when the environments are similar or close in some sense. To capture this, we introduce Weak Proximity, a natural structural condition that requires the environments to have highly similar transition and reward functions and share a policy providing optimal value. Despite such shared structure, we prove that tractable generalization is impossible in the worst case. This holds even when each individual environment can be efficiently solved to obtain an optimal linear policy, and when the agent possesses a generative model. Our lower bound applies to the more complex task of representation learning for efficient generalization to multiple environments. On the positive side, we introduce Strong Proximity, a strengthened condition which we prove is sufficient for efficient generalization.
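To make the notion of "close" environments concrete, the following is a minimal illustrative sketch (not the paper's formal definition) of measuring how far apart two tabular MDPs are in their transition and reward functions. The function name `proximity_gap`, the nested-list MDP representation, and the sup-norm choice of distance are all assumptions for illustration; the paper's actual Weak Proximity condition additionally requires a shared policy providing near-optimal value in every environment.

```python
def proximity_gap(P1, R1, P2, R2):
    """Sup-norm deviation in transitions and rewards between two tabular MDPs.

    Hypothetical helper: P[s][a][s2] is the probability of moving from state s
    to state s2 under action a; R[s][a] is the reward for action a in state s.
    Small gaps loosely correspond to the 'highly similar transition and reward
    functions' part of the proximity conditions discussed above.
    """
    # Largest pointwise difference between the two transition kernels.
    t_gap = max(
        abs(P1[s][a][s2] - P2[s][a][s2])
        for s in range(len(P1))
        for a in range(len(P1[s]))
        for s2 in range(len(P1[s][a]))
    )
    # Largest pointwise difference between the two reward functions.
    r_gap = max(
        abs(R1[s][a] - R2[s][a])
        for s in range(len(R1))
        for a in range(len(R1[s]))
    )
    return t_gap, r_gap
```

The paper's lower bound says that even when such gaps are small (and a shared optimal policy exists), generalization can still require exponentially many queries in the worst case, so a distance check like this alone does not certify tractability.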

Author Information

Dhruv Malik (Carnegie Mellon University)
Yuanzhi Li (CMU)
Pradeep Ravikumar (Carnegie Mellon University)
