

Oral
in
Workshop: Gaze Meets ML

Planning by Active Sensing

Kaushik Lakshminarasimhan · Seren Zhu · Dora Angelaki

Sat 16 Dec 9:15 a.m. PST — 9:30 a.m. PST
 
presentation: Gaze Meets ML
Sat 16 Dec 6:15 a.m. PST — 3 p.m. PST

Abstract:

Flexible behavior requires rapid planning, but planning requires a good internal model of the environment. Learning this model by trial-and-error is impractical when acting in complex environments. How do humans plan action sequences efficiently when there is uncertainty about model components? To address this, we asked human participants to navigate complex mazes in virtual reality. We found that the paths taken to gather rewards were close to optimal even though participants had no prior knowledge of these environments. Based on the sequential eye movement patterns observed when participants mentally compute a path before navigating, we develop an algorithm that is capable of rapidly planning under uncertainty by active sensing, i.e., visually sampling information about the structure of the environment. New eye movements are chosen in an iterative manner by following the gradient of a dynamic value map, which is updated based on the previous eye movement, until the planning process reaches convergence. In addition to bearing hallmarks of human navigational planning, the proposed algorithm is sample-efficient: the number of visual samples needed for planning scales linearly with the path length, regardless of the size of the state space.
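The iterative scheme described in the abstract can be illustrated with a toy sketch. This is not the authors' implementation; it is a minimal, assumed reading of the idea on a grid maze: the planner holds an optimistic belief about unseen cells, computes a value map over that belief by Bellman backups, follows the value gradient to propose a path, and "fixates" (samples) the first unverified cell on that path, updating the belief until the greedy path contains no unsampled cells. All function names and parameters (`value_map`, `plan_by_active_sensing`, the discount `gamma`) are hypothetical.

```python
import numpy as np

def value_map(walls, goal, gamma=0.95, n_iter=200):
    """Bellman backups over the believed maze: V[goal] = 1, value
    decays by gamma per step, wall cells stay at zero."""
    H, W = walls.shape
    V = np.zeros((H, W))
    V[goal] = 1.0
    for _ in range(n_iter):
        Vn = V.copy()
        for r in range(H):
            for c in range(W):
                if walls[r, c] or (r, c) == goal:
                    continue
                best = 0.0
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < H and 0 <= cc < W and not walls[rr, cc]:
                        best = max(best, gamma * V[rr, cc])
                Vn[r, c] = best
        V = Vn
    return V

def greedy_path(V, walls, start, goal, max_len=100):
    """Follow the value gradient: step to the neighbor with highest V."""
    path, pos = [start], start
    H, W = walls.shape
    while pos != goal and len(path) < max_len:
        best, best_v = None, 0.0
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = pos[0] + dr, pos[1] + dc
            if 0 <= rr < H and 0 <= cc < W and not walls[rr, cc] \
                    and V[rr, cc] > best_v:
                best, best_v = (rr, cc), V[rr, cc]
        if best is None:
            break
        pos = best
        path.append(pos)
    return path

def plan_by_active_sensing(true_walls, start, goal):
    """Iterate: plan greedily on the believed maze, fixate the first
    unsampled cell on the path, reveal its true status, replan.
    Converges when the greedy path is fully verified."""
    H, W = true_walls.shape
    belief = np.zeros((H, W), dtype=bool)   # optimistic: unknown = open
    sampled = np.zeros((H, W), dtype=bool)
    fixations = []
    while True:
        V = value_map(belief, goal)
        path = greedy_path(V, belief, start, goal)
        unsampled = [p for p in path if not sampled[p]]
        if not unsampled:                    # planning has converged
            return path, fixations
        fix = unsampled[0]                   # next "eye movement"
        sampled[fix] = True
        belief[fix] = true_walls[fix]        # reveal true structure
        fixations.append(fix)
```

In this sketch each fixation reveals one cell along the currently preferred path, so the number of samples grows with the path length rather than with the total number of maze cells, loosely mirroring the scaling property stated in the abstract.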
