Poster
Optimal Policies Tend To Seek Power
Alex Turner · Logan Smith · Rohin Shah · Andrew Critch · Prasad Tadepalli

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

Some researchers speculate that intelligent reinforcement learning (RL) agents would be incentivized to seek resources and power in pursuit of the objectives we specify for them. Other researchers point out that RL agents need not have human-like power-seeking instincts. To clarify this discussion, we develop the first formal theory of the statistical tendencies of optimal policies. In the context of Markov decision processes, we prove that certain environmental symmetries are sufficient for optimal policies to tend to seek power over the environment. These symmetries exist in many environments in which the agent can be shut down or destroyed. We prove that in these environments, most reward functions make it optimal to seek power by keeping a range of options available and, when maximizing average reward, by navigating towards larger sets of potential terminal states.
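The following is a minimal sketch, not the paper's formalism, illustrating the flavor of the average-reward claim. The toy MDP, its state counts, and the sample size are illustrative assumptions: from a start state, a "stay alive" action reaches three absorbing terminal states while a "shut down" action reaches one. Sampling reward functions i.i.d. uniform over terminal states, the option-preserving action is optimal for roughly 3/4 of them, matching the intuition that more reward functions favor larger sets of reachable terminal states.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy deterministic MDP, for illustration only:
# "stay alive" reaches 3 absorbing terminal states; "shut down"
# reaches a single absorbing terminal state.
ALIVE_TERMINALS = 3
SHUTDOWN_TERMINALS = 1
N_REWARD_SAMPLES = 100_000

power_seeking_optimal = 0
for _ in range(N_REWARD_SAMPLES):
    # Sample a reward function i.i.d. uniform over terminal states.
    r = rng.random(ALIVE_TERMINALS + SHUTDOWN_TERMINALS)
    # The average reward of an absorbing terminal state equals its
    # one-step reward, so the optimal policy navigates to the best
    # terminal state it can still reach.
    best_alive = r[:ALIVE_TERMINALS].max()
    best_shutdown = r[ALIVE_TERMINALS:].max()
    if best_alive > best_shutdown:
        power_seeking_optimal += 1

# With 3 vs. 1 reachable terminal states and i.i.d. uniform rewards,
# the option-preserving action is optimal with probability 3/4.
print(power_seeking_optimal / N_REWARD_SAMPLES)  # ~0.75
```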

Author Information

Alex Turner (Oregon State University)
Logan Smith (MSU)
Rohin Shah (DeepMind)

Rohin is a Research Scientist on the technical AGI safety team at DeepMind. He completed his PhD at the Center for Human-Compatible AI at UC Berkeley, where he worked on building AI systems that can learn to assist a human user even when they don't initially know what the user wants. He is particularly interested in big-picture questions about artificial intelligence: What techniques will we use to build human-level AI systems? How will their deployment affect the world? What can we do to make this deployment go better? He writes up summaries and thoughts about recent work tackling these questions in the Alignment Newsletter.

Andrew Critch (UC Berkeley)
Prasad Tadepalli (Oregon State University)
