Avoiding Side Effects in Complex Environments
Reward function specification can be difficult. Rewarding the agent for making a widget may be easy, but penalizing the multitude of possible negative side effects is hard. In toy environments, Attainable Utility Preservation (AUP) avoided side effects by penalizing shifts in the ability to achieve randomly generated goals. We scale this approach to large, randomly generated environments based on Conway's Game of Life. By preserving optimal value for a single randomly generated reward function, AUP incurs modest overhead while leading the agent to complete the specified task and avoid many side effects. Videos and code are available at https://avoiding-side-effects.github.io/.
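The core idea above has a simple form: penalize the specified task reward by how much an action shifts the agent's attainable value for a single auxiliary, randomly generated reward function. The snippet below is a minimal Python sketch of that shaped reward, assuming a hypothetical auxiliary Q-function `q_aux`, a no-op action `noop`, and an illustrative penalty coefficient `lam`; these names and the normalization are assumptions for exposition, not the paper's exact implementation.

```python
def aup_reward(task_reward, q_aux, state, action, noop, lam=0.01):
    """Sketch of an AUP-shaped reward (illustrative, not the paper's exact code).

    The specified task reward is penalized by how much the chosen action
    changes the agent's attainable value for a single auxiliary, randomly
    generated reward function, relative to doing nothing.
    """
    # How much does this action shift attainable auxiliary value versus inaction?
    penalty = abs(q_aux(state, action) - q_aux(state, noop))
    # Normalize by the auxiliary value of inaction so the penalty is scale-free.
    scale = max(abs(q_aux(state, noop)), 1e-8)
    return task_reward - lam * penalty / scale
```

Because the penalty is measured against the no-op action, the agent pays no cost for inaction and is discouraged only from actions that change what it could achieve under the auxiliary reward, which is what steers it away from disruptive side effects while it completes the specified task.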
Author Information
Alex Turner (Oregon State University)
Neale Ratzlaff (Oregon State University)
## Hi, I'm Neale
I'm a 4th-year Ph.D. candidate at Oregon State University studying machine (deep) learning. I'm interested in _Bayesian deep learning_, _reinforcement learning_, and _generative models_. Specifically, I study how best to fit a posterior over model parameters in high-dimensional settings. I want to address the shortcomings of current Bayesian deep learning approaches and apply the improved methods to exploration in reinforcement learning and to anomaly detection in vision tasks. I'm open to both internships and full-time positions.
Prasad Tadepalli (Oregon State University)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Poster: Avoiding Side Effects in Complex Environments
  Wed. Dec 9th, 05:00 -- 07:00 AM · Poster Session 2 #598
More from the Same Authors
- 2021 Spotlight: Optimal Policies Tend To Seek Power
  Alex Turner · Logan Smith · Rohin Shah · Andrew Critch · Prasad Tadepalli
- 2021: Deep RePReL--Combining Planning and Deep RL for acting in relational domains
  Harsha Kokel · Arjun Manoharan · Sriraam Natarajan · Balaraman Ravindran · Prasad Tadepalli
- 2021 Poster: One Explanation is Not Enough: Structured Attention Graphs for Image Classification
  Vivswan Shitole · Fuxin Li · Minsuk Kahng · Prasad Tadepalli · Alan Fern
- 2021 Poster: Optimal Policies Tend To Seek Power
  Alex Turner · Logan Smith · Rohin Shah · Andrew Critch · Prasad Tadepalli
- 2013 Poster: Symbolic Opportunistic Policy Iteration for Factored-Action MDPs
  Aswin Raghavan · Roni Khardon · Alan Fern · Prasad Tadepalli
- 2012 Poster: A Bayesian Approach for Policy Learning from Trajectory Preference Queries
  Aaron Wilson · Alan Fern · Prasad Tadepalli
- 2011 Poster: Autonomous Learning of Action Models for Planning
  Neville Mehta · Prasad Tadepalli · Alan Fern
- 2011 Poster: Inverting Grice's Maxims to Learn Rules from Natural Language Extractions
  M. Shahed Sorower · Thomas Dietterich · Janardhan Rao Doppa · Walker Orr · Prasad Tadepalli · Xiaoli Fern
- 2010 Poster: A Computational Decision Theory for Interactive Assistants
  Alan Fern · Prasad Tadepalli