I will discuss a route to general AI in which a general-purpose agent (which must have a complex, high-dimensional sensorimotor space) first autonomously learns abstract, task-specific representations that reflect the complexity of the particular task the agent is currently solving, rather than that of the agent itself, and then applies an appropriate generic solution method to the resulting abstract task. I will argue that such a representation can be learned via a combination of state and action abstractions. I will present my group's recent progress on learning abstract actions in the form of high-level options or skills. I will then consider the question of how to learn a compatible abstract state representation, taking a constructivist approach: the computation the representation is required to support, here, planning with a set of (learned or given) skills, is precisely defined, and its properties are then used to build a representation capable of supporting that computation by construction. The result is a formal link between state and action abstractions. I will present an example of a robot autonomously learning a (sound and complete) abstract representation directly from sensorimotor data, and then using it to plan.
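To make the two abstractions concrete, here is a minimal, hypothetical Python sketch (not code from the talk): an option captures the action abstraction as an initiation set, policy, and termination condition, while a symbolic operator captures the state abstraction as precondition and effect symbols, which is exactly the information a planner needs to decide whether a skill can be executed next and what will hold afterwards. All class names, symbols, and the toy domain below are illustrative assumptions.

```python
# Hypothetical sketch of planning over learned skills: an Option is the action
# abstraction; a SymbolicOperator is the planner-facing state abstraction of one skill.
from collections import deque
from dataclasses import dataclass
from typing import Callable, FrozenSet, List, Optional

State = object  # placeholder for the low-level sensorimotor state


@dataclass
class Option:
    """Action abstraction: a temporally extended skill."""
    name: str
    can_start: Callable[[State], bool]    # initiation set I(s)
    policy: Callable[[State], object]     # low-level controller pi(s)
    should_stop: Callable[[State], bool]  # termination condition beta(s)


@dataclass
class SymbolicOperator:
    """State abstraction: what planning with the corresponding option requires."""
    name: str
    precondition: FrozenSet[str]  # symbols that must hold for the option to be executable
    effect_add: FrozenSet[str]    # symbols made true by executing the option
    effect_del: FrozenSet[str]    # symbols made false by executing the option


def plan(start: FrozenSet[str], goal: FrozenSet[str],
         operators: List[SymbolicOperator]) -> Optional[List[str]]:
    """Breadth-first search over abstract states (sets of symbols)."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:                 # all goal symbols hold
            return path
        for op in operators:
            if op.precondition <= state:  # the skill is executable here
                nxt = frozenset((state - op.effect_del) | op.effect_add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [op.name]))
    return None


# Tiny usage example with made-up symbols for a door-opening robot.
ops = [
    SymbolicOperator("go_to_door", frozenset(), frozenset({"at_door"}), frozenset()),
    SymbolicOperator("open_door", frozenset({"at_door"}), frozenset({"door_open"}), frozenset()),
]
print(plan(frozenset(), frozenset({"door_open"}), ops))  # ['go_to_door', 'open_door']
```

The design point the sketch is meant to illustrate: the precondition symbols stand in for the option's initiation set and the effect symbols for where it terminates, so the symbols are chosen to support exactly the computation (planning with skills) by construction.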
Author Information
George Konidaris (Brown University)
More from the Same Authors
- 2021: Bayesian Exploration for Lifelong Reinforcement Learning (Haotian Fu · Shangqun Yu · Michael Littman · George Konidaris)
- 2022 Spotlight: Evaluation beyond Task Performance: Analyzing Concepts in AlphaZero in Hex (Charles Lovering · Jessica Forde · George Konidaris · Ellie Pavlick · Michael Littman)
- 2022 Poster: Effects of Data Geometry in Early Deep Learning (Saket Tiwari · George Konidaris)
- 2022 Poster: Evaluation beyond Task Performance: Analyzing Concepts in AlphaZero in Hex (Charles Lovering · Jessica Forde · George Konidaris · Ellie Pavlick · Michael Littman)
- 2022 Poster: Model-based Lifelong Reinforcement Learning with Bayesian Exploration (Haotian Fu · Shangqun Yu · Michael Littman · George Konidaris)
- 2021: George Konidaris Talk Q&A (George Konidaris)
- 2021 Poster: Learning Markov State Abstractions for Deep Reinforcement Learning (Cameron Allen · Neev Parikh · Omer Gottesman · George Konidaris)
- 2020: Panel Discussions (Grace Lindsay · George Konidaris · Shakir Mohamed · Kimberly Stachenfeld · Peter Dayan · Yael Niv · Doina Precup · Catherine Hartley · Ishita Dasgupta)
- 2020: Invited Talk #4 QnA - George Konidaris (George Konidaris · Raymond Chua · Feryal Behbahani)
- 2020: Invited Talk #4 George Konidaris - Signal to Symbol (via Skills) (George Konidaris)
- 2017 Poster: Active Exploration for Learning Symbolic Representations (Garrett Andersen · George Konidaris)
- 2017 Poster: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes (Taylor Killian · Samuel Daulton · Finale Doshi-Velez · George Konidaris)
- 2017 Oral: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes (Taylor Killian · Samuel Daulton · Finale Doshi-Velez · George Konidaris)
- 2015 Poster: Policy Evaluation Using the Ω-Return (Philip Thomas · Scott Niekum · Georgios Theocharous · George Konidaris)
- 2011 Poster: TDγ: Re-evaluating Complex Backups in Temporal Difference Learning (George Konidaris · Scott Niekum · Philip Thomas)
- 2010 Poster: Constructing Skill Trees for Reinforcement Learning Agents from Demonstration Trajectories (George Konidaris · Scott R Kuindersma · Andrew G Barto · Roderic A Grupen)
- 2009 Poster: Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining (George Konidaris · Andrew G Barto)
- 2009 Spotlight: Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining (George Konidaris · Andrew G Barto)