Methods for learning from demonstration (LfD) have shown success in acquiring behavior policies by imitating a user. However, even for a single task, LfD may require numerous demonstrations. For versatile agents that must learn many tasks via demonstration, this process would substantially burden the user if each task were learned in isolation. To address this challenge, we introduce the novel problem of lifelong learning from demonstration, which allows the agent to continually build upon knowledge learned from previously demonstrated tasks to accelerate the learning of new tasks, reducing the number of demonstrations required. As one solution to this problem, we propose the first lifelong learning approach to inverse reinforcement learning, which learns consecutive tasks via demonstration, continually transferring knowledge between tasks to improve performance.
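The abstract's idea of continually transferring knowledge between demonstrated tasks can be sketched as maintaining a shared basis over task reward functions: each new task's reward weights (estimated by any single-task IRL method) are encoded against the basis, and the basis is then refined so future tasks start from better shared knowledge. The sketch below is a minimal illustration, not the paper's algorithm: the function name `lifelong_irl_update`, the ridge-regularized encoding, and the gradient-step basis refinement are all assumptions made to keep the example short and runnable.

```python
import numpy as np

def lifelong_irl_update(L, theta, lam=0.1, lr=0.1):
    """One hypothetical lifelong-learning step (illustrative sketch).

    L     -- shared basis over reward weights, shape (d, k)
    theta -- reward weight vector for the new task, shape (d,),
             assumed to come from a single-task IRL estimate
    lam   -- ridge penalty on the task-specific code
    lr    -- step size for the basis refinement
    """
    d, k = L.shape
    # Encode the task: s = argmin ||theta - L s||^2 + lam ||s||^2
    s = np.linalg.solve(L.T @ L + lam * np.eye(k), L.T @ theta)
    # Refine the shared basis with one gradient step on the
    # reconstruction error, so knowledge transfers to later tasks
    residual = theta - L @ s
    L = L + lr * np.outer(residual, s)
    return L, s
```

Tasks observed later reuse the refined basis `L`, so their codes `s` can be fit from fewer demonstrations than learning each reward vector from scratch; sparse coding in place of the ridge penalty is the more common choice in the lifelong-learning literature.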
Author Information
Jorge Mendez (University of Pennsylvania)
Shashank Shivkumar (University of Pennsylvania)
Eric Eaton (University of Pennsylvania)