Reinforced Continual Learning
Ju Xu · Zhanxing Zhu

Wed Dec 05 07:45 AM -- 09:45 AM (PST) @ Room 210 #40

Most artificial intelligence models are limited in their ability to solve new tasks quickly without forgetting previously acquired knowledge. The recently emerging paradigm of continual learning aims to solve this issue, in which the model learns various tasks in a sequential fashion. In this work, a novel approach for continual learning is proposed, which searches for the best neural architecture for each incoming task via carefully designed reinforcement learning strategies. We name it Reinforced Continual Learning. Our method not only performs well at preventing catastrophic forgetting but also fits new tasks well. Experiments on sequential classification tasks for variants of the MNIST and CIFAR-100 datasets demonstrate that the proposed approach outperforms existing continual learning alternatives for deep networks.
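The abstract's central idea is an RL controller that decides how to expand the network for each incoming task, rewarded by how well the expanded network performs. The toy sketch below illustrates that loop in a minimal form; the action space (how many filters to add), the reward shape (accuracy proxy minus a parameter penalty), and the exact policy-gradient update are illustrative assumptions, not the paper's actual controller or reward design.

```python
import numpy as np

# Illustrative action space: how many new filters the controller may
# add to a layer when a new task arrives (not the paper's actual space).
ACTIONS = np.array([2, 4, 8, 16])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reward(n):
    # Stand-in reward: a saturating accuracy proxy minus a penalty on
    # added parameters. A real implementation would train the expanded
    # network on the new task and measure validation accuracy.
    return (1.0 - 1.0 / (1.0 + n)) - 0.01 * n

def train_controller(steps=2000, lr=1.0):
    # Softmax policy over expansion actions, trained by exact policy
    # gradient ascent on the expected reward (deterministic variant of
    # REINFORCE, feasible here because the action space is tiny).
    logits = np.zeros(len(ACTIONS))
    r = reward(ACTIONS.astype(float))
    for _ in range(steps):
        p = softmax(logits)
        j = p @ r                   # expected reward under current policy
        logits += lr * p * (r - j)  # gradient of expected reward w.r.t. logits
    return logits

logits = train_controller()
best = int(ACTIONS[np.argmax(logits)])
print(best)  # expansion size the controller converges to
```

Under this toy reward, the controller concentrates its policy on the action that best trades off accuracy against added parameters, mirroring (in miniature) how an RL-driven architecture search can pick a per-task expansion rather than growing the network by a fixed amount.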

Author Information

Ju Xu (Peking University)

Ju Xu is a graduate student majoring in Data Science at Peking University (PKU). He is currently a research intern at Microsoft Research Asia, focusing on AutoML. As an undergraduate at Renmin University of China, he focused on data mining. He will graduate from Peking University in June 2020.

Zhanxing Zhu (Peking University)

More from the Same Authors