We propose firefly neural architecture descent, a general framework for progressively and dynamically growing neural networks to jointly optimize the networks' parameters and architectures. The method works in a steepest-descent fashion: at each step, it finds the best network within a functional neighborhood of the current network, where the neighborhood includes a diverse set of candidate network structures. Using a Taylor approximation of the loss, the optimal structure in the neighborhood can be identified with a greedy selection procedure. We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate yet resource-efficient neural architectures that avoid catastrophic forgetting in continual learning. Empirically, firefly descent achieves promising results on both neural architecture search and continual learning. In particular, on a challenging continual image classification task, it learns networks that are smaller in size yet achieve higher average accuracy than those learned by state-of-the-art methods.
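The abstract sketches the core mechanic: candidate structures enter the network at near-zero scale, a first-order Taylor expansion of the loss scores how much activating each candidate would help, and the top-scoring candidates are kept. Below is a minimal illustrative sketch of that idea in PyTorch. It is not the authors' implementation; the GrowableMLP class, the greedy_grow helper, and the eps gating of candidates are assumptions made for this example.

    # Minimal sketch (NOT the paper's code): score candidate hidden neurons
    # by a first-order Taylor estimate of the loss change, then greedily
    # keep the top-k. GrowableMLP, greedy_grow, and eps are illustrative.
    import torch
    import torch.nn as nn

    class GrowableMLP(nn.Module):
        """Two-layer MLP whose hidden layer carries extra candidate neurons,
        each gated by a near-zero scale so the candidates start as no-ops."""
        def __init__(self, d_in, d_hidden, d_out, n_candidates, eps=1e-3):
            super().__init__()
            width = d_hidden + n_candidates
            self.fc1 = nn.Linear(d_in, width)
            self.fc2 = nn.Linear(width, d_out)
            # Per-neuron gates: 1 for existing neurons, eps for candidates.
            scale = torch.ones(width)
            scale[d_hidden:] = eps
            self.scale = nn.Parameter(scale)
            self.d_hidden = d_hidden

        def forward(self, x):
            return self.fc2(torch.relu(self.fc1(x)) * self.scale)

    def greedy_grow(model, loss_fn, x, y, k):
        """First-order Taylor score per candidate: |d loss / d scale| at
        scale ~ 0; a large magnitude means turning that neuron on would
        change the loss the most. Greedily keep the top-k candidates."""
        loss = loss_fn(model(x), y)
        (g,) = torch.autograd.grad(loss, model.scale)
        cand_scores = g[model.d_hidden:].abs()
        keep = torch.topk(cand_scores, k).indices + model.d_hidden
        return keep  # indices of candidate neurons selected to grow

    # Toy usage on random data.
    torch.manual_seed(0)
    model = GrowableMLP(d_in=4, d_hidden=8, d_out=3, n_candidates=16)
    x, y = torch.randn(32, 4), torch.randint(0, 3, (32,))
    keep = greedy_grow(model, nn.CrossEntropyLoss(), x, y, k=4)
    print("candidate neurons selected to grow:", keep.tolist())

Here |d loss / d scale| evaluated at scale near zero is the first-order estimate of how much activating a candidate would move the loss, which is what makes a simple greedy top-k selection over the neighborhood of candidates sensible.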
Author Information
Lemeng Wu (UT Austin)
Bo Liu (University of Texas at Austin)
Peter Stone (The University of Texas at Austin, Sony AI)
Qiang Liu (UT Austin)
More from the Same Authors
- 2020 Poster: Stein Self-Repulsive Dynamics: Benefits From Past Samples
  Mao Ye · Tongzheng Ren · Qiang Liu
- 2020 Poster: Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework
  Dinghuai Zhang · Mao Ye · Chengyue Gong · Zhanxing Zhu · Qiang Liu
- 2020 Poster: Certified Monotonic Neural Networks
  Xingchao Liu · Xing Han · Na Zhang · Qiang Liu
- 2020 Spotlight: Certified Monotonic Neural Networks
  Xingchao Liu · Xing Han · Na Zhang · Qiang Liu
- 2020 Poster: Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough
  Mao Ye · Lemeng Wu · Qiang Liu
- 2020 Poster: An Imitation from Observation Approach to Transfer Learning with Dynamics Mismatch
  Siddharth Desai · Ishan Durugkar · Haresh Karnan · Garrett Warnell · Josiah Hanna · Peter Stone
- 2020 Poster: Off-Policy Interval Estimation with Lipschitz Value Iteration
  Ziyang Tang · Yihao Feng · Na Zhang · Jian Peng · Qiang Liu
- 2019 Poster: A Kernel Loss for Solving the Bellman Equation
  Yihao Feng · Lihong Li · Qiang Liu
- 2019 Poster: Splitting Steepest Descent for Growing Neural Architectures
  Lemeng Wu · Dilin Wang · Qiang Liu
- 2019 Spotlight: Splitting Steepest Descent for Growing Neural Architectures
  Lemeng Wu · Dilin Wang · Qiang Liu
- 2019 Poster: Stein Variational Gradient Descent With Matrix-Valued Kernels
  Dilin Wang · Ziyang Tang · Chandrajit Bajaj · Qiang Liu
- 2019 Poster: Exploration via Hindsight Goal Generation
  Zhizhou Ren · Kefan Dong · Yuan Zhou · Qiang Liu · Jian Peng
- 2018 Poster: Variational Inference with Tail-adaptive f-Divergence
  Dilin Wang · Hao Liu · Qiang Liu
- 2018 Oral: Variational Inference with Tail-adaptive f-Divergence
  Dilin Wang · Hao Liu · Qiang Liu
- 2018 Poster: Breaking the Curse of Horizon: Infinite-Horizon Off-Policy Estimation
  Qiang Liu · Lihong Li · Ziyang Tang · Denny Zhou
- 2018 Spotlight: Breaking the Curse of Horizon: Infinite-Horizon Off-Policy Estimation
  Qiang Liu · Lihong Li · Ziyang Tang · Denny Zhou
- 2018 Poster: Stein Variational Gradient Descent as Moment Matching
  Qiang Liu · Dilin Wang
- 2015 Workshop: Learning, Inference and Control of Multi-Agent Systems
  Vicenç Gómez · Gerhard Neumann · Jonathan S Yedidia · Peter Stone