
Deep Active Learning by Leveraging Training Dynamics
Haonan Wang · Wei Huang · Ziwei Wu · Hanghang Tong · Andrew J Margenot · Jingrui He

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #219

Active learning theories and methods have been extensively studied in classical statistical learning settings. However, deep active learning, i.e., active learning with deep learning models, is usually based on empirical criteria without solid theoretical justification, casting doubt on such criteria when they fail to provide benefits in applications. In this paper, by exploring the connection between generalization performance and training dynamics, we propose a theory-driven deep active learning method (dynamicAL) which selects samples to maximize training dynamics. In particular, we prove that the convergence speed of training and the generalization performance are positively correlated under the ultra-wide condition, and show that maximizing the training dynamics leads to better generalization performance. Furthermore, to scale up to large deep neural networks and data sets, we introduce two relaxations for the subset selection problem and reduce the time complexity from polynomial to constant. Empirical results show that dynamicAL not only outperforms the other baselines consistently but also scales well on large deep learning models. We hope our work inspires more attempts at bridging the theoretical findings of deep networks and practical impacts in deep active learning applications.
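The abstract's selection principle, choosing unlabeled samples that would most change the model if trained on, can be illustrated with a minimal sketch. This is not the paper's dynamicAL algorithm (which is derived from neural tangent kernel analysis); it is a simplified stand-in that scores each unlabeled point by the norm of its pseudo-labeled loss gradient for a logistic model, then greedily takes the top-k. The function name `select_batch` and the logistic-regression setup are illustrative assumptions, not from the paper.

```python
import numpy as np

def select_batch(w, X_unlabeled, k):
    """Score each unlabeled sample by the squared norm of its
    pseudo-labeled loss gradient (a crude proxy for the paper's
    training-dynamics criterion) and return the top-k indices.
    Illustrative sketch only, assuming a linear logistic model."""
    # Predicted probabilities p = sigmoid(X w); pseudo-label by thresholding.
    p = 1.0 / (1.0 + np.exp(-X_unlabeled @ w))
    y_pseudo = (p >= 0.5).astype(float)
    # Per-sample gradient of the log-loss w.r.t. w is (p - y) * x.
    grads = (p - y_pseudo)[:, None] * X_unlabeled
    scores = np.sum(grads ** 2, axis=1)
    # Greedy top-k: a relaxation of exact subset selection, in the same
    # spirit as the relaxations the abstract mentions for scalability.
    return np.argsort(-scores)[:k]
```

Under this proxy, points near the decision boundary (where the pseudo-label gradient is largest) are selected first; the greedy top-k step mirrors, in a loose sense, how relaxing the combinatorial subset-selection problem makes the criterion cheap to evaluate.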

Author Information

Haonan Wang (National University of Singapore)
Wei Huang (RIKEN AIP)

Ziwei Wu (University of Illinois at Urbana-Champaign)
Hanghang Tong (University of Illinois at Urbana-Champaign)
Andrew J Margenot (University of Illinois)
Jingrui He (University of Illinois at Urbana-Champaign)
