We propose a new method for approximating active learning acquisition strategies that are based on retraining with hypothetically-labeled candidate data points. Although this is usually infeasible with deep networks, we use the neural tangent kernel to approximate the result of retraining, and prove that this approximation works asymptotically even in an active learning setup, approximating "look-ahead" selection criteria with far less computation required. This also enables us to conduct sequential active learning, i.e., updating the model in a streaming regime, without needing to retrain the model with SGD after adding each new data point. Moreover, our querying strategy, which better anticipates how the model's predictions will change when new data points are added than the standard ("myopic") criteria do, beats other look-ahead strategies by large margins, and matches or outperforms state-of-the-art methods on several benchmark datasets in pool-based active learning.
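
To make the idea concrete, the sketch below illustrates the general NTK linearization that underlies such look-ahead criteria: the converged predictions after retraining on an augmented labeled set are approximated as f(x) + Theta(x, X) Theta(X, X)^{-1} (y - f(X)), so a candidate's effect on the model can be scored without any SGD retraining. This is an illustrative sketch only, not the paper's exact acquisition function; the function names, the ridge term, and the mean-absolute-change score are assumptions, and the empirical NTK Gram matrices (theta_*) must be computed separately (e.g., with an empirical-NTK library).

```python
import numpy as np

def ntk_lookahead_update(theta_pool_train, theta_train_train,
                         f_pool, f_train, y_train, ridge=1e-6):
    # Linearized "retraining": f_new(x) ~= f(x) + Theta(x, X) Theta(X, X)^{-1} (y - f(X)).
    # theta_* are empirical NTK Gram matrices (computed elsewhere); f_* are the
    # current network's outputs; y_train holds the (possibly hypothetical) labels.
    residual = y_train - f_train
    n = len(y_train)
    weights = np.linalg.solve(theta_train_train + ridge * np.eye(n), residual)
    return f_pool + theta_pool_train @ weights

def lookahead_score(hyp_labels, probs, theta_pool_aug, theta_aug_aug,
                    f_pool, f_aug, y_train):
    # Expected change in pool predictions if one candidate (appended as the last
    # row/column of the augmented NTK matrices) received a label; `probs` are the
    # current model's predicted probabilities for each hypothetical label.
    score = 0.0
    for y_hyp, p in zip(hyp_labels, probs):
        y_aug = np.append(y_train, y_hyp)
        f_new = ntk_lookahead_update(theta_pool_aug, theta_aug_aug,
                                     f_pool, f_aug, y_aug)
        score += p * np.mean(np.abs(f_new - f_pool))
    return score
```

A pool-based query strategy built on this sketch would evaluate the score for every candidate and request a label for the highest-scoring one; the paper's actual criteria and efficiency considerations go beyond this toy version.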
Author Information
Mohamad Amin Mohamadi (University of British Columbia)
Wonho Bae (University of British Columbia)
Danica J. Sutherland (University of British Columbia)
More from the Same Authors
- 2022 Poster: A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models
  Lijia Zhou · Frederic Koehler · Pragya Sur · Danica J. Sutherland · Nati Srebro
- 2022 Poster: Evaluating Graph Generative Models with Contrastively Learned Features
  Hamed Shirzad · Kaveh Hassani · Danica J. Sutherland
- 2021 Oral: Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds and Benign Overfitting
  Frederic Koehler · Lijia Zhou · Danica J. Sutherland · Nathan Srebro
- 2021 Poster: Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds and Benign Overfitting
  Frederic Koehler · Lijia Zhou · Danica J. Sutherland · Nathan Srebro
- 2021 Poster: Self-Supervised Learning with Kernel Dependence Maximization
  Yazhe Li · Roman Pogodin · Danica J. Sutherland · Arthur Gretton
- 2021 Poster: Meta Two-Sample Testing: Learning Kernels for Testing with Limited Data
  Feng Liu · Wenkai Xu · Jie Lu · Danica J. Sutherland
- 2020 Poster: On Uniform Convergence and Low-Norm Interpolation Learning
  Lijia Zhou · Danica J. Sutherland · Nati Srebro
- 2020 Spotlight: On Uniform Convergence and Low-Norm Interpolation Learning
  Lijia Zhou · Danica J. Sutherland · Nati Srebro
- 2019 Tutorial: Interpretable Comparison of Distributions and Models
  Wittawat Jitkrittum · Danica J. Sutherland · Arthur Gretton
- 2018 Poster: On gradient regularizers for MMD GANs
  Michael Arbel · Danica J. Sutherland · Mikołaj Bińkowski · Arthur Gretton