Poster
Efficient Active Learning with Abstention
Yinglun Zhu · Robert Nowak
The goal of active learning is to achieve the same accuracy achievable by passive learning, while using far fewer labels. Exponential savings in terms of label complexity have been proved in very special cases, but fundamental lower bounds show that such improvements are impossible in general. This suggests a need to explore alternative goals for active learning. Learning with abstention is one such alternative. In this setting, the active learning algorithm may abstain from prediction and incur a cost marginally smaller than that of random guessing. We develop the first computationally efficient active learning algorithm with abstention. Our algorithm provably achieves $\mathsf{polylog}(\frac{1}{\varepsilon})$ label complexity, without any low noise conditions. This performance guarantee reduces the label complexity by an exponential factor, relative to passive learning and to active learning that is not allowed to abstain. Furthermore, our algorithm is guaranteed to abstain only on hard examples (where the true label distribution is close to a fair coin), a novel property we term \emph{proper abstention} that also leads to a host of other desirable characteristics (e.g., recovering minimax guarantees in the standard setting, and avoiding the undesirable ``noise-seeking'' behavior often seen in active learning). We also provide novel extensions of our algorithm that achieve \emph{constant} label complexity and deal with model misspecification.
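To make the abstention mechanism concrete, below is a minimal Chow-style sketch of prediction with abstention: the predictor abstains exactly on hard examples, where the estimated label distribution is close to a fair coin, and pays a cost marginally smaller than that of random guessing. The estimator eta_hat, the margin tau, and the cost parameter gamma are illustrative assumptions, not the paper's algorithm, which additionally governs how labels are actively queried.

    # A minimal sketch of Chow-style prediction with abstention (illustrative
    # assumptions: the estimates eta_hat, margin tau, and cost 1/2 - gamma are
    # not from the paper; they only demonstrate the abstention rule).
    import numpy as np

    def predict_with_abstention(eta_hat, tau):
        """Return predictions in {0, 1, None}, where None means "abstain".

        eta_hat : estimated P(y = 1 | x) for each example
        tau     : margin; abstain when |eta_hat - 1/2| <= tau
        """
        preds = []
        for p in eta_hat:
            if abs(p - 0.5) <= tau:      # hard example: label distribution near a fair coin
                preds.append(None)       # abstain instead of guessing
            else:
                preds.append(int(p > 0.5))
        return preds

    def chow_risk(eta, preds, gamma=0.01):
        """Expected cost: misclassification probability on predicted points,
        plus a cost of 1/2 - gamma (marginally below random guessing) per abstention."""
        costs = []
        for p, yhat in zip(eta, preds):
            if yhat is None:
                costs.append(0.5 - gamma)
            else:
                costs.append(p if yhat == 0 else 1.0 - p)  # P(true label != yhat)
        return float(np.mean(costs))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        eta = rng.uniform(0.0, 1.0, size=1000)                          # true P(y=1|x)
        eta_hat = np.clip(eta + rng.normal(0, 0.05, size=1000), 0, 1)   # noisy estimates
        preds = predict_with_abstention(eta_hat, tau=0.1)
        print("abstention rate:", np.mean([p is None for p in preds]))
        print("Chow risk      :", chow_risk(eta, preds))

Because abstention costs strictly less than 1/2, abstaining dominates guessing wherever the label is nearly a fair coin; this is the sense in which abstaining only on such points ("proper abstention") avoids noise-seeking behavior.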
Author Information
Yinglun Zhu (University of Wisconsin, Madison)
Robert Nowak (University of Wisconsin-Madison)
More from the Same Authors
- 2022 : A Better Way to Decay: Proximal Gradient Training Algorithms for Neural Nets »
  Liu Yang · Jifan Zhang · Joseph Shenouda · Dimitris Papailiopoulos · Kangwook Lee · Robert Nowak
- 2022 : Panel »
  Mayee Chen · Alexander Ratner · Robert Nowak · Cody Coleman · Ramya Korlakai Vinayak
- 2022 Poster: Active Learning with Neural Networks: Insights from Nonparametric Statistics »
  Yinglun Zhu · Robert Nowak
- 2022 Poster: One for All: Simultaneous Metric and Preference Learning over Multiple Users »
  Gregory Canal · Blake Mason · Ramya Korlakai Vinayak · Robert Nowak
- 2021 Poster: Pure Exploration in Kernel and Neural Bandits »
  Yinglun Zhu · Dongruo Zhou · Ruoxi Jiang · Quanquan Gu · Rebecca Willett · Robert Nowak
- 2020 : Dataset Curation via Active Learning »
  Robert Nowak
- 2020 Poster: On Regret with Multiple Best Arms »
  Yinglun Zhu · Robert Nowak
- 2020 Poster: Finding All $\epsilon$-Good Arms in Stochastic Bandits »
  Blake Mason · Lalit Jain · Ardhendu Tripathy · Robert Nowak
- 2019 Poster: Learning Nearest Neighbor Graphs from Noisy Distance Samples »
  Blake Mason · Ardhendu Tripathy · Robert Nowak
- 2019 Poster: MaxGap Bandit: Adaptive Algorithms for Approximate Ranking »
  Sumeet Katariya · Ardhendu Tripathy · Robert Nowak
- 2017 Poster: Scalable Generalized Linear Bandits: Online Computation and Hashing »
  Kwang-Sung Jun · Aniruddha Bhargava · Robert Nowak · Rebecca Willett
- 2017 Poster: A KL-LUCB algorithm for Large-Scale Crowdsourcing »
  Ervin Tanczos · Robert Nowak · Bob Mankoff
- 2017 Poster: Learning Low-Dimensional Metrics »
  Blake Mason · Lalit Jain · Robert Nowak