Poster
A Competitive Algorithm for Agnostic Active Learning
Yihan Zhou · Eric Price
Great Hall & Hall B1+B2 (level 1) #1622
Abstract:
For some hypothesis classes and input distributions, \emph{active} agnostic learning needs exponentially fewer samples than passive learning; for other classes and distributions, it offers little to no improvement. The most popular algorithms for agnostic active learning express their performance in terms of a parameter called the disagreement coefficient, but it is known that these algorithms are inefficient on some inputs. We take a different approach to agnostic active learning, getting an algorithm that is \emph{competitive} with the optimal algorithm for any binary hypothesis class $H$ and distribution $\mathcal{D}_X$ over $X$. In particular, if any algorithm can use $m^*$ queries to get $O(\eta)$ error, then our algorithm uses $O(m^* \log |H|)$ queries to get $O(\eta)$ error. Our algorithm lies in the vein of the splitting-based approach of Dasgupta [2004], which gets a similar result for the realizable ($\eta = 0$) setting. We also show that it is NP-hard to do better than our algorithm's $O(\log |H|)$ overhead in general.
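To give a feel for the splitting-based approach the abstract attributes to Dasgupta [2004], here is a minimal sketch in the realizable ($\eta = 0$) case: greedily query the point whose label most evenly splits the current version space. This is an illustrative toy, not the paper's agnostic algorithm (which must tolerate noisy labels); all function names are hypothetical.

```python
def most_informative_query(hypotheses, pool):
    """Pick the pool point whose label would split the version space
    most evenly (larger balance = more even split)."""
    best_x, best_balance = None, -1
    for x in pool:
        pos = sum(1 for h in hypotheses if h(x) == 1)
        balance = min(pos, len(hypotheses) - pos)
        if balance > best_balance:
            best_x, best_balance = x, balance
    return best_x

def active_learn(hypotheses, pool, oracle):
    """Query labels until one hypothesis remains (realizable setting:
    the oracle's labels are consistent with some hypothesis in the class)."""
    version_space = list(hypotheses)
    pool = list(pool)
    queries = 0
    while len(version_space) > 1 and pool:
        x = most_informative_query(version_space, pool)
        pool.remove(x)
        y = oracle(x)  # one label query
        queries += 1
        # Keep only hypotheses consistent with the observed label.
        version_space = [h for h in version_space if h(x) == y]
    return version_space[0], queries
```

On a class of 8 threshold functions over {0, …, 7}, each even split halves the version space, so the target is identified in about $\log_2 |H| = 3$ queries, versus labeling the whole pool passively.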