Active Learning Through a Covering Lens

Ofer Yehuda · Avihu Dekel · Guy Hacohen · Daphna Weinshall

Hall J #103

Keywords: [ ProbCover ] [ probability cover ] [ max cover ] [ deep active learning ] [ active learning theory ] [ low budget ]

[ Abstract ]
Wed 30 Nov 2 p.m. PST — 4 p.m. PST


Deep active learning aims to reduce the annotation cost of training deep models, which are notoriously data-hungry. Until recently, deep active learning methods were ineffectual in the low-budget regime, where only a small number of examples are annotated. The situation has been alleviated by recent advances in representation and self-supervised learning, which endow the geometry of the data representation with rich information about the points. Taking advantage of this progress, we study the problem of subset selection for annotation through a “covering” lens, proposing ProbCover – a new active learning algorithm for the low-budget regime, which seeks to maximize Probability Coverage. We then describe a dual view of the proposed formulation, from which one can derive strategies suitable for the high-budget regime of active learning, related to existing methods like Coreset. We conclude with extensive experiments evaluating ProbCover in the low-budget regime, showing that our principled active learning strategy improves the state of the art on several image recognition benchmarks. The method is especially beneficial in the semi-supervised setting, allowing state-of-the-art semi-supervised methods to match the performance of fully supervised methods while using far fewer labels. Code is available at
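The coverage idea described in the abstract can be illustrated with a minimal greedy max-coverage sketch: treat each candidate point as covering all points within a radius δ of it in the learned feature space, and repeatedly pick the point that covers the most still-uncovered points. This is an illustrative approximation, not the paper's implementation; `delta` and `prob_cover_select` are hypothetical names, and the paper's procedure for choosing δ is not reproduced here.

```python
import numpy as np

def prob_cover_select(features, budget, delta):
    """Greedy max-coverage selection over delta-balls in feature space.

    features: (n, d) array of self-supervised embeddings.
    budget:   number of points to select for annotation.
    delta:    coverage radius (hypothetical hyperparameter).
    """
    n = len(features)
    # Pairwise Euclidean distances; O(n^2) memory, fine for a sketch.
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    covers = dists <= delta          # covers[i, j]: point i covers point j
    covered = np.zeros(n, dtype=bool)
    selected = []
    for _ in range(budget):
        # How many still-uncovered points each candidate would add.
        gains = (covers & ~covered).sum(axis=1)
        best = int(np.argmax(gains))
        selected.append(best)
        covered |= covers[best]
    return selected
```

With a sensible δ (balls are label-pure but not vanishingly small), the greedy choice spreads the budget across dense regions first, which is the intuition behind targeting the low-budget regime.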
