

Poster

Exemplar Guided Active Learning

Jason Hartford · Kevin Leyton-Brown · Hadas Raviv · Dan Padnos · Shahar Lev · Barak Lenz

Poster Session 4 #1211

Keywords: [ Algorithms ] [ Unsupervised Learning ] [ Multitask and Transfer Learning ] [ Representation Learning ] [ Semi-Supervised Learning ]


Abstract:

We consider the problem of wisely using a limited budget to label a small subset of a large unlabeled dataset. For example, consider the NLP problem of word sense disambiguation. For any word, we have a set of candidate labels from a knowledge base, but the label set is not necessarily representative of what occurs in the data: there may exist labels in the knowledge base that very rarely occur in the corpus because the sense is rare in modern English, and conversely there may exist true labels that do not appear in our knowledge base. Our aim is to obtain a classifier that performs as well as possible on examples of each “common class” that occurs with frequency above a given threshold in the unlabeled set, while annotating as few examples as possible from “rare classes” whose labels occur with frequency below this threshold. The challenge is that we are not told which labels are common and which are rare, and the true label distribution may exhibit extreme skew. We describe an active learning approach that (1) explicitly searches for rare classes by leveraging the contextual embedding spaces provided by modern language models, and (2) incorporates a stopping rule that ignores a class once we can certify, with high probability, that it occurs below our target threshold. We prove that our algorithm costs only logarithmically more than a hypothetical approach that knows all true label frequencies, and we show experimentally that incorporating automated search can significantly reduce the number of samples needed to reach target accuracy levels.
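
Since the paper itself is not reproduced on this page, the following is only a minimal sketch of the two ingredients the abstract names, not the authors' algorithm. It assumes cosine-similarity nearest-neighbour search over contextual embeddings as the exemplar-guided step, and a one-sided Hoeffding bound as a stand-in for the high-probability stopping rule; the names `gamma` (target frequency threshold), `delta` (failure probability), and both helper functions are hypothetical.

```python
import numpy as np


def below_threshold_whp(successes, trials, gamma, delta):
    """Decide whether a class can be retired from the search.

    Returns True when, with probability at least 1 - delta, the class's
    true frequency is below gamma, given `successes` observations of the
    class among `trials` labelled samples. Uses a one-sided Hoeffding
    bound and assumes the labelled samples are i.i.d. draws from the
    unlabeled pool, which an adaptive active learner violates; the
    paper's actual rule will differ in its details.
    """
    if trials == 0:
        return False
    p_hat = successes / trials
    # Hoeffding: P(true freq >= p_hat + eps) <= exp(-2 * trials * eps**2).
    eps = np.sqrt(np.log(1.0 / delta) / (2.0 * trials))
    return p_hat + eps < gamma


def exemplar_guided_query(embeddings, exemplar, labelled_mask):
    """Return the index of the unlabelled point whose contextual embedding
    is most cosine-similar to an exemplar of the class being searched for."""
    norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(exemplar)
    sims = embeddings @ exemplar / (norms + 1e-12)
    sims[labelled_mask] = -np.inf  # never re-query an already-labelled point
    return int(np.argmax(sims))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(1000, 16))   # stand-in for contextual embeddings
    labelled = np.zeros(1000, dtype=bool)
    labelled[0] = True                  # pretend point 0 is a known exemplar
    print("next query:", exemplar_guided_query(emb, emb[0], labelled))
    # Seen a candidate sense 3 times in 2000 labels: can we retire it at
    # a 5% frequency threshold with 95% confidence?
    print("retire class?", below_threshold_whp(3, 2000, gamma=0.05, delta=0.05))
```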
