(Almost) No Label No Cry
Giorgio Patrini · Richard Nock · Tiberio Caetano · Paul Rivera

Wed Dec 10 04:00 PM -- 08:59 PM (PST) @ Level 2, room 210D

In Learning with Label Proportions (LLP), the objective is to learn a supervised classifier when, instead of labels, only label proportions for bags of observations are known. This setting has broad practical relevance, in particular for privacy-preserving data processing. We first show that the mean operator, a statistic which aggregates all labels, is minimally sufficient for the minimization of many proper scoring losses with linear (or kernelized) classifiers without using labels. We provide a fast learning algorithm that estimates the mean operator via a manifold regularizer with guaranteed approximation bounds. We then present an iterative learning algorithm that uses this estimate as initialization. We ground this algorithm in Rademacher-style generalization bounds that fit the LLP setting, introducing a generalization of Rademacher complexity and a Label Proportion Complexity measure. The latter algorithm optimizes tractable bounds for the corresponding bag-empirical risk. Experiments on fourteen domains, ranging up to 300K observations in size, show that our algorithms are scalable and tend to outperform the state of the art in LLP. Moreover, in many cases, our algorithms match, or come within a few percentage points of AUC of, the Oracle that learns knowing all labels. On the largest domains, half a dozen proportions can suffice, i.e. roughly 40K times fewer than the total number of labels.
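The sufficiency claim above can be illustrated with the logistic loss: since log(1 + e^{-yz}) = log(1 + e^{z}) - z/2 - yz/2, the labels enter the empirical risk of a linear classifier only through the mean operator mu = mean(y_i x_i). The sketch below is a minimal, hypothetical illustration on synthetic data, using a naive per-bag estimate of the mean operator (assuming labels are independent of features within each bag), not the paper's manifold-regularized estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: labels y in {-1, +1}, features shifted by the label.
n, d = 400, 2
y = rng.choice([-1, 1], size=n)
X = 1.5 * y[:, None] + rng.normal(size=(n, d))

# LLP setting: group observations into bags and keep only the per-bag
# label proportions, never the individual labels. Bags are made to
# correlate with labels so that the proportions are informative.
order = np.argsort(y + rng.normal(size=n))
bags = np.array_split(order, 8)
props = [np.mean(y[b] == 1) for b in bags]

# Naive mean-operator estimate (hypothetical, bag-homogeneity assumption):
# E[y x | bag j] ~ (2 * pi_j - 1) * (feature mean of bag j).
mu_hat = sum(len(b) * (2 * p - 1) * X[b].mean(axis=0)
             for b, p in zip(bags, props)) / n

# Label-free logistic risk: (1/n) sum_i [log(1 + e^{z_i}) - z_i / 2]
# minus (1/2) theta . mu, with z_i = theta . x_i. Its gradient needs
# mu_hat, not the individual labels.
theta = np.zeros(d)
for _ in range(1000):
    z = X @ theta
    sigma = 0.5 * (1.0 + np.tanh(0.5 * z))  # numerically stable sigmoid
    grad = (X * (sigma - 0.5)[:, None]).mean(axis=0) - 0.5 * mu_hat
    theta -= 0.3 * grad

accuracy = np.mean(np.sign(X @ theta) == y)
```

The naive estimator shrinks the magnitude of mu_hat when bags are mixed, but its direction, and hence the sign classifier, remains informative; the paper's Laplacian-based estimator and iterative algorithm address exactly this estimation step with guarantees.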

Author Information

Giorgio Patrini (Australian National University / NICTA)
Richard Nock (Data61, the Australian National University and the University of Sydney)
Tiberio Caetano (NICTA Canberra)
Paul Rivera