
Recovery of sparse linear classifiers from mixture of responses
Venkata Gandikota · Arya Mazumdar · Soumyabrata Pal

Wed Dec 09 09:00 PM -- 11:00 PM (PST) @ Poster Session 4 #1243

In the problem of learning a mixture of linear classifiers, the aim is to learn a collection of hyperplanes from a sequence of binary responses. Each response is the result of querying with a vector and indicates which side of a randomly chosen hyperplane from the collection the query vector lies on. This model is rich enough to handle heterogeneous data with categorical labels, yet has only been studied in a few special settings. We study the hitherto unexplored problem of obtaining query complexity upper bounds for recovering all the hyperplanes, especially when the hyperplanes are sparse. This setting is a natural generalization of the extreme quantization problem known as 1-bit compressed sensing. Suppose we have a set of l unknown k-sparse vectors. We can query the set with a vector a to obtain the sign of the inner product of a and a vector chosen uniformly at random from the l-set. How many queries are sufficient to identify all l unknown vectors? This question is significantly more challenging than both the basic 1-bit compressed sensing problem (i.e., the l = 1 case) and the analogous regression problem (where the value, rather than the sign, of the inner product is provided). We provide rigorous query complexity results, with efficient algorithms, for this problem.
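The query model above can be simulated in a few lines. The sketch below, with hypothetical names (`make_oracle`, `hidden`), shows the 1-bit mixture oracle only: each query with a vector a returns the sign of the inner product of a with a vector picked uniformly at random from the hidden l-set. It does not implement the paper's recovery algorithms.

```python
import random


def make_oracle(vectors, rng=None):
    """1-bit mixture oracle (illustrative): each call returns the sign of
    <a, v> for a vector v chosen uniformly at random from the hidden set."""
    rng = rng or random.Random(0)

    def query(a):
        v = rng.choice(vectors)                      # random hyperplane
        s = sum(ai * vi for ai, vi in zip(a, v))     # inner product <a, v>
        return 1 if s >= 0 else -1                   # only the sign is revealed
    return query


# Hidden set: l = 2 sparse vectors in dimension n = 5 (k = 2 nonzeros each).
hidden = [
    [3.0, 0.0, 0.0, -1.0, 0.0],
    [0.0, 2.0, 0.0, 0.0, 5.0],
]
oracle = make_oracle(hidden)

# Ten repeated queries with the same vector a; the learner never observes
# which hidden vector produced each response.
responses = [oracle([1.0, 1.0, 1.0, 1.0, 1.0]) for _ in range(10)]
```

With this particular query both hidden vectors have positive inner product with a, so every response is +1; informative queries must be chosen so that the sign pattern separates the hidden vectors, which is what drives the query complexity analysis.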

Author Information

Venkata Gandikota (Syracuse University)
Arya Mazumdar (University of California, San Diego)
Soumyabrata Pal (University of Massachusetts Amherst)

I am a fourth year grad student in the Department of Computer Science at the University of Massachusetts Amherst.
