We prove that Θ(k d^2 / ε^2) samples are necessary and sufficient for learning a mixture of k Gaussians in R^d, up to error ε in total variation distance. This improves on both the best known upper and lower bounds for this problem. For mixtures of axis-aligned Gaussians, we show that O(k d / ε^2) samples suffice, matching a known lower bound.
The upper bound is based on a novel technique for distribution learning built on a notion of sample compression. Any class of distributions that admits such a sample compression scheme can be learned with few samples. Moreover, if a class of distributions has such a compression scheme, then so do the classes of products and mixtures of those distributions. The core of our main result is showing that the class of Gaussians in R^d admits an efficient sample compression scheme.
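As a rough illustration of the sample-compression idea, here is a hedged toy sketch in one dimension for a single Gaussian: each small subset of the sample "decodes" to a candidate distribution (its empirical mean and standard deviation), and the best candidate is selected afterwards. The paper's actual scheme handles mixtures in R^d and selects among candidates via a total-variation-style tournament; this sketch substitutes held-out log-likelihood for that selection step, and all function names here are illustrative, not from the paper.

```python
# Toy sketch (illustrative only, not the paper's algorithm): compress a
# sample to a small subset that determines a candidate Gaussian, then
# pick the best candidate using held-out log-likelihood as a stand-in
# for the paper's total-variation tournament.
import itertools
import math
import random

def gaussian_logpdf(x, mu, sigma):
    # Log-density of N(mu, sigma^2) at x.
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def decode(subset):
    # "Decode" a small subset of points into a candidate Gaussian:
    # its empirical mean and standard deviation.
    m = sum(subset) / len(subset)
    var = sum((x - m) ** 2 for x in subset) / len(subset)
    return m, (math.sqrt(var) if var > 0 else 1e-6)

def learn_by_compression(sample, holdout, subset_size=3):
    # Enumerate all size-`subset_size` subsets of `sample`, decode each
    # into a candidate Gaussian, and return the candidate with the best
    # held-out log-likelihood.
    best, best_ll = None, -float("inf")
    for subset in itertools.combinations(sample, subset_size):
        mu, sigma = decode(subset)
        ll = sum(gaussian_logpdf(x, mu, sigma) for x in holdout)
        if ll > best_ll:
            best, best_ll = (mu, sigma), ll
    return best

random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(40)]
mu_hat, sigma_hat = learn_by_compression(data[:20], data[20:])
print(mu_hat, sigma_hat)  # should land near the true parameters (5, 2)
```

The point of the sketch is that the learner's output is determined by a constant-size subset of the sample, so the number of distinct candidates is small, and a union bound over candidates yields a sample-complexity bound.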
Hassan Ashtiani (McMaster University)
Shai Ben-David (University of Waterloo)
Nick Harvey (University of British Columbia)
Christopher Liaw (University of British Columbia)
Abbas Mehrabian (McGill University)
Yaniv Plan (University of British Columbia)
Related Events (a corresponding poster, oral, or spotlight)
2018 Poster: Nearly tight sample complexity bounds for learning mixtures of Gaussians via sample compression schemes
Wed Dec 5th 03:45 -- 05:45 PM, Room 210