Poster
Privately Learning Mixtures of Axis-Aligned Gaussians
Ishaq Aden-Ali · Hassan Ashtiani · Christopher Liaw
Keywords: [ Privacy ]
Abstract:
We consider the problem of learning multivariate Gaussians under the constraint of approximate differential privacy. We prove a polynomial upper bound on the number of samples sufficient to learn a mixture of axis-aligned Gaussians in \(\mathbb{R}^d\) to within a target total variation distance while satisfying \((\varepsilon, \delta)\)-differential privacy. This is the first result for privately learning mixtures of unbounded axis-aligned (or even unbounded univariate) Gaussians. If the covariance matrix of each Gaussian in the mixture is the identity, we show that an improved sample complexity suffices.
To prove our results, we design a new technique for privately learning mixture distributions. A class of distributions is said to be list-decodable if there is an algorithm that, given "heavily corrupted" samples from a distribution in the class, outputs a list of distributions, one of which approximates the true distribution. We show that if a class is privately list-decodable, then mixtures of distributions from that class can be learned privately. Finally, we show that axis-aligned Gaussian distributions are privately list-decodable, thereby proving that mixtures of such distributions are privately learnable.
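For reference, "approximate differential privacy" above refers to the standard \((\varepsilon, \delta)\) notion; a minimal statement of that standard definition (background only, not notation taken from this paper) is sketched below.

\[
% Standard (\varepsilon, \delta)-differential privacy (approximate DP) for a
% randomized algorithm M; background definition, not this paper's notation.
\Pr[M(X) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(X') \in S] + \delta
\quad \text{for all measurable sets } S,
\]
\[
\text{whenever the datasets } X \text{ and } X' \text{ differ in a single sample.}
\]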