Poster
Privately Learning Mixtures of Axis-Aligned Gaussians
Ishaq Aden-Ali · Hassan Ashtiani · Christopher Liaw
We consider the problem of learning multivariate Gaussians under the constraint of approximate differential privacy. We prove that $\widetilde{O}(k^2 d \log^{3/2}(1/\delta) / \alpha^2 \varepsilon)$ samples are sufficient to learn a mixture of $k$ axis-aligned Gaussians in $\mathbb{R}^d$ to within total variation distance $\alpha$ while satisfying $(\varepsilon, \delta)$-differential privacy. This is the first result for privately learning mixtures of unbounded axis-aligned (or even unbounded univariate) Gaussians. If the covariance matrix of each Gaussian is the identity, we show that $\widetilde{O}(kd/\alpha^2 + kd \log(1/\delta) / \alpha \varepsilon)$ samples are sufficient.

To prove our results, we design a new technique for privately learning mixture distributions. A class of distributions $\mathcal{F}$ is said to be list-decodable if there is an algorithm that, given "heavily corrupted" samples from $f \in \mathcal{F}$, outputs a list of distributions, one of which approximates $f$. We show that if $\mathcal{F}$ is privately list-decodable, then we can learn mixtures of distributions in $\mathcal{F}$. Finally, we show that axis-aligned Gaussian distributions are privately list-decodable, thereby proving that mixtures of such distributions are privately learnable.
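As a rough illustration only (ignoring constants and the polylogarithmic factors hidden in $\widetilde{O}$), the dominant terms of the two sample-complexity bounds above can be evaluated numerically. The function names and the decision to drop constants are ours, not the paper's; this is not the paper's algorithm, just the stated bounds as arithmetic.

```python
import math

def samples_general(k, d, alpha, eps, delta):
    # Dominant term of O~(k^2 d log^{3/2}(1/delta) / (alpha^2 eps)):
    # mixtures of k axis-aligned Gaussians in R^d, with arbitrary
    # (diagonal) covariances. Constants and polylog factors dropped.
    return k**2 * d * math.log(1 / delta)**1.5 / (alpha**2 * eps)

def samples_identity_cov(k, d, alpha, eps, delta):
    # Dominant terms of O~(k d / alpha^2 + k d log(1/delta) / (alpha eps)):
    # the easier case where every component has identity covariance.
    return k * d / alpha**2 + k * d * math.log(1 / delta) / (alpha * eps)

# For typical parameter settings the identity-covariance bound is
# substantially smaller, reflecting the easier estimation problem.
print(samples_general(k=5, d=100, alpha=0.1, eps=1.0, delta=1e-6))
print(samples_identity_cov(k=5, d=100, alpha=0.1, eps=1.0, delta=1e-6))
```

Note the different dependence on $k$ and $\alpha$: the general bound scales as $k^2/\alpha^2$ in the privacy term, while the identity-covariance bound is linear in $k$ and only $1/\alpha$ in that term.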
Author Information
Ishaq Aden-Ali (McMaster University)
Hassan Ashtiani (McMaster University)
Christopher Liaw (Google)
More from the Same Authors
- 2020: On the Sample Complexity of Privately Learning Unbounded High-Dimensional Gaussians » Ishaq Aden-Ali
- 2022 Poster: Benefits of Additive Noise in Composing Classes with Bounded Capacity » Alireza Fathollah Pour · Hassan Ashtiani
- 2023 Poster: On the Role of Noise in the Sample Complexity of Learning Recurrent Neural Networks: Exponential Gaps for Long Sequences » Alireza Fathollah Pour · Hassan Ashtiani
- 2022 Spotlight: Benefits of Additive Noise in Composing Classes with Bounded Capacity » Alireza Fathollah Pour · Hassan Ashtiani
- 2022 Spotlight: Lightning Talks 2A-1 » Caio Kalil Lauand · Ryan Strauss · Yasong Feng · lingyu gu · Alireza Fathollah Pour · Oren Mangoubi · Jianhao Ma · Binghui Li · Hassan Ashtiani · Yongqi Du · Salar Fattahi · Sean Meyn · Jikai Jin · Nisheeth Vishnoi · zengfeng Huang · Junier B Oliva · yuan zhang · Han Zhong · Tianyu Wang · John Hopcroft · Di Xie · Shiliang Pu · Liwei Wang · Robert Qiu · Zhenyu Liao
- 2020 Poster: Improved Algorithms for Online Submodular Maximization via First-order Regret Bounds » Nicholas Harvey · Christopher Liaw · Tasuku Soma
- 2019 Poster: Disentangled behavioural representations » Amir Dezfouli · Hassan Ashtiani · Omar Ghattas · Richard Nock · Peter Dayan · Cheng Soon Ong
- 2018 Poster: Nearly tight sample complexity bounds for learning mixtures of Gaussians via sample compression schemes » Hassan Ashtiani · Shai Ben-David · Nicholas Harvey · Christopher Liaw · Abbas Mehrabian · Yaniv Plan
- 2018 Oral: Nearly tight sample complexity bounds for learning mixtures of Gaussians via sample compression schemes » Hassan Ashtiani · Shai Ben-David · Nicholas Harvey · Christopher Liaw · Abbas Mehrabian · Yaniv Plan