Latent variable models are a fundamental modeling tool in machine learning applications, but they present significant computational and analytical challenges. The popular EM algorithm and its variants are much-used algorithmic tools, yet our rigorous understanding of their performance is highly incomplete. Recently, work in [1] has demonstrated that for an important class of problems, EM exhibits linear local convergence. In the high-dimensional setting, however, the M-step may not be well defined. We address precisely this setting through a unified treatment using regularization. While regularization for high-dimensional problems is by now well understood, the iterative EM algorithm requires a careful balancing of making progress towards the solution while identifying the right structure (e.g., sparsity or low rank). In particular, regularizing the M-step using state-of-the-art high-dimensional prescriptions (e.g., à la [19]) is not guaranteed to provide this balance. Our algorithm and analysis are linked in a way that reveals the balance between optimization and statistical errors. We specialize our general framework to sparse Gaussian mixture models, high-dimensional mixed regression, and regression with missing variables, obtaining statistical guarantees for each of these examples.
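To make the idea concrete, here is a minimal sketch of a regularized EM iteration in the spirit the abstract describes, instantiated for a symmetric two-component sparse Gaussian mixture y_i = z_i * theta_star + noise with z_i uniform on {-1, +1}. The function names (`regularized_em_gmm`, `soft_threshold`), the geometric decay factor `kappa`, and the sqrt(log d / n) penalty floor are illustrative assumptions for this sketch, not the paper's exact prescription; the intended point is only the balance it mentions, decaying the M-step regularization toward the statistical precision across iterations.

```python
import numpy as np

def soft_threshold(v, lam):
    """Entrywise soft-thresholding: the prox operator of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def regularized_em_gmm(Y, theta0, sigma=1.0, lam0=0.5, kappa=0.7, n_iters=50):
    """Sketch of regularized EM for the symmetric 2-component mixture
    y_i = z_i * theta_star + N(0, sigma^2 I), z_i uniform on {-1, +1}.

    E-step: posterior weight w_i = P(z_i = +1 | y_i; theta).
    M-step: the usual update is a reweighted sample mean; adding an l1
    penalty turns it into soft-thresholding. The penalty level decays
    geometrically, but is floored at (an illustrative proxy for) the
    statistical precision so noise coordinates stay suppressed.
    """
    n, d = Y.shape
    lam_min = sigma * np.sqrt(np.log(d) / n)  # illustrative floor
    theta, lam = theta0.copy(), lam0
    for _ in range(n_iters):
        # E-step: responsibilities under the current iterate.
        w = 1.0 / (1.0 + np.exp(-2.0 * (Y @ theta) / sigma**2))
        # Unregularized M-step for this model: a reweighted sample mean.
        m = ((2.0 * w - 1.0)[:, None] * Y).mean(axis=0)
        # Regularized M-step: l1 penalty => soft-threshold the update.
        theta = soft_threshold(m, lam)
        # Decay the regularization level toward the floor.
        lam = max(kappa * lam, lam_min)
    return theta

# Toy usage: a 5-sparse signal in d = 1000 dimensions from n = 500 samples.
rng = np.random.default_rng(0)
n, d = 500, 1000
theta_star = np.zeros(d)
theta_star[:5] = 2.0
z = rng.choice([-1.0, 1.0], size=n)
Y = z[:, None] * theta_star + rng.normal(size=(n, d))
theta0 = theta_star + 0.5 * rng.normal(size=d)  # EM needs a reasonable start
print(np.linalg.norm(regularized_em_gmm(Y, theta0) - theta_star))
```

The initialization near theta_star reflects that the guarantees in this line of work are local; a poor starting point can send EM to a spurious fixed point regardless of how the M-step is regularized.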
Author Information
Xinyang Yi (UT Austin)
Constantine Caramanis (UT Austin)
More from the Same Authors
- 2021 Spotlight: RL for Latent MDPs: Regret Guarantees and a Lower Bound
  Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor
- 2021: Reinforcement Learning in Reward-Mixing MDPs
  Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor
- 2022 Poster: Tractable Optimality in Episodic Latent MABs
  Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor
- 2022 Poster: Non-Stationary Bandits under Recharging Payoffs: Improved Planning with Sublinear Regret
  Orestis Papadigenopoulos · Constantine Caramanis · Sanjay Shakkottai
- 2021 Poster: RL for Latent MDPs: Regret Guarantees and a Lower Bound
  Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor
- 2021 Poster: Recurrent Submodular Welfare and Matroid Blocking Semi-Bandits
  Orestis Papadigenopoulos · Constantine Caramanis
- 2021 Poster: Reinforcement Learning in Reward-Mixing MDPs
  Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor
- 2020 Poster: Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking
  Isidoros Tziotis · Constantine Caramanis · Aryan Mokhtari
- 2020 Poster: Mix and Match: An Optimistic Tree-Search Approach for Learning Models from Mixture Distributions
  Matthew Faw · Rajat Sen · Karthikeyan Shanmugam · Constantine Caramanis · Sanjay Shakkottai
- 2020 Poster: Applications of Common Entropy for Causal Inference
  Murat Kocaoglu · Sanjay Shakkottai · Alex Dimakis · Constantine Caramanis · Sriram Vishwanath
- 2020 Poster: Robust compressed sensing using generative models
  Ajil Jalal · Liu Liu · Alex Dimakis · Constantine Caramanis
- 2019 Poster: Primal-Dual Block Generalized Frank-Wolfe
  Qi Lei · Jiacheng Zhuo · Constantine Caramanis · Inderjit Dhillon · Alex Dimakis
- 2016 Poster: Fast Algorithms for Robust PCA via Gradient Descent
  Xinyang Yi · Dohyung Park · Yudong Chen · Constantine Caramanis
- 2016 Poster: More Supervision, Less Computation: Statistical-Computational Tradeoffs in Weakly Supervised Learning
  Xinyang Yi · Zhaoran Wang · Zhuoran Yang · Constantine Caramanis · Han Liu
- 2015 Poster: Optimal Linear Estimation under Unknown Nonlinear Transform
  Xinyang Yi · Zhaoran Wang · Constantine Caramanis · Han Liu
- 2014 Poster: Greedy Subspace Clustering
  Dohyung Park · Constantine Caramanis · Sujay Sanghavi
- 2013 Poster: Memory Limited, Streaming PCA
  Ioannis Mitliagkas · Constantine Caramanis · Prateek Jain