Nonnegative (linear) least squares problems are a fundamental class of problems, well studied in statistical learning, for which solvers have been implemented in many of the standard programming languages used within the machine learning community. Existing off-the-shelf solvers treat the nonnegativity constraint in these problems as an obstacle and, compared to unconstrained least squares, expend additional effort to handle it. However, in many typical applications the data itself is nonnegative as well, and we show that the nonnegativity in this case makes the problem easier. In particular, while the worst-case dimension-independent oracle complexity of unconstrained least squares problems necessarily scales with one of the data matrix constants (typically the spectral norm) and these problems are solved to additive error, we show that nonnegative least squares problems with nonnegative data are solvable to multiplicative error with complexity that is independent of any matrix constants. The algorithm we introduce is accelerated and based on a primal-dual perspective. We further show how to provably obtain linear convergence by coupling our method with adaptive restart, and we demonstrate its effectiveness on large-scale data via numerical experiments.
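For readers unfamiliar with the problem class, the sketch below sets up a small nonnegative least squares instance with nonnegative data and solves it with a standard off-the-shelf solver (scipy.optimize.nnls), of the kind the abstract refers to. This is only a minimal illustration of the baseline formulation min_{x >= 0} ||Ax - b||_2 under assumed random nonnegative data; it is not the accelerated primal-dual method proposed in the paper, and the dimensions and data are arbitrary choices for illustration.

```python
# Minimal sketch: a nonnegative least squares (NNLS) instance with
# nonnegative data, solved with an off-the-shelf solver for reference.
# NOTE: this is NOT the paper's accelerated primal-dual algorithm; it only
# illustrates the problem class  min_{x >= 0} ||Ax - b||_2.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Nonnegative data: both the matrix A and the target b have nonnegative entries.
m, n = 200, 50          # dimensions chosen arbitrarily for illustration
A = rng.random((m, n))  # entries in [0, 1), hence nonnegative
x_true = rng.random(n)  # a nonnegative ground-truth vector
b = A @ x_true          # nonnegative target, consistent with the model

# Off-the-shelf NNLS solver: minimizes ||Ax - b||_2 subject to x >= 0.
x_hat, residual_norm = nnls(A, b)

print("residual norm:", residual_norm)   # should be close to 0 for this consistent system
print("min entry of solution:", x_hat.min())  # nonnegativity is enforced by the solver
```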
Author Information
Jelena Diakonikolas (University of Wisconsin-Madison)
Chenghui Li (University of Wisconsin Madison)
Swati Padmanabhan (University of Washington, Seattle)
Chaobing Song (University of Wisconsin-Madison)
More from the Same Authors
- 2023 Poster: Computing Approximate $\ell_p$ Sensitivities
  Swati Padmanabhan · David Woodruff · Richard Zhang
- 2023 Poster: Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise
  Ilias Diakonikolas · Jelena Diakonikolas · Daniel Kane · Puqian Wang · Nikos Zarifis
- 2023 Poster: Block-Coordinate Methods and Restarting for Solving Extensive-Form Games
  Darshan Chakrabarti · Jelena Diakonikolas · Christian Kroer
- 2023 Poster: Robust Second-Order Nonconvex Optimization and Its Application to Low Rank Matrix Sensing
  Shuyao Li · Yu Cheng · Ilias Diakonikolas · Jelena Diakonikolas · Rong Ge · Stephen Wright
- 2023 Poster: Large sample spectral analysis of graph-based multi-manifold clustering
  Nicolas Garcia Trillos · Pengfei He · Chenghui Li
- 2022 Panel: Panel 1A-3: A gradient sampling… & Local Bayesian optimization…
  Swati Padmanabhan · Quan Nguyen
- 2022 Workshop: Order up! The Benefits of Higher-Order Optimization in Machine Learning
  Albert Berahas · Jelena Diakonikolas · Jarad Forristal · Brandon Reese · Martin Takac · Yan Xu
- 2022 Poster: Stochastic Halpern Iteration with Variance Reduction for Stochastic Monotone Inclusions
  Xufeng Cai · Chaobing Song · Cristóbal Guzmán · Jelena Diakonikolas
- 2022 Poster: A gradient sampling method with complexity guarantees for Lipschitz functions in high and low dimensions
  Damek Davis · Dmitriy Drusvyatskiy · Yin Tat Lee · Swati Padmanabhan · Guanghao Ye
- 2022 Poster: Decomposable Non-Smooth Convex Optimization with Nearly-Linear Gradient Oracle Complexity
  Sally Dong · Haotian Jiang · Yin Tat Lee · Swati Padmanabhan · Guanghao Ye
- 2022 Poster: Coordinate Linear Variance Reduction for Generalized Linear Programming
  Chaobing Song · Cheuk Yin Lin · Stephen Wright · Jelena Diakonikolas
- 2020 Poster: Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
  Chaobing Song · Yong Jiang · Yi Ma
- 2020 Poster: Optimistic Dual Extrapolation for Coherent Non-monotone Variational Inequalities
  Chaobing Song · Zhengyuan Zhou · Yichao Zhou · Yong Jiang · Yi Ma
- 2020 Poster: Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction
  Yaodong Yu · Kwan Ho Ryan Chan · Chong You · Chaobing Song · Yi Ma