Spotlight
Fourier Sparse Leverage Scores and Approximate Kernel Learning
Tamas Erdelyi · Cameron Musco · Christopher Musco
Wed Dec 09 07:20 AM -- 07:30 AM (PST) @ Orals & Spotlights: Kernel Methods/Optimization
We prove new explicit upper bounds on the leverage scores of Fourier sparse functions under both the Gaussian and Laplace measures. In particular, we study s-sparse functions of the form $f(x) = \sum_{j=1}^s a_j e^{i \lambda_j x}$ for coefficients $a_j \in \mathbb{C}$ and frequencies $\lambda_j \in \mathbb{R}$. Bounding Fourier sparse leverage scores under various measures is of pure mathematical interest in approximation theory, and our work extends existing results for the uniform measure [Erd17, CP19a]. Practically, our bounds are motivated by two important applications in machine learning:
1. Kernel Approximation. They yield a new random Fourier features algorithm for approximating Gaussian and Cauchy (rational quadratic) kernel matrices. For low-dimensional data, our method uses a near optimal number of features, and its runtime is polynomial in the *statistical dimension* of the approximated kernel matrix. It is the first "oblivious sketching method" with this property for any kernel besides the polynomial kernel, resolving an open question of [AKM+17, AKK+20b].
2. Active Learning. They can be used as non-uniform sampling distributions for robust active learning when data follows a Gaussian or Laplace distribution. Using the framework of [AKM+19], we provide essentially optimal results for bandlimited and multiband interpolation, and Gaussian process regression. These results generalize existing work that only applies to uniformly distributed data.
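As context for the kernel approximation application, the sketch below shows the classical random Fourier features construction of Rahimi and Recht for the Gaussian kernel, which the paper improves on. Note this is an illustrative baseline, not the paper's algorithm: the paper's method samples frequencies non-uniformly, guided by its leverage-score bounds, to achieve a near optimal feature count; here frequencies are drawn from the kernel's spectral density in the standard way. All names (`gaussian_rff`, `n_features`, `gamma`) are our own for illustration.

```python
import numpy as np

def gaussian_rff(X, n_features=2000, gamma=1.0, seed=0):
    """Classical random Fourier features for the Gaussian kernel
    k(x, y) = exp(-gamma * ||x - y||^2).

    Frequencies are sampled from the kernel's spectral density, a
    Gaussian with variance 2*gamma, so that E[z(x)^T z(y)] = k(x, y).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random frequencies and phase shifts.
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    # Feature map: Z @ Z.T approximates the n x n kernel matrix.
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare against the exact Gaussian kernel on small 1-D data.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 1))
K_exact = np.exp(-((X - X.T) ** 2))  # gamma = 1
Z = gaussian_rff(X, n_features=20000, gamma=1.0)
max_err = np.abs(Z @ Z.T - K_exact).max()
```

With uniform (spectral-density) sampling, the number of features needed scales with the matrix dimension rather than the statistical dimension; leverage-score-based sampling is what lets the paper's method do better for low-dimensional data.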
Author Information
Tamas Erdelyi (Texas A&M University)
Cameron Musco (University of Massachusetts Amherst)
Christopher Musco (New York University)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Poster: Fourier Sparse Leverage Scores and Approximate Kernel Learning
  Wed. Dec 9th 05:00 -- 07:00 PM, Poster Session 3 #988
More from the Same Authors
- 2023 Poster: No-regret Algorithms for Fair Resource Allocation
  Abhishek Sinha · Ativ Joshi · Rajarshi Bhattacharjee · Cameron Musco · Mohammad Hajiesmaili
- 2023 Poster: Structured Semidefinite Programming for Recovering Structured Preconditioners
  Arun Jambulapati · Jerry Li · Christopher Musco · Kirankumar Shiragur · Aaron Sidford · Kevin Tian
- 2023 Poster: Exact Representation of Sparse Networks with Symmetric Nonnegative Embeddings
  Sudhanshu Chanpuriya · Ryan Rossi · Anup Rao · Tung Mai · Nedim Lipka · Zhao Song · Cameron Musco
- 2023 Poster: Finite Population Regression Adjustment and Non-asymptotic Guarantees for Treatment Effect Estimation
  Mehrdad Ghadiri · David Arbour · Tung Mai · Cameron Musco · Anup Rao
- 2022 Spotlight: Kernel Interpolation with Sparse Grids
  Mohit Yadav · Daniel Sheldon · Cameron Musco
- 2022 Poster: Kernel Interpolation with Sparse Grids
  Mohit Yadav · Daniel Sheldon · Cameron Musco
- 2022 Poster: Modeling Transitivity and Cyclicity in Directed Graphs via Binary Code Box Embeddings
  Dongxu Zhang · Michael Boratko · Cameron Musco · Andrew McCallum
- 2022 Poster: Simplified Graph Convolution with Heterophily
  Sudhanshu Chanpuriya · Cameron Musco
- 2022 Poster: Sample Constrained Treatment Effect Estimation
  Raghavendra Addanki · David Arbour · Tung Mai · Cameron Musco · Anup Rao
- 2021 Poster: On the Power of Edge Independent Graph Models
  Sudhanshu Chanpuriya · Cameron Musco · Konstantinos Sotiropoulos · Charalampos Tsourakakis
- 2021 Poster: Coresets for Classification – Simplified and Strengthened
  Tung Mai · Cameron Musco · Anup Rao
- 2020 Poster: The Statistical Cost of Robust Kernel Hyperparameter Tuning
  Raphael Meyer · Christopher Musco
- 2020 Poster: Node Embeddings and Exact Low-Rank Representations of Complex Networks
  Sudhanshu Chanpuriya · Cameron Musco · Konstantinos Sotiropoulos · Charalampos Tsourakakis
- 2019 Poster: Toward a Characterization of Loss Functions for Distribution Learning
  Nika Haghtalab · Cameron Musco · Bo Waggoner