Nonlinear kernels can be approximated using finite-dimensional feature maps for efficient risk minimization. Due to the inherent trade-off between the dimension of the (mapped) feature space and the approximation accuracy, the key problem is to identify promising (explicit) features leading to a satisfactory out-of-sample performance. In this work, we tackle this problem by efficiently choosing such features from multiple kernels in a greedy fashion. Our method sequentially selects these explicit features from a set of candidate features using a correlation metric. We establish an out-of-sample error bound capturing the trade-off between the error in terms of explicit features (approximation error) and the error due to spectral properties of the best model in the Hilbert space associated with the combined kernel (spectral error). The result verifies that when the (best) underlying data model is sparse enough, i.e., the spectral error is negligible, one can control the test error with a small number of explicit features, which can scale poly-logarithmically with the data size. Our empirical results show that, given a fixed number of explicit features, the method achieves a lower test error with a smaller time cost, compared to the state-of-the-art in data-dependent random features.
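The sketch below illustrates the kind of greedy selection the abstract describes: candidate explicit features are drawn from several kernels, and features are picked one at a time according to their correlation with the current residual. This is a minimal illustration under assumed choices (random Fourier features for Gaussian kernels, an OMP-style correlation criterion, least-squares refitting); the helper names, the `gammas` bandwidth parameters, and the `budget` argument are hypothetical and not the authors' implementation.

```python
import numpy as np

def random_fourier_features(X, n_features, gammas, rng):
    """Draw candidate random Fourier features from several Gaussian (RBF)
    kernels, one bandwidth per kernel. Hypothetical helper for illustration."""
    d = X.shape[1]
    blocks = []
    for gamma in gammas:
        W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
        b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        blocks.append(np.sqrt(2.0 / n_features) * np.cos(X @ W + b))
    # Pool of candidate explicit features from all kernels.
    return np.hstack(blocks)

def greedy_feature_selection(Z, y, budget):
    """Greedily pick `budget` columns of Z (candidate explicit features)
    whose correlation with the current residual is largest, refitting a
    least-squares model after each pick (OMP-style sketch)."""
    residual = y.copy()
    selected = []
    coef = None
    for _ in range(budget):
        scores = np.abs(Z.T @ residual)     # correlation metric
        scores[selected] = -np.inf          # never re-pick a feature
        selected.append(int(np.argmax(scores)))
        Zs = Z[:, selected]
        coef, *_ = np.linalg.lstsq(Zs, y, rcond=None)
        residual = y - Zs @ coef
    return selected, coef

# Example usage (synthetic data; all sizes are illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
Z = random_fourier_features(X, n_features=100, gammas=[0.5, 1.0, 2.0], rng=rng)
selected, coef = greedy_feature_selection(Z, y, budget=20)
```

The trade-off in the abstract shows up directly here: `budget` is the number of explicit features kept, and the bound suggests it can remain small (poly-logarithmic in the data size) when the best model in the combined kernel's Hilbert space is sufficiently sparse.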
Author Information
Shahin Shahrampour (Texas A&M University)
Vahid Tarokh (Duke University)
More from the Same Authors
- 2021: Benchmarking Data-driven Surrogate Simulators for Artificial Electromagnetic Materials
  Yang Deng · Juncheng Dong · Simiao Ren · Omar Khatib · Mohammadreza Soltani · Vahid Tarokh · Willie Padilla · Jordan Malof
- 2022 Poster: GAL: Gradient Assisted Learning for Decentralized Multi-Organization Collaborations
  Enmao Diao · Jie Ding · Vahid Tarokh
- 2022 Poster: Inference and Sampling for Archimax Copulas
  Yuting Ng · Ali Hasan · Vahid Tarokh
- 2022 Poster: SemiFL: Semi-Supervised Federated Learning for Unlabeled Clients with Alternate Training
  Enmao Diao · Jie Ding · Vahid Tarokh
- 2020 Poster: Statistical and Topological Properties of Sliced Probability Divergences
  Kimia Nadjahi · Alain Durmus · Lénaïc Chizat · Soheil Kolouri · Shahin Shahrampour · Umut Simsekli
- 2020 Spotlight: Statistical and Topological Properties of Sliced Probability Divergences
  Kimia Nadjahi · Alain Durmus · Lénaïc Chizat · Soheil Kolouri · Shahin Shahrampour · Umut Simsekli
- 2019 Poster: Gradient Information for Representation and Modeling
  Jie Ding · Robert Calderbank · Vahid Tarokh
- 2019 Poster: SpiderBoost and Momentum: Faster Variance Reduction Algorithms
  Zhe Wang · Kaiyi Ji · Yi Zhou · Yingbin Liang · Vahid Tarokh
- 2017 Poster: On Optimal Generalizability in Parametric Learning
  Ahmad Beirami · Meisam Razaviyayn · Shahin Shahrampour · Vahid Tarokh
- 2013 Poster: Online Learning of Dynamic Parameters in Social Networks
  Shahin Shahrampour · Sasha Rakhlin · Ali Jadbabaie