A family of learning algorithms based on additive models has recently attracted much attention for its flexibility and interpretability in high-dimensional data analysis. Among them, learning models with grouped variables have shown competitive performance for both prediction and variable selection. However, previous works mainly focus on least squares regression rather than classification. It is therefore desirable to design a new additive classification model with variable selection capability for the many real-world applications that involve high-dimensional data classification. To address this challenging problem, this paper investigates classification with group sparse additive models in reproducing kernel Hilbert spaces. A novel classification method, called the group sparse additive machine (GroupSAM), is proposed to explore and utilize the structure information among the input variables. A generalization error bound is derived by integrating a sample error analysis based on empirical covering numbers with a hypothesis error estimate obtained via the stepping-stone technique. The new bound shows that GroupSAM can achieve a satisfactory learning rate with polynomial decay. Experimental results on synthetic data and seven benchmark datasets consistently demonstrate the effectiveness of the new approach.
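The core idea of group sparsity, selecting or discarding whole groups of input variables at once, can be illustrated with a minimal sketch. The code below is an assumption-laden simplification, not the paper's method: it uses a linear function per group with a group-lasso penalty and logistic loss, trained by proximal gradient descent, whereas GroupSAM works with kernel-based additive components in a reproducing kernel Hilbert space. All function names and the toy data here are hypothetical.

```python
import numpy as np

def group_soft_threshold(w, lam):
    # Block soft-thresholding: proximal operator of the group-lasso norm.
    norm = np.linalg.norm(w)
    if norm <= lam:
        return np.zeros_like(w)
    return (1.0 - lam / norm) * w

def fit_group_sparse_classifier(X, y, groups, lam=0.2, lr=0.1, n_iter=500):
    """Proximal gradient descent on mean logistic loss + group-lasso penalty.

    X: (n, d) feature matrix; y: labels in {-1, +1};
    groups: list of index arrays, one per variable group.
    (A linear stand-in for an additive model; hypothetical helper.)
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        margin = y * (X @ w)
        # Gradient of the mean logistic loss log(1 + exp(-margin)).
        grad = -(X.T @ (y / (1.0 + np.exp(margin)))) / n
        w = w - lr * grad
        # Proximal step applied blockwise: weak groups are zeroed entirely.
        for g in groups:
            w[g] = group_soft_threshold(w[g], lr * lam)
    return w

# Toy example: only the first group of variables carries signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
w = fit_group_sparse_classifier(X, y, groups)
```

In this sketch the blockwise proximal step is what produces group-level variable selection: the coefficients of uninformative groups are driven to zero together, while the informative group survives.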
Author Information
Hong Chen (University of Pittsburgh)
Xiaoqian Wang (University of Pittsburgh)
Cheng Deng (School of Electronic Engineering, Xidian University, China)
Heng Huang (University of Pittsburgh)
More from the Same Authors
- 2022: FedGRec: Federated Graph Recommender System with Lazy Update of Latent Embeddings (Junyi Li · Heng Huang)
- 2022: Cooperation or Competition: Avoiding Player Domination for Multi-target Robustness by Adaptive Budgets (Yimu Wang · Dinghuai Zhang · Yihan Wu · Heng Huang · Hongyang Zhang)
- 2022 Poster: MetricFormer: A Unified Perspective of Correlation Exploring in Similarity Learning (Jiexi Yan · Erkun Yang · Cheng Deng · Heng Huang)
- 2022 Poster: Enhanced Bilevel Optimization via Bregman Distance (Feihu Huang · Junyi Li · Shangqian Gao · Heng Huang)
- 2021 Poster: Optimal Underdamped Langevin MCMC Method (Zhengmian Hu · Feihu Huang · Heng Huang)
- 2021 Poster: Fast Training Method for Stochastic Compositional Optimization Problems (Hongchang Gao · Heng Huang)
- 2021 Poster: SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients (Feihu Huang · Junyi Li · Heng Huang)
- 2021 Poster: Efficient Mirror Descent Ascent Methods for Nonsmooth Minimax Problems (Feihu Huang · Xidong Wu · Heng Huang)
- 2021 Poster: Generalized and Discriminative Few-Shot Object Detection via SVD-Dictionary Enhancement (Aming WU · Suqi Zhao · Cheng Deng · Wei Liu)
- 2021 Poster: A Faster Decentralized Algorithm for Nonconvex Minimax Problems (Wenhan Xian · Feihu Huang · Yanfu Zhang · Heng Huang)
- 2019 Poster: Curvilinear Distance Metric Learning (Shuo Chen · Lei Luo · Jian Yang · Chen Gong · Jun Li · Heng Huang)
- 2018 Poster: Bilevel Distance Metric Learning for Robust Image Recognition (Jie Xu · Lei Luo · Cheng Deng · Heng Huang)
- 2018 Poster: Training Neural Networks Using Features Replay (Zhouyuan Huo · Bin Gu · Heng Huang)
- 2018 Spotlight: Training Neural Networks Using Features Replay (Zhouyuan Huo · Bin Gu · Heng Huang)
- 2017 Poster: Regularized Modal Regression with Applications in Cognitive Impairment Prediction (Xiaoqian Wang · Hong Chen · Weidong Cai · Dinggang Shen · Heng Huang)
- 2017 Poster: Learning A Structured Optimal Bipartite Graph for Co-Clustering (Feiping Nie · Xiaoqian Wang · Cheng Deng · Heng Huang)