
Group Sparse Additive Machine
Hong Chen · Xiaoqian Wang · Cheng Deng · Heng Huang

Tue Dec 05 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #9

A family of learning algorithms generated from additive models has attracted much attention recently for its flexibility and interpretability in high-dimensional data analysis. Among them, learning models with grouped variables have shown competitive performance for prediction and variable selection. However, previous works mainly focus on the least squares regression problem rather than the classification task. Thus, it is desirable to design a new additive classification model with variable selection capability for the many real-world applications that involve high-dimensional data classification. To address this challenging problem, in this paper we investigate classification with group sparse additive models in reproducing kernel Hilbert spaces. A novel classification method, called the \emph{group sparse additive machine} (GroupSAM), is proposed to explore and exploit the structural information among the input variables. A generalization error bound is derived by integrating sample error analysis based on empirical covering numbers with a hypothesis error estimate based on the stepping-stone technique. Our new bound shows that GroupSAM can achieve a satisfactory learning rate with polynomial decay. Experimental results on synthetic data and seven benchmark datasets consistently show the effectiveness of our new approach.
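To illustrate the general idea of group-sparse classification (not the authors' RKHS-based GroupSAM algorithm itself), the following is a minimal sketch of a linear classifier trained with a squared hinge loss and a group-lasso penalty via proximal gradient descent. The group structure, loss, and step-size choices here are hypothetical illustrations; GroupSAM itself operates on additive components in reproducing kernel Hilbert spaces.

```python
import numpy as np

def prox_group(w, groups, thresh):
    # Block soft-thresholding: proximal operator of the group-lasso
    # penalty lam * sum_g ||w_g||_2. Entire groups are zeroed out,
    # which is what yields group-level variable selection.
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        out[g] = 0.0 if norm <= thresh else w[g] * (1.0 - thresh / norm)
    return out

def fit_group_sparse(X, y, groups, lam=0.1, lr=0.05, iters=1000):
    # Proximal gradient descent on the squared hinge loss
    #   (1/n) * sum_i max(0, 1 - y_i <w, x_i>)^2  +  lam * sum_g ||w_g||_2
    # with labels y in {-1, +1}.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margin = y * (X @ w)
        slack = np.maximum(0.0, 1.0 - margin)
        grad = -(2.0 / n) * (X * (y * slack)[:, None]).sum(axis=0)
        w = prox_group(w - lr * grad, groups, lr * lam)
    return w
```

On synthetic data where only the first group of variables is informative, the penalty drives the irrelevant groups toward zero while keeping the informative group active, mirroring the variable-selection behavior the abstract describes.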

Author Information

Hong Chen (University of Pittsburgh)
Xiaoqian Wang (University of Pittsburgh)
Cheng Deng (School of Electronic Engineering, Xidian University, China)
Heng Huang (University of Pittsburgh)
