Poster
Direct 0-1 Loss Minimization and Margin Maximization with Boosting
Shaodan Zhai · Tian Xia · Ming Tan · Shaojun Wang

Sun Dec 08 02:00 PM -- 06:00 PM (PST) @ Harrah's Special Events Center, 2nd Floor

We propose DirectBoost, a boosting method that builds an ensemble classifier of weak classifiers with a greedy coordinate descent algorithm that directly minimizes empirical classification error over the labeled training examples. Once the training error is reduced to a local coordinatewise minimum, DirectBoost switches to a greedy coordinate ascent algorithm that continues to add weak classifiers so as to maximize any targeted, arbitrarily defined margin, until a local coordinatewise maximum of that margin is reached in a certain sense. Experimental results on a collection of machine-learning benchmark datasets show that DirectBoost consistently outperforms AdaBoost, LogitBoost, LPBoost with column generation, and BrownBoost, and that it is noise tolerant when it maximizes an n-th order bottom sample margin.
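The two-phase scheme described above can be illustrated in a few dozen lines. The following is a minimal sketch under stated assumptions, not the authors' implementation: weak classifiers are decision stumps over feature quantile thresholds, the coordinate (weight) search is a crude grid over alpha rather than an exact line search, and the n_bottom parameter is a hypothetical stand-in for the paper's n-th order bottom sample margin.

import numpy as np

def stump_predict(X, feature, threshold, sign):
    """Return the +/-1 predictions of a decision stump."""
    return sign * np.where(X[:, feature] > threshold, 1.0, -1.0)

def directboost_sketch(X, y, rounds=20, n_bottom=5):
    """Two-phase greedy coordinate search over decision stumps (a sketch).

    Phase 1 ("error") greedily adds (stump, alpha) pairs that reduce the
    training 0-1 loss; once no single coordinate move improves it (a local
    coordinatewise minimum), phase 2 ("margin") greedily adds pairs that
    raise the mean of the n_bottom smallest normalized sample margins, an
    approximation to the n-th order bottom sample margin objective.
    """
    scores = np.zeros(len(y))      # unnormalized ensemble scores f(x_i)
    total_alpha = 0.0              # sum of alphas, used to normalize margins
    ensemble = []                  # list of (alpha, (feature, threshold, sign))
    stumps = [(f, t, s)
              for f in range(X.shape[1])
              for t in np.quantile(X[:, f], np.linspace(0.1, 0.9, 9))
              for s in (-1.0, 1.0)]
    alphas = np.linspace(0.05, 1.0, 20)   # crude grid, not an exact search

    def value(sc, tot, phase):
        if phase == "error":
            return np.mean(np.sign(sc) != y)          # training 0-1 loss
        m = np.sort(y * sc / max(tot, 1e-12))[:n_bottom]
        return -m.mean()           # negated bottom-n margin: lower is better

    phase = "error"
    for _ in range(rounds):
        cur = value(scores, total_alpha, phase)
        best = None
        for stump in stumps:
            pred = stump_predict(X, *stump)
            for alpha in alphas:
                v = value(scores + alpha * pred, total_alpha + alpha, phase)
                if best is None or v < best[0]:
                    best = (v, alpha, stump, pred)
        v, alpha, stump, pred = best
        if v >= cur:               # local coordinatewise optimum reached
            if phase == "error":
                phase = "margin"   # switch to margin maximization
                continue
            break                  # margin phase converged: stop
        ensemble.append((alpha, stump))
        scores += alpha * pred
        total_alpha += alpha
    return ensemble

if __name__ == "__main__":
    # Tiny synthetic demo: labels from a noisy linear rule.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200))
    ens = directboost_sketch(X, y)
    print(f"{len(ens)} weak classifiers selected")

The exhaustive scan over all (stump, alpha) pairs per round is quadratic and only meant to make the coordinatewise semantics explicit; a practical implementation would search each coordinate's weight exactly rather than over a grid.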

Author Information

Shaodan Zhai (Wright State University)
Tian Xia (Wright State University)
Ming Tan (Wright State University)
Shaojun Wang (Wright State University)
