

Poster

Boosting with Tempered Exponential Measures

Richard Nock · Ehsan Amid · Manfred Warmuth

Great Hall & Hall B1+B2 (level 1) #1107

Abstract: One of the most popular ML algorithms, AdaBoost, can be derived from the dual of a relative entropy minimization problem subject to the constraint that the positive weights on the examples sum to one. Essentially, harder examples receive higher probabilities. We generalize this setup to the recently introduced *tempered exponential measures* (TEMs), where normalization is enforced on a specific power of the measure and not the measure itself. TEMs are indexed by a parameter t and generalize exponential families (t=1). Our algorithm, t-AdaBoost, recovers AdaBoost as a special case (t=1). We show that t-AdaBoost retains AdaBoost's celebrated exponential convergence rate when t ∈ [0,1) while allowing a slight improvement of the rate's hidden constant compared to t=1. t-AdaBoost partially computes on a generalization of classical arithmetic over the reals and brings notable properties like guaranteed bounded leveraging coefficients for t ∈ [0,1). From the loss that t-AdaBoost minimizes (a generalization of the exponential loss), we show how to derive a new family of *tempered* losses for the induction of domain-partitioning classifiers like decision trees. Crucially, strict properness is ensured for all of these losses, while their boosting rates span the full known spectrum. Experiments with t-AdaBoost+trees show that significant leverage can be achieved by tuning t.
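
As a rough illustration of the tempered arithmetic the abstract refers to (not part of the poster itself), the Python sketch below implements the tempered exponential and logarithm and a toy weight update in that spirit. The update rule, the normalization exponent `t_star = 1/(2 - t)`, and all function names are illustrative assumptions rather than the paper's exact t-AdaBoost algorithm; at t = 1 the sketch collapses to the familiar AdaBoost exponential reweighting.

```python
import numpy as np

def exp_t(x, t):
    """Tempered exponential; reduces to np.exp(x) at t = 1."""
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

def log_t(x, t):
    """Tempered logarithm, inverse of exp_t on its range; np.log(x) at t = 1."""
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def tempered_weight_update(q, margins, alpha, t):
    """Toy tempered reweighting (illustrative, not the paper's exact update):
    increase weight on misclassified examples (negative margins), then
    renormalize so that a *power* of the measure sums to one, as the abstract
    describes for TEMs.  The exponent t_star = 1/(2 - t) is an assumption."""
    unnorm = exp_t(log_t(q, t) - alpha * margins, t)
    t_star = 1.0 / (2.0 - t)
    z = np.sum(unnorm ** (1.0 / t_star)) ** t_star  # normalizer on q ** (1/t_star)
    return unnorm / z

# Example: five examples, the third misclassified by the current weak learner.
q = np.full(5, 0.2)
margins = np.array([1.0, 1.0, -1.0, 1.0, 1.0])
q = tempered_weight_update(q, margins, alpha=0.5, t=0.8)  # t = 1.0 recovers AdaBoost
```

The only structural difference from plain AdaBoost in this sketch is the normalizer, which constrains a power of the weights to sum to one, matching the abstract's description of normalization over TEMs.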
