Poster
Boosting with Tempered Exponential Measures
Richard Nock · Ehsan Amid · Manfred Warmuth
Great Hall & Hall B1+B2 (level 1) #1107
Abstract:
One of the most popular ML algorithms, AdaBoost, can be derived from the dual of a relative entropy minimization problem subject to the fact that the positive weights on the examples sum to one. Essentially, harder examples receive higher probabilities. We generalize this setup to the recently introduced *tempered exponential measures* (TEMs), where normalization is enforced on a specific power of the measure and not the measure itself. TEMs are indexed by a parameter t and generalize exponential families (t = 1). Our algorithm, t-AdaBoost, recovers AdaBoost as a special case (t = 1). We show that t-AdaBoost retains AdaBoost's celebrated exponential convergence rate when t ∈ [0, 1) while allowing a slight improvement of the rate's hidden constant compared to t = 1. t-AdaBoost partially computes on a generalization of classical arithmetic over the reals and brings notable properties like guaranteed bounded leveraging coefficients for t ∈ [0, 1). From the loss that t-AdaBoost minimizes (a generalization of the exponential loss), we show how to derive a new family of *tempered* losses for the induction of domain-partitioning classifiers like decision trees. Crucially, strict properness is ensured for all t, while their boosting rates span the full known spectrum. Experiments using t-AdaBoost+trees display that significant leverage can be achieved by tuning t.
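To make the tempered reweighting idea concrete, here is a minimal Python sketch. It uses the standard tempered exponential exp_t(x) = [1 + (1 - t)x]_+^{1/(1-t)} and its inverse log_t from the tempered-exponential-family literature, and illustrates an AdaBoost-style multiplicative weight update carried out in that tempered arithmetic, followed by a TEM-style normalization of a power of the measure. The function names (`exp_t`, `log_t`, `tem_normalize`, `reweight`), the exact update form, and the choice of normalization power (2 - t) are illustrative assumptions for this sketch, not the paper's exact t-AdaBoost algorithm.

```python
import numpy as np

def exp_t(x, t):
    """Tempered exponential: exp_t(x) = [1 + (1 - t) x]_+^{1/(1 - t)}; exp_1 = exp."""
    if t == 1.0:
        return np.exp(x)
    return np.maximum(0.0, 1.0 + (1.0 - t) * x) ** (1.0 / (1.0 - t))

def log_t(x, t):
    """Tempered logarithm, the inverse of exp_t on its range; log_1 = log."""
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def tem_normalize(w, t):
    """Scale a nonnegative measure w so that its (2 - t)-th power sums to one
    (illustrative TEM-style normalization of a power of the measure)."""
    p = 2.0 - t
    return w / (np.sum(w ** p) ** (1.0 / p))

def reweight(q, margins, alpha, t):
    """One AdaBoost-style multiplicative update written in tempered arithmetic:
    down-weight examples the weak learner gets right (positive margin), up-weight
    the ones it gets wrong, then renormalize as a TEM. For t = 1 this reduces to
    the usual AdaBoost update q_i * exp(-alpha * margin_i) / Z."""
    q_new = exp_t(log_t(q, t) - alpha * margins, t)
    return tem_normalize(q_new, t)

# Toy usage: 5 examples, a weak learner that misclassifies example 3.
t = 0.8
q0 = tem_normalize(np.ones(5), t)
margins = np.array([1.0, 1.0, 1.0, -1.0, 1.0])  # y_i * h(x_i) in {-1, +1}
q1 = reweight(q0, margins, alpha=0.5, t=t)
print(q1)                      # the misclassified example gets the largest weight
print(np.sum(q1 ** (2 - t)))   # ~1.0: a power of the measure, not the measure, sums to one
```

With t = 1 the sketch collapses to the familiar AdaBoost reweighting; for t < 1 the clipping inside exp_t keeps the update bounded, which is the kind of behavior behind the bounded-leveraging-coefficient property mentioned in the abstract.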