Efficient Non-greedy Optimization of Decision Trees
Mohammad Norouzi · Maxwell Collins · Matthew A Johnson · David Fleet · Pushmeet Kohli

Wed Dec 09 04:00 PM -- 08:59 PM (PST) @ 210 C #42

Decision trees and randomized forests are widely used in computer vision and machine learning. Standard algorithms for decision tree induction optimize the split functions one node at a time according to some splitting criteria. This greedy procedure often leads to suboptimal trees. In this paper, we present an algorithm for optimizing the split functions at all levels of the tree jointly with the leaf parameters, based on a global objective. We show that the problem of finding optimal linear-combination (oblique) splits for decision trees is related to structured prediction with latent variables, and we formulate a convex-concave upper bound on the tree's empirical loss. Computing the gradient of the proposed surrogate objective with respect to each training exemplar is O(d^2), where d is the tree depth, and thus training deep trees is feasible. The use of stochastic gradient descent for optimization enables effective training with large datasets. Experiments on several classification benchmarks demonstrate that the resulting non-greedy decision trees outperform greedy decision tree baselines.
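To make the notion of a linear-combination (oblique) split concrete, here is a minimal sketch, not the authors' implementation, of routing an input through a full binary tree where each internal node tests the sign of a linear function w·x + b. The breadth-first node layout and the function name `route` are illustrative assumptions; the point is that routing touches only d nodes for a depth-d tree, which is what makes per-exemplar gradient computation over the split parameters tractable.

```python
import numpy as np

def route(x, W, b, depth):
    """Route input x through a full binary tree of the given depth.

    W, b hold one linear (oblique) split per internal node, stored in
    breadth-first order: node 0 is the root, and the children of node i
    are 2*i+1 (left) and 2*i+2 (right). Returns the 0-based index of
    the reached leaf, counted left to right. This layout is a
    hypothetical choice made for illustration.
    """
    node = 0
    for _ in range(depth):
        go_right = W[node] @ x + b[node] > 0
        node = 2 * node + (2 if go_right else 1)
    # convert the breadth-first index of a depth-`depth` node to a leaf index
    return node - (2 ** depth - 1)

# toy example: depth-2 tree over 2-D inputs (3 internal nodes, 4 leaves)
W = np.array([[1.0, 0.0],   # root: split on x[0]
              [0.0, 1.0],   # left child: split on x[1]
              [0.0, 1.0]])  # right child: split on x[1]
b = np.zeros(3)
leaf = route(np.array([0.5, -0.2]), W, b, depth=2)  # -> leaf 2
```

In the toy run, the root sends x rightward (0.5 > 0) and the right child sends it leftward (-0.2 ≤ 0), reaching leaf 2. The paper's contribution is to optimize all the rows of W jointly (together with the leaf parameters) against a global surrogate objective, rather than fixing each row greedily one node at a time.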

Author Information

Mohammad Norouzi (University of Toronto)
Maxwell Collins (UW-Madison)
Matthew A Johnson (Microsoft Research)
David Fleet (University of Toronto)
Pushmeet Kohli (Microsoft Research)

More from the Same Authors