

Poster

Alternating optimization of decision trees, with application to learning sparse oblique trees

Miguel A. Carreira-Perpinan · Pooya Tavallali

Room 210 #56

Keywords: [ Combinatorial Optimization ] [ Sparsity and Compressed Sensing ] [ Classification ] [ Non-Convex Optimization ]


Abstract:

Learning a decision tree from data is a difficult optimization problem. The most widespread algorithm in practice, dating to the 1980s, greedily grows the tree structure by recursively splitting nodes and then possibly prunes back the final tree. The parameters (the decision function) of each internal node are estimated approximately by minimizing an impurity measure. We give an algorithm that, given an input tree (its structure and the parameter values at its nodes), produces a new tree with the same or smaller structure but new parameter values that provably lower or leave unchanged the misclassification error. It applies to both axis-aligned and oblique trees, and our experiments show it consistently outperforms various other algorithms while scaling well to large datasets and trees. Further, the same algorithm can handle a sparsity penalty, so it can learn sparse oblique trees, whose structure is a subset of the original tree's and whose nodes have few nonzero parameters. This combines the best of axis-aligned and oblique trees: flexibility to model correlated data, low generalization error, fast inference, and interpretable nodes that involve only a few features in their decision.
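The abstract does not spell out the node updates, but the following Python sketch illustrates the idea of alternating optimization over a fixed tree structure, under assumptions of ours: nodes are visited bottom-up; each internal node's reduced problem (send each training point to whichever child subtree misclassifies it less) is approximated with L1-regularized logistic regression from scikit-learn, which makes the hyperplane sparse; and a new split is accepted only if it does not increase the 0/1 error, preserving the non-increase property stated above. All names here (Node, tao_pass, optimize_node, lam) are illustrative, not the authors' implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

class Node:
    """A node in a complete binary tree of oblique (hyperplane) splits."""
    def __init__(self, depth, max_depth, n_features, n_classes, rng):
        self.is_leaf = depth == max_depth
        if self.is_leaf:
            self.label = int(rng.integers(n_classes))   # leaf class label
        else:
            self.w = rng.standard_normal(n_features)    # hyperplane weights
            self.b = 0.0                                # hyperplane bias
            self.left = Node(depth + 1, max_depth, n_features, n_classes, rng)
            self.right = Node(depth + 1, max_depth, n_features, n_classes, rng)

def predict_one(node, x):
    while not node.is_leaf:
        node = node.left if x @ node.w + node.b <= 0 else node.right
    return node.label

def subtree_errors(node, X, y):
    """0/1 loss each point incurs if routed into this (sub)tree."""
    return np.array([predict_one(node, x) != yi for x, yi in zip(X, y)], int)

def optimize_node(node, X, y, lam):
    """Reduced problem at one internal node, all other nodes held fixed."""
    if len(X) == 0:
        return
    err_left = subtree_errors(node.left, X, y)
    err_right = subtree_errors(node.right, X, y)
    care = err_left != err_right    # points whose loss depends on this split
    z = (err_right[care] < err_left[care]).astype(int)  # 1 = better sent right
    if care.sum() < 2 or len(np.unique(z)) < 2:
        return                      # degenerate reduced problem; keep split
    before = subtree_errors(node, X, y).sum()
    old_w, old_b = node.w.copy(), node.b
    # L1-penalized logistic regression as a surrogate for the weighted
    # 0/1-loss binary problem; the L1 term zeroes out hyperplane weights
    clf = LogisticRegression(penalty="l1", C=1.0 / lam, solver="liblinear")
    clf.fit(X[care], z)
    node.w, node.b = clf.coef_.ravel(), clf.intercept_[0]
    # accept the new split only if it does not increase the 0/1 error here,
    # which keeps each pass monotone non-increasing in training error
    if subtree_errors(node, X, y).sum() > before:
        node.w, node.b = old_w, old_b

def tao_pass(node, X, y, lam):
    """One bottom-up pass: update leaves first, then internal nodes."""
    if node.is_leaf:
        if len(y):
            node.label = int(np.bincount(y).argmax())   # majority class
        return
    goes_left = X @ node.w + node.b <= 0
    tao_pass(node.left, X[goes_left], y[goes_left], lam)
    tao_pass(node.right, X[~goes_left], y[~goes_left], lam)
    optimize_node(node, X, y, lam)

# Toy usage: a few passes on synthetic data with correlated features
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
tree = Node(0, max_depth=3, n_features=10, n_classes=2, rng=rng)
for _ in range(5):
    tao_pass(tree, X, y, lam=0.1)

In this sketch the penalty strength lam plays the role of the sparsity penalty in the abstract: larger values drive more hyperplane weights to exactly zero, so each node's decision involves only a few features, and nodes whose reduced problems become degenerate are simply left unchanged.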
