

When Do Neural Nets Outperform Boosted Trees on Tabular Data?

Duncan McElfresh · Sujay Khandagale · Jonathan Valverde · Vishak Prasad C · Ganesh Ramakrishnan · Micah Goldblum · Colin White

Great Hall & Hall B1+B2 (level 1) #2021
Livestream: Visit Poster Session 6
Thu 14 Dec 3 p.m. — 5 p.m. PST
[ Paper ] [ Poster ] [ OpenReview ]


Tabular data is one of the most commonly used types of data in machine learning. Despite recent advances in neural nets (NNs) for tabular data, there is still an active discussion on whether NNs generally outperform gradient-boosted decision trees (GBDTs) on tabular data, with several recent works arguing either that GBDTs consistently outperform NNs on tabular data, or vice versa. In this work, we take a step back and question the importance of this debate. To this end, we conduct the largest tabular data analysis to date, comparing 19 algorithms across 176 datasets, and we find that the 'NN vs. GBDT' debate is overemphasized: for a surprisingly high number of datasets, either the performance difference between GBDTs and NNs is negligible, or light hyperparameter tuning on a GBDT is more important than choosing between NNs and GBDTs. Next, we analyze dozens of metafeatures to determine what properties of a dataset make NNs or GBDTs better-suited to perform well. For example, we find that GBDTs are much better than NNs at handling skewed or heavy-tailed feature distributions and other forms of dataset irregularities. Our insights act as a guide for practitioners to determine which techniques may work best on their dataset. Finally, with the goal of accelerating tabular data research, we release the TabZilla Benchmark Suite: a collection of the 36 'hardest' of the datasets we study. Our benchmark suite, codebase, and all raw results are available at
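The abstract notes that GBDTs tend to handle skewed or heavy-tailed feature distributions better than NNs. As a minimal illustration of that kind of dataset check, the sketch below computes per-feature sample skewness (the standardized third moment) and flags features exceeding a threshold. The function names and the threshold of 2.0 are illustrative assumptions, not the paper's actual metafeature pipeline.

```python
import numpy as np

def feature_skewness(X):
    """Per-column sample skewness of a 2-D feature matrix.

    Skewness is the standardized third moment:
    mean(((x - mean) / std) ** 3), computed per feature (column).
    """
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std = np.where(std == 0, 1.0, std)  # avoid division by zero for constant features
    return (((X - mean) / std) ** 3).mean(axis=0)

def flag_skewed_features(X, skew_threshold=2.0):
    """Indices of features whose |skewness| exceeds a (hypothetical) threshold."""
    skew = feature_skewness(X)
    return np.flatnonzero(np.abs(skew) > skew_threshold)

# Example: one roughly symmetric feature, one heavy-tailed (right-skewed) feature.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(size=1000),     # symmetric: skewness near 0
    rng.lognormal(size=1000),  # heavy-tailed: large positive skewness
])
print(flag_skewed_features(X))
```

A practitioner could run a check like this before model selection: if many features are flagged, the abstract's finding suggests a tuned GBDT is the safer starting point.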
