Poster
SnapBoost: A Heterogeneous Boosting Machine
Thomas Parnell · Andreea Anghel · Małgorzata Łazuka · Nikolas Ioannou · Sebastian Kurella · Peshal Agarwal · Nikolaos Papandreou · Haralampos Pozidis

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1484

Modern gradient boosting software frameworks, such as XGBoost and LightGBM, implement Newton descent in a functional space. At each boosting iteration, their goal is to find the base hypothesis, selected from some base hypothesis class, that is closest to the Newton descent direction in a Euclidean sense. Typically, the base hypothesis class is fixed to be all binary decision trees up to a given depth. In this work, we study a Heterogeneous Newton Boosting Machine (HNBM) in which the base hypothesis class may vary across boosting iterations. Specifically, at each boosting iteration, the base hypothesis class is chosen from a fixed set of subclasses by sampling from a probability distribution. We derive a global linear convergence rate for the HNBM under certain assumptions, and show that it agrees with existing rates for Newton's method when the Newton direction can be perfectly fitted by the base hypothesis at each boosting iteration. We then describe a particular realization of an HNBM, SnapBoost, that, at each boosting iteration, randomly selects either a decision tree of variable depth or a linear regressor with random Fourier features. We describe how SnapBoost is implemented, with a focus on training complexity. Finally, we present experimental results, using OpenML and Kaggle datasets, showing that SnapBoost achieves better generalization loss than competing boosting frameworks, without taking significantly longer to tune.

Author Information

Thomas Parnell (IBM Research)
Andreea Anghel (IBM Research)
Małgorzata Łazuka (ETH Zürich)
Nikolas Ioannou (IBM Research)
Sebastian Kurella (ETH Zürich)
Peshal Agarwal (ETH Zürich)
Nikolaos Papandreou (IBM Research Zurich)
Haralampos Pozidis (IBM Research)

More from the Same Authors

  • 2019: Posters and Coffee
    Sameer Kumar · Tomasz Kornuta · Oleg Bakhteev · Hui Guan · Xiaomeng Dong · Minsik Cho · Sören Laue · Theodoros Vasiloudis · Andreea Anghel · Erik Wijmans · Zeyuan Shang · Oleksii Kuchaiev · Ji Lin · Susan Zhang · Ligeng Zhu · Beidi Chen · Vinu Joseph · Jialin Ding · Jonathan Raiman · Ahnjae Shin · Vithursan Thangarasa · Anush Sankaran · Akhil Mathur · Martino Dazzi · Markus Löning · Darryl Ho · Emanuel Zgraggen · Supun Nakandala · Rita Kuznetsova
  • 2019 Poster: SySCD: A System-Aware Parallel Coordinate Descent Algorithm
    Nikolas Ioannou · Celestine Mendler-Dünner · Thomas Parnell
  • 2019 Spotlight: SySCD: A System-Aware Parallel Coordinate Descent Algorithm
    Nikolas Ioannou · Celestine Mendler-Dünner · Thomas Parnell
  • 2018: Posters (all accepted papers) + Break
    Jianyu Wang · Denis Gudovskiy · Ziheng Jiang · Michael Kaufmann · Andreea Anghel · James Bradbury · Nikolas Ioannou · Nitin Agrawal · Emma Tosch · Gyeongin Yu · Keno Fischer · Jarrett Revels · Giuseppe Siracusano · Yaoqing Yang · Jeff Johnson · Yang You · Hector Yuen · Chris Ying · Honglei Liu · Nikoli Dryden · Xiangxi Mo · Yangzihao Wang · Amit Juneja · Micah Smith · Qian Yu · pramod gupta · Deepak Narayanan · Keshav Santhanam · Tim Capes · Abdul Dakkak · Norman Mu · Ke Deng · Liam Li · Joao Carreira · Luis Remis · Deepti Raghavan · Una-May O'Reilly · Amanpreet Singh · Mahmoud (Mido) Assran · Eugene Wu · Eytan Bakshy · Jinliang Wei · Michael Innes · Viral Shah · Haibin Lin · Conrad Sanderson · Ryan Curtin · Marcus Edel
  • 2018 Poster: Snap ML: A Hierarchical Framework for Machine Learning
    Celestine Dünner · Thomas Parnell · Dimitrios Sarigiannis · Nikolas Ioannou · Andreea Anghel · Gummadi Ravi · Madhusudanan Kandasamy · Haralampos Pozidis
  • 2017 Poster: Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems
    Celestine Dünner · Thomas Parnell · Martin Jaggi