HPO-B: A Large-Scale Reproducible Benchmark for Black-Box HPO based on OpenML
Sebastian Pineda Arango · Hadi Jomaa · Martin Wistuba · Josif Grabocka

Hyperparameter optimization (HPO) is a core problem for the machine learning community and remains largely unsolved due to the significant computational resources required to evaluate hyperparameter configurations. As a result, a series of recent works has focused on transfer learning for quickly fine-tuning hyperparameters on a new dataset. Unfortunately, the community does not have a common large-scale benchmark for comparing HPO algorithms. Instead, the de facto practice consists of empirical protocols on arbitrary small-scale meta-datasets that vary inconsistently across publications, making reproducibility a challenge. To resolve this major bottleneck and enable a fair and fast comparison of black-box HPO methods on a level playing field, we propose HPO-B, a new large-scale benchmark in the form of a collection of meta-datasets. Our benchmark is assembled and preprocessed from the OpenML repository and consists of 176 search spaces (algorithms) evaluated sparsely on 196 datasets, with a total of 6.4 million hyperparameter evaluations. To ensure reproducibility on our benchmark, we detail explicit experimental protocols, splits, and evaluation measures for comparing methods for both non-transfer and transfer-learning HPO.
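The evaluation protocol the abstract describes can be illustrated with a minimal sketch: because the benchmark stores pre-computed evaluations of discretized configurations, a black-box optimizer only ever observes (configuration, performance) pairs via table lookup. The sketch below is purely illustrative, assuming a hypothetical meta-dataset layout and a random-search baseline; it is not HPO-B's actual API, and all names and numbers are made up.

```python
import random

def make_meta_dataset(n_configs=100, seed=0):
    """Simulate pre-computed evaluations for one (search space, dataset) pair.

    Hypothetical stand-in for a sparse meta-dataset: each configuration id
    maps to an observed performance score (e.g. validation accuracy).
    """
    rng = random.Random(seed)
    return {cfg_id: rng.random() for cfg_id in range(n_configs)}

def random_search(meta_dataset, budget=20, seed=1):
    """Baseline non-transfer optimizer: sample configurations uniformly
    without replacement and record the incumbent (best-so-far) curve,
    the quantity typically plotted when comparing HPO methods."""
    rng = random.Random(seed)
    candidates = rng.sample(list(meta_dataset), budget)
    best_so_far, trace = float("-inf"), []
    for cfg in candidates:
        score = meta_dataset[cfg]           # black-box lookup, no retraining
        best_so_far = max(best_so_far, score)
        trace.append(best_so_far)
    return trace

trace = random_search(make_meta_dataset(), budget=20)
print(len(trace))  # one incumbent value per evaluation in the budget
```

Transfer-learning HPO methods plug into the same loop, differing only in how the next configuration is chosen (e.g. informed by evaluations on other datasets) rather than sampled uniformly.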

Author Information

Sebastian Pineda Arango (Albert-Ludwigs-Universität Freiburg)
Hadi Jomaa (University of Hildesheim)
Martin Wistuba (Amazon)
Josif Grabocka (Universität Freiburg)