Automated hyperparameter optimization (HPO) can help practitioners obtain peak performance from machine learning models. However, it often provides little insight into the effects of different hyperparameters on the final model performance. This lack of explainability makes it difficult to trust and understand the automated HPO process and its results. We suggest using interpretable machine learning (IML) to gain insights from the experimental data obtained during HPO with Bayesian optimization (BO). BO tends to focus on promising regions with potentially high-performing configurations and thus induces a sampling bias. Hence, many IML techniques, such as the partial dependence plot (PDP), carry the risk of generating biased interpretations. By leveraging the posterior uncertainty of the BO surrogate model, we introduce a variant of the PDP with estimated confidence bands. We propose partitioning the hyperparameter space to obtain more confident and reliable PDPs in relevant sub-regions. In an experimental study, we provide quantitative evidence for the increased quality of the PDPs within these sub-regions.
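The core idea of the abstract, reading marginal hyperparameter effects off a BO surrogate together with its posterior uncertainty, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' estimator: it fits a Gaussian process surrogate (here scikit-learn's GaussianProcessRegressor, an assumed stand-in for the BO surrogate) to evaluated configurations and computes a PDP for one hyperparameter with a naive confidence band from the averaged posterior standard deviation. The toy data, the `pdp_with_bands` helper, and the 1.96-sigma band are all illustrative choices.

```python
# Minimal sketch (assumptions noted above, not the paper's exact estimator):
# a PDP with confidence bands for one hyperparameter, derived from the
# posterior of a GP surrogate fitted to configurations evaluated during BO.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Toy stand-in for BO data: 50 configurations over 2 hyperparameters (X)
# and their observed validation losses (y).
X = rng.uniform(0.0, 1.0, size=(50, 2))
y = (X[:, 0] - 0.3) ** 2 + 0.5 * np.sin(6 * X[:, 1]) + rng.normal(0, 0.05, 50)

surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(X, y)

def pdp_with_bands(model, X, feature, grid_size=20):
    """PDP of one hyperparameter, with a simple band from the GP posterior."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    mean, lower, upper = [], [], []
    for g in grid:
        X_marg = X.copy()
        X_marg[:, feature] = g            # fix the feature of interest
        mu, sd = model.predict(X_marg, return_std=True)
        m, s = mu.mean(), sd.mean()       # average over the other dimensions
        mean.append(m)
        lower.append(m - 1.96 * s)        # illustrative 95%-style band
        upper.append(m + 1.96 * s)
    return grid, np.array(mean), np.array(lower), np.array(upper)

grid, mean, lo, hi = pdp_with_bands(surrogate, X, feature=0)
```

Plotting `grid` against `mean` with the band between `lo` and `hi` gives an uncertainty-aware PDP; the paper's partitioning of the hyperparameter space would then re-estimate such bands within sub-regions where the surrogate is more confident.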
Author Information
Julia Moosbauer (LMU Munich)
Julia Herbinger (Institut für Statistik)
Giuseppe Casalicchio (LMU Munich)
Marius Lindauer (Leibniz University Hannover)
Bernd Bischl (LMU Munich)
More from the Same Authors
- 2021 : OpenML Benchmarking Suites » Bernd Bischl · Giuseppe Casalicchio · Matthias Feurer · Pieter Gijsbers · Frank Hutter · Michel Lang · Rafael Gomes Mantovani · Jan van Rijn · Joaquin Vanschoren
- 2021 : HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO » Katharina Eggensperger · Philipp Müller · Neeratyoy Mallik · Matthias Feurer · Rene Sass · Aaron Klein · Noor Awad · Marius Lindauer · Frank Hutter
- 2021 : Towards modelling hazard factors in unstructured data spaces using gradient-based latent interpolation » Tobias Weber · Michael Ingrisch · Bernd Bischl · David Rügamer
- 2022 : PI is back! Switching Acquisition Functions in Bayesian Optimization » Carolin Benjamins · Elena Raponi · Anja Jankovic · Koen van der Blom · Maria Laura Santoni · Marius Lindauer · Carola Doerr
- 2022 : Towards Automated Design of Bayesian Optimization via Exploratory Landscape Analysis » Carolin Benjamins · Anja Jankovic · Elena Raponi · Koen van der Blom · Marius Lindauer · Carola Doerr
- 2022 : PriorBand: HyperBand + Human Expert Knowledge » Neeratyoy Mallik · Carl Hvarfner · Danny Stoll · Maciej Janowski · Edward Bergman · Marius Lindauer · Luigi Nardi · Frank Hutter
- 2021 : CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning » Carolin Benjamins · Theresa Eimer · Frederik Schubert · André Biedenkapp · Bodo Rosenhahn · Frank Hutter · Marius Lindauer
- 2021 : Hyperparameters in Contextual RL are Highly Situational » Theresa Eimer · Carolin Benjamins · Marius Lindauer
- 2021 Poster: Well-tuned Simple Nets Excel on Tabular Datasets » Arlind Kadra · Marius Lindauer · Frank Hutter · Josif Grabocka