Poster

Hyperparameter Optimization Is Deceiving Us, and How to Stop It

A. Feder Cooper · Yucheng Lu · Jessica Forde · Christopher De Sa

Keywords: [ Optimization ] [ Machine Learning ]

Thu 9 Dec 8:30 a.m. PST — 10 a.m. PST

Abstract:

Recent empirical work shows that inconsistent results based on the choice of hyperparameter optimization (HPO) configuration are a widespread problem in ML research. When comparing two algorithms J and K, searching one subspace can yield the conclusion that J outperforms K, whereas searching another can entail the opposite. In short, the way we choose hyperparameters can deceive us. We provide a theoretical complement to this prior work, arguing that, to avoid such deception, the process of drawing conclusions from HPO should be made more rigorous. We call this process epistemic hyperparameter optimization (EHPO), and put forth a logical framework to capture its semantics and how it can lead to inconsistent conclusions about performance. Our framework enables us to construct EHPO methods that are provably defended against deception, given a bounded compute-time budget t. We demonstrate our framework's utility by proving and empirically validating a defended variant of random search.
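To make the deception concrete, here is a minimal toy sketch (not the paper's method): two hypothetical validation-loss surfaces, loss_J and loss_K, and a vanilla random_search helper, showing how searching two different learning-rate subspaces can rank the same pair of algorithms in opposite orders.

import random

# Toy "validation loss" surfaces for two algorithms J and K as a
# function of learning rate (hypothetical, for illustration only).
# J does best at small learning rates, K at large ones.
def loss_J(lr):
    return abs(lr - 1e-3) + 0.10   # J's sweet spot is near lr = 1e-3

def loss_K(lr):
    return abs(lr - 0.5) + 0.05    # K's sweet spot is near lr = 0.5

def random_search(loss_fn, lo, hi, trials=50, seed=0):
    """Vanilla random search: sample lr uniformly in [lo, hi] and
    return the best (lowest) loss found across the trials."""
    rng = random.Random(seed)
    return min(loss_fn(rng.uniform(lo, hi)) for _ in range(trials))

# Searching subspace A concludes J wins; searching subspace B
# concludes K wins -- the HPO configuration drives the conclusion.
for name, (lo, hi) in {"subspace A": (1e-4, 1e-2),
                       "subspace B": (1e-1, 1.0)}.items():
    best_J = random_search(loss_J, lo, hi)
    best_K = random_search(loss_K, lo, hi)
    winner = "J" if best_J < best_K else "K"
    print(f"{name} [{lo}, {hi}]: best J={best_J:.3f}, "
          f"best K={best_K:.3f} -> conclude {winner} wins")

The paper's defended variant of random search is designed precisely so that, within a bounded compute budget t, no such choice of search configuration can reverse the reported conclusion.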
