Poster in Workshop: Your Model is Wrong: Robustness and misspecification in probabilistic modeling

Make cross-validation Bayes again

Yuling Yao · Aki Vehtari


Abstract:

There are two orthogonal paradigms for hyperparameter inference: either joint estimation in a larger hierarchical Bayesian model, or optimization of the tuning parameter with respect to cross-validation metrics. Both are limited: the “full Bayes” strategy is conceptually unjustified in misspecified models and may severely under- or over-fit observations; the cross-validation strategy, besides its computational cost, typically results in a point estimate, ignoring the uncertainty in hyperparameters. To bridge the two extremes, we present a general paradigm: a full-Bayes model on top of the cross-validated log likelihood. This prediction-aware approach incorporates additional regularization during hyperparameter tuning and facilitates Bayesian workflow in many otherwise black-box learning algorithms. We develop theoretical justification and discuss its application in a model averaging example.
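As a minimal sketch of the core idea (not the authors' implementation), the Python snippet below places a prior on the log of a toy ridge-type shrinkage hyperparameter, treats the 5-fold cross-validated log predictive density as its likelihood, and samples the resulting pseudo-posterior with random-walk Metropolis. The data, model, prior, and sampler are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y_i ~ Normal(theta, 1); a ridge-type hyperparameter
# lam shrinks the estimate of theta toward zero. All modeling
# choices here are illustrative, not the paper's example.
y = rng.normal(loc=1.5, scale=1.0, size=40)
n = len(y)
folds = np.array_split(rng.permutation(n), 5)  # fixed 5-fold split

def cv_log_lik(lam):
    """K-fold cross-validated log predictive density as a
    function of the hyperparameter lam."""
    total = 0.0
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        theta_hat = y[train].sum() / (len(train) + lam)  # ridge estimate
        resid = y[test] - theta_hat
        total += np.sum(-0.5 * resid**2 - 0.5 * np.log(2 * np.pi))
    return total

def log_post(log_lam):
    """Unnormalized log pseudo-posterior: a Normal(0, 1) prior on
    log(lam) plus the cross-validated log likelihood."""
    return -0.5 * log_lam**2 + cv_log_lik(np.exp(log_lam))

# Random-walk Metropolis over log(lam): the output is a posterior
# sample for the hyperparameter rather than a single tuned value.
draws, cur, cur_lp = [], 0.0, log_post(0.0)
for _ in range(2000):
    prop = cur + rng.normal(scale=0.5)
    prop_lp = log_post(prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    draws.append(cur)

lam_draws = np.exp(np.array(draws[500:]))  # drop burn-in
print("posterior mean of lam:", lam_draws.mean())
```

Compared with plain cross-validation tuning, the draws of lam carry uncertainty forward, and compared with joint full-Bayes estimation, the likelihood is the held-out predictive density rather than the in-sample one.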
