Poster
Bayesian Learning via Q-Exponential Process
Shuyi Li · Michael O'Connor · Shiwei Lan
Great Hall & Hall B1+B2 (level 1) #1515
Abstract:
Regularization is one of the most fundamental topics in optimization, statistics and machine learning. To get sparsity in estimating a parameter $\theta\in\mathbb{R}^d$, an $\ell_q$ penalty term, $\Vert\theta\Vert_q$, is usually added to the objective function. What is the probabilistic distribution corresponding to such an $\ell_q$ penalty? What is the \emph{correct} stochastic process corresponding to $\Vert u\Vert_q$ when we model functions $u\in L^q$? This is important for statistically modeling high-dimensional objects such as images, with the penalty preserving certain properties, e.g. edges in the image. In this work, we generalize the $q$-exponential distribution (with density proportional to $\exp(-\frac{1}{2}|u|^q)$) to a stochastic process named the \emph{$q$-exponential (Q-EP) process} that corresponds to the $L_q$ regularization of functions. The key step is to specify consistent multivariate $q$-exponential distributions by choosing from a large family of elliptic contour distributions. The work is closely related to the Besov process, which is usually defined in terms of a series expansion. Q-EP can be regarded as a definition of the Besov process with an explicit probabilistic formulation, direct control on the correlation strength, and a tractable prediction formula. From the Bayesian perspective, Q-EP provides a flexible prior on functions with a sharper penalty ($q<2$) than the commonly used Gaussian process (GP, $q=2$). We compare GP, Besov and Q-EP in modeling functional data, reconstructing images and solving inverse problems, and demonstrate the advantage of our proposed methodology.
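To make the univariate building block concrete, here is a minimal sketch (not the authors' code; the function name `sample_qed` is hypothetical) of sampling from the $q$-exponential density $\propto \exp(-\frac{1}{2}|u|^q)$. The change of variables $T=|u|^q$ gives $T\sim\mathrm{Gamma}(1/q,\ \mathrm{scale}=2)$, so one can draw $T$ and attach a random sign; for $q=2$ this recovers the standard normal.

```python
import numpy as np

def sample_qed(q, size=1, rng=None):
    """Draw samples with density proportional to exp(-|u|^q / 2).

    Uses the transform T = |u|^q ~ Gamma(shape=1/q, scale=2),
    then u = sign * T^(1/q) with a symmetric random sign.
    """
    rng = np.random.default_rng(rng)
    t = rng.gamma(shape=1.0 / q, scale=2.0, size=size)  # T = |u|^q
    sign = rng.choice([-1.0, 1.0], size=size)           # symmetric about 0
    return sign * t ** (1.0 / q)

# Sanity check: q = 2 should match N(0, 1) (mean ~ 0, variance ~ 1).
u = sample_qed(q=2.0, size=100_000, rng=0)
print(u.mean(), u.var())
```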
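The abstract's key step is picking a consistent multivariate $q$-exponential from the elliptic contour family. A generic elliptic contour sampler uses $u = \mu + R\,L S$ with $S$ uniform on the unit sphere and $C = LL^\top$; the sketch below \emph{assumes} the radial law $R^q\sim\chi^2_N$, one consistent choice that reduces exactly to $\mathcal{N}(\mu, C)$ at $q=2$, and should not be read as the paper's exact normalization.

```python
import numpy as np

def sample_elliptic_qed(mu, C, q, size=1, rng=None):
    """Elliptic-contour sketch u = mu + R * (S @ L.T), with the
    *assumed* radial law R^q ~ chi-square(N); q = 2 gives N(mu, C)."""
    rng = np.random.default_rng(rng)
    N = len(mu)
    L = np.linalg.cholesky(C)                              # C = L @ L.T
    z = rng.standard_normal((size, N))
    S = z / np.linalg.norm(z, axis=1, keepdims=True)       # uniform on sphere
    R = rng.chisquare(df=N, size=(size, 1)) ** (1.0 / q)   # R^q ~ chi2(N)
    return mu + R * (S @ L.T)

mu = np.zeros(3)
C = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
u = sample_elliptic_qed(mu, C, q=1.0, size=50_000, rng=0)
# Empirical covariance is proportional to C (equal to C when q = 2).
print(np.cov(u, rowvar=False))
```

For $q=2$, $R=\sqrt{\chi^2_N}$ and $R\,S$ is a standard normal vector, so $u\sim\mathcal{N}(\mu, C)$; varying $q$ below 2 sharpens the contours, matching the sparsity-promoting behavior described above.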