
Reconciling "priors" & "priors" without prejudice?
Remi Gribonval · Pierre Machart

Fri Dec 06 11:52 AM -- 11:56 AM (PST) @ Harvey's Convention Center Floor, CC

There are two major routes to address linear inverse problems. Whereas regularization-based approaches build estimators as solutions of penalized regression optimization problems, Bayesian estimators rely on the posterior distribution of the unknown, given some assumed family of priors. While these may seem radically different approaches, recent results have shown that, in the context of additive white Gaussian denoising, the Bayesian conditional mean estimator is always the solution of a penalized regression problem. The contribution of this paper is twofold. First, we extend the additive white Gaussian denoising results to general linear inverse problems with colored Gaussian noise. Second, we characterize conditions under which the penalty function associated with the conditional mean estimator can satisfy certain popular properties such as convexity, separability, and smoothness. This sheds light on a tradeoff between computational efficiency and estimation accuracy in sparse regularization, and draws connections between Bayesian estimation and proximal optimization.
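The equivalence the abstract refers to can be seen already in the simplest scalar case. The following sketch is not from the paper itself; it assumes a scalar Gaussian prior x ~ N(0, tau2) and additive Gaussian noise of variance sigma2, for which the conditional mean E[x | y] has the closed-form shrinkage tau2/(tau2+sigma2)·y, and checks numerically that this estimator coincides with the minimizer of a penalized regression objective with a quadratic penalty:

```python
# Illustrative sketch (assumed toy setting, not the paper's general result):
# scalar prior x ~ N(0, tau2), observation y = x + w with w ~ N(0, sigma2).
# The conditional mean (MMSE) estimator is the linear shrinkage below, and it
# equals the minimizer of 0.5*(y - z)^2 + phi(z) for the quadratic penalty
# phi(z) = sigma2 / (2*tau2) * z^2.

def mmse_gaussian(y, tau2, sigma2):
    """Conditional mean E[x | y] for a scalar Gaussian prior (closed form)."""
    return tau2 / (tau2 + sigma2) * y

def penalized_argmin(y, tau2, sigma2, step=1e-3, span=5.0):
    """Grid-search minimizer of 0.5*(y - z)^2 + sigma2/(2*tau2)*z^2."""
    n = int(span / step)
    best_z, best_val = 0.0, float("inf")
    for i in range(-n, n + 1):
        z = i * step
        val = 0.5 * (y - z) ** 2 + sigma2 / (2 * tau2) * z ** 2
        if val < best_val:
            best_z, best_val = z, val
    return best_z

y, tau2, sigma2 = 1.0, 1.0, 0.5
print(mmse_gaussian(y, tau2, sigma2))     # 2/3
print(penalized_argmin(y, tau2, sigma2))  # ~0.667, same up to grid resolution
```

For a Gaussian prior the induced penalty is convex, separable, and smooth; the paper's contribution is characterizing when such properties survive (or fail) for general priors and general linear inverse problems with colored Gaussian noise.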

Author Information

Remi Gribonval (Inria & ENS de Lyon)
Pierre Machart (INRIA)

