Workshop
Solving inverse problems with deep networks: New architectures, theoretical foundations, and applications
Reinhard Heckel · Paul Hand · Richard Baraniuk · Joan Bruna · Alexandros Dimakis · Deanna Needell

Fri Dec 13 08:00 AM -- 06:00 PM (PST) @ West 301 - 305
Event URL: https://deep-inverse.org

There is a long history of algorithmic development for solving inverse problems arising in sensing and imaging systems and beyond. Examples include medical and computational imaging, compressive sensing, and community detection in networks. Until recently, most algorithms for solving inverse problems in the imaging and network sciences were based on static signal models derived from physics or intuition, such as wavelets or sparse representations.

Today, the best-performing approaches to the aforementioned image reconstruction and sensing problems are based on deep learning. These methods learn various elements of the pipeline, including i) signal representations, ii) step sizes and parameters of iterative algorithms, iii) regularizers, and iv) entire inverse functions. For example, it has recently been shown that transforming an iterative, physics-based algorithm into a deep network whose parameters are learned from training data offers faster convergence and/or better-quality solutions for a variety of inverse problems. Moreover, even with very little or no learning, deep neural networks enable superior performance on classical linear inverse problems such as denoising and compressive sensing. Motivated by these success stories, researchers are redesigning traditional imaging and sensing systems.
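
To make the unrolling idea concrete, here is a minimal NumPy sketch in the spirit of LISTA: each "layer" applies one ISTA step whose matrices and threshold are initialized from the forward operator and would, in the learned setting, be trained from data. The operator, dimensions, and threshold below are illustrative assumptions, not any particular paper's setup.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of the l1 norm (elementwise shrinkage).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def unrolled_ista(y, W_e, W_s, theta, n_layers=10):
        # Each "layer" is one ISTA step; in LISTA, W_e, W_s, theta are learned.
        x = soft_threshold(W_e @ y, theta)
        for _ in range(n_layers - 1):
            x = soft_threshold(W_e @ y + W_s @ x, theta)
        return x

    rng = np.random.default_rng(0)
    m, d = 50, 200
    A = rng.standard_normal((m, d)) / np.sqrt(m)   # forward operator: y = A x + noise
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the data term
    W_e = A.T / L                                  # physics-based initialization: A^T / L
    W_s = np.eye(d) - (A.T @ A) / L                # I - A^T A / L
    theta = 0.1 / L                                # shrinkage threshold lambda / L

    x_true = np.zeros(d)
    x_true[rng.choice(d, 10, replace=False)] = 1.0
    y = A @ x_true + 0.01 * rng.standard_normal(m)
    x_hat = unrolled_ista(y, W_e, W_s, theta)

With this initialization the network exactly reproduces plain ISTA; training W_e, W_s, and theta on example pairs is what yields the reported gains in convergence speed and reconstruction quality.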

However, the field is still largely wide open, with a range of theoretical and practical questions unanswered. In particular, approaches based on deep neural networks often lack the guarantees of traditional physics-based methods and, while typically superior, can make drastic reconstruction errors, such as hallucinating a tumor in an MRI reconstruction.

This workshop aims to bring together theoreticians and practitioners in order to chart recent advances and discuss new directions in approaches based on deep neural networks for solving inverse problems in the imaging and network sciences.

Fri 8:30 a.m. - 8:35 a.m.
Opening Remarks
Reinhard Heckel, Paul Hand, Alex Dimakis, Joan Bruna, Deanna Needell, Richard Baraniuk
Fri 8:40 a.m. - 9:10 a.m.

Using a low-dimensional parametrization of signals is a generic and powerful way to enhance performance in signal processing and statistical inference. A very popular and widely explored type of dimensionality reduction is sparsity; another type is generative modelling of signal distributions. Generative models based on neural networks, such as GANs or variational auto-encoders, are particularly performant and are gaining in applicability. In this paper we study spiked matrix models, where a low-rank matrix is observed through a noisy channel. The variant of this problem with a sparse spike structure has attracted broad attention in past literature. Here, we replace the sparsity assumption by generative modelling and investigate the consequences for statistical and algorithmic properties. We analyze the Bayes-optimal performance under specific generative models for the spike. In contrast with the sparsity assumption, we do not observe regions of parameters where the statistically optimal performance is superior to the best known algorithmic performance. We show that in the analyzed cases the approximate message passing algorithm is able to reach optimal performance. We also design enhanced spectral algorithms and analyze their performance and thresholds using random matrix theory, showing their superiority to classical principal component analysis. We complement our theoretical results by illustrating the performance of the spectral algorithms when the spikes come from real datasets.

Lenka Zdeborová
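
As a concrete reference point for this talk, here is a minimal NumPy sketch of a spiked Wigner observation whose spike comes from a fixed, random two-layer ReLU generative network, estimated with the classical PCA baseline the abstract compares against. The network widths, rank, and signal-to-noise ratio are illustrative assumptions; the talk's AMP and enhanced spectral methods are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    k, d, snr = 10, 1000, 4.0

    # Spike v = G(z): a fixed two-layer ReLU generative network applied to latent z.
    W1 = rng.standard_normal((4 * k, k))
    W2 = rng.standard_normal((d, 4 * k))
    z = rng.standard_normal(k)
    v = W2 @ np.maximum(W1 @ z, 0.0)
    v *= np.sqrt(d) / np.linalg.norm(v)            # normalize so ||v||^2 = d

    # Noisy rank-one observation: Y = (snr/d) v v^T + Wigner noise of variance 1/d.
    G_noise = rng.standard_normal((d, d))
    Y = (snr / d) * np.outer(v, v) + (G_noise + G_noise.T) / np.sqrt(2 * d)

    # Classical PCA baseline: the top eigenvector of Y estimates the spike direction.
    eigvals, eigvecs = np.linalg.eigh(Y)
    v_hat = eigvecs[:, -1] * np.sqrt(d)
    overlap = abs(v_hat @ v) / d                   # 1 = perfect recovery, 0 = random
    print(f"PCA overlap with the spike: {overlap:.2f}")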
Fri 9:10 a.m. - 9:40 a.m.
We study the robust one-bit compressed sensing problem, whose goal is to design an algorithm that faithfully recovers any sparse target vector $\theta_0\in\mathbb{R}^d$ \emph{uniformly} from $m$ quantized noisy measurements. Under the assumption that the measurements are sub-Gaussian, to recover any $k$-sparse $\theta_0$ ($k\ll d$) \emph{uniformly} up to an error $\varepsilon$ with high probability, the best known computationally tractable algorithm requires\footnote{Here, an algorithm is ``computationally tractable'' if it has provable convergence guarantees. The notation $\tilde{\mathcal{O}}(\cdot)$ omits a logarithmic factor of $\varepsilon^{-1}$.} $m\geq\tilde{\mathcal{O}}(k\log d/\varepsilon^4)$. In this paper, we consider a new framework for the one-bit sensing problem where the sparsity is implicitly enforced by mapping a low-dimensional representation $x_0$ through a known $n$-layer ReLU generative network $G:\mathbb{R}^k\rightarrow\mathbb{R}^d$. Such a framework places a low-dimensional prior on $\theta_0$ without a known basis. We propose to recover the target $G(x_0)$ via an unconstrained empirical risk minimization (ERM) problem under a much weaker \emph{sub-exponential measurement assumption}. For this problem, we establish a joint statistical and computational analysis. In particular, we prove that the ERM estimator in this new framework achieves an improved statistical rate of $m=\tilde{\mathcal{O}}(kn\log d/\varepsilon^2)$, recovering any $G(x_0)$ uniformly up to an error $\varepsilon$. Moreover, from the lens of computation, despite non-convexity, we prove that the objective of our ERM problem has no spurious stationary points; that is, any stationary point is equally good for recovering the true target up to scaling, with a certain accuracy. Our analysis sheds some light on the possibility of inverting a deep generative model under partial and quantized measurements, complementing the recent success of using deep generative models for inverse problems.
Shuang Qiu, Xiaohan Wei, Zhuoran Yang
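
For intuition, here is a minimal NumPy sketch of recovering a signal in the range of a known ReLU generative network from one-bit measurements. The objective is a simple Plan-Vershynin-style correlation loss, used here as a stand-in and not necessarily the paper's exact ERM objective; the network, dimensions, and step size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    k, h, d, m = 5, 20, 100, 2000

    # A fixed (known) one-hidden-layer ReLU generative network G: R^k -> R^d.
    W1 = rng.standard_normal((h, k)) / np.sqrt(k)
    W2 = rng.standard_normal((d, h)) / np.sqrt(h)
    G = lambda x: W2 @ np.maximum(W1 @ x, 0.0)

    x0 = rng.standard_normal(k)
    theta0 = G(x0)                              # target signal in the range of G
    A = rng.standard_normal((m, d))
    y = np.sign(A @ theta0)                     # one-bit (sign) measurements

    def grad(x, c=1.0):
        # Gradient of f(x) = ||G(x)||^2 / 2 - c * <y, A G(x)> / m  w.r.t. x,
        # back-propagated by hand through the ReLU layer.
        pre = W1 @ x
        g_out = G(x) - c * (A.T @ y) / m        # gradient w.r.t. the output G(x)
        return W1.T @ ((pre > 0) * (W2.T @ g_out))

    x = rng.standard_normal(k)
    for _ in range(1000):                       # plain gradient descent
        x -= 0.02 * grad(x)

    est = G(x)
    cos = est @ theta0 / (np.linalg.norm(est) * np.linalg.norm(theta0))
    print(f"cosine similarity to the target (recovery up to scale): {cos:.3f}")

Since one-bit measurements destroy scale information, success is measured by the angle to the target, matching the abstract's "recovery up to scaling".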
Fri 9:40 a.m. - 10:30 a.m.
Coffee Break (Break)
Fri 10:30 a.m. - 11:00 a.m.

Computational imaging involves the joint design of imaging system hardware and software, optimizing across the entire pipeline from acquisition to reconstruction. Computers can replace bulky and expensive optics by solving computational inverse problems. This talk will describe new microscopes that use computational imaging to enable 3D fluorescence and phase measurement, with image reconstruction algorithms based on large-scale nonlinear, non-convex optimization combined with unrolled neural networks. We further discuss engineering the data capture of computational microscopes by end-to-end learned design.

Laura Waller
Fri 11:00 a.m. - 11:30 a.m.
Denoising via Early Stopping (Talk)
Mahdi Soltanolkotabi
Fri 11:30 a.m. - 12:00 p.m.

Structural optimization is a popular method for designing objects such as bridge trusses, airplane wings, and optical devices. Unfortunately, the quality of solutions depends heavily on how the problem is parameterized. In this paper, we propose using the implicit bias over functions induced by neural networks to improve the parameterization of structural optimization. Rather than directly optimizing densities on a grid, we instead optimize the parameters of a neural network which outputs those densities. This reparameterization leads to different and often better solutions. On a selection of 116 structural optimization tasks, our approach produces an optimal design 50% more often than the best baseline method.

Stephan Hoyer, Jascha Sohl-Dickstein, Sam Greydanus
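
To illustrate the reparameterization, here is a minimal NumPy sketch on a toy 1-D density-matching objective: the same loss is minimized once over grid densities directly and once over the weights of a small ReLU network whose output is the density field. The toy loss, grid size, and network width are illustrative assumptions; the paper optimizes physics-based structural objectives with automatic differentiation.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    coords = np.linspace(-1.0, 1.0, n)
    target = (np.abs(coords) < 0.3).astype(float)    # toy "optimal" density pattern

    def loss(rho):
        return np.sum((rho - target) ** 2)           # stand-in for a physics loss

    # Direct parameterization: gradient descent on the grid densities themselves.
    rho = np.full(n, 0.5)
    for _ in range(100):
        rho -= 0.25 * 2.0 * (rho - target)           # exact gradient of the toy loss

    # Neural reparameterization: densities are the OUTPUT of a small ReLU net;
    # we optimize the net's weights instead (finite differences for brevity).
    h = 16
    params = 0.5 * rng.standard_normal(3 * h)        # packs [W1, b1, w2]

    def densities(p):
        W1, b1, w2 = p[:h], p[h:2 * h], p[2 * h:]
        hidden = np.maximum(np.outer(W1, coords) + b1[:, None], 0.0)
        return 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # densities in (0, 1)

    def num_grad(p, eps=1e-5):
        g = np.zeros_like(p)
        for i in range(p.size):
            dp = np.zeros_like(p); dp[i] = eps
            g[i] = (loss(densities(p + dp)) - loss(densities(p - dp))) / (2 * eps)
        return g

    for _ in range(300):
        params -= 0.1 * num_grad(params)
    print(loss(rho), loss(densities(params)))

Both routes minimize the same objective, but the network's implicit bias changes which solutions gradient descent reaches, which is the effect the paper measures.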
Fri 12:00 p.m. - 2:00 p.m.
Lunch Break (Break)
Fri 2:00 p.m. - 2:30 p.m.
Learning-Based Low-Rank Approximations (Talk)
Piotr Indyk
Fri 2:30 p.m. - 3:00 p.m.

We will discuss a self-supervised approach to the foundational inverse problem of denoising (Noise2Self). By taking advantage of statistical independence in the noise, we can estimate the mean-squared error of a large class of deep architectures without access to ground truth. This allows us to train a neural network to denoise from noisy data alone, and to compare architectures, selecting the one that will produce images with the lowest MSE. However, architectures with the same MSE performance can produce qualitatively different results, i.e., the hypersurface of images with fixed MSE is very heterogeneous. We will discuss ongoing work on understanding the types of artifacts to which different denoising architectures give rise.

Joshua Batson
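
The core trick can be shown in a few lines: for a denoiser whose prediction at each pixel ignores that pixel (a "J-invariant" function), the self-supervised MSE against the noisy image equals the true MSE plus a constant noise variance, so it ranks denoisers without ground truth. A minimal NumPy sketch with a toy image and masked mean filters, both illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    clean = np.kron(rng.random((8, 8)) > 0.5, np.ones((8, 8)))  # toy 64x64 image
    noisy = clean + 0.5 * rng.standard_normal(clean.shape)

    def masked_mean_filter(img, radius):
        # J-invariant denoiser: each pixel is predicted from its neighbors only.
        out = np.empty_like(img)
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
                j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
                patch = img[i0:i1, j0:j1].copy()
                patch[i - i0, j - j0] = np.nan          # exclude the pixel itself
                out[i, j] = np.nanmean(patch)
        return out

    # The self-supervised loss (no ground truth) ranks the two filters the same
    # way the true MSE does, up to an additive noise-variance constant.
    for r in (1, 3):
        den = masked_mean_filter(noisy, r)
        ss_mse = np.mean((den - noisy) ** 2)            # computable from noisy data
        true_mse = np.mean((den - clean) ** 2)          # needs the clean image
        print(f"radius={r}: self-supervised={ss_mse:.3f}, true={true_mse:.3f}")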
Fri 3:00 p.m. - 3:30 p.m.

Regularization techniques are widely employed in the solution of inverse problems in data analysis and scientific computing, owing to their effectiveness in addressing the difficulties caused by ill-posedness. In their most common manifestation, these methods take the form of penalty functions added to the objective in variational approaches to solving inverse problems. The purpose of the penalty function is to induce a desired structure in the solution, and these functions are specified based on prior domain-specific expertise. We consider the problem of learning suitable regularization functions from data in settings in which precise domain knowledge is not directly available; the objective is to identify a regularizer that promotes the type of structure contained in the data. The regularizers obtained using our framework are specified as convex functions that can be computed efficiently via semidefinite programming. Our approach to learning such semidefinite regularizers combines recent techniques for rank minimization problems with the Operator Sinkhorn procedure. (Joint work with Yong Sheng Soh)

Venkat Chandrasekaran
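
For orientation, one classical member of the SDP-representable regularizer family this talk generalizes is the nuclear norm. A minimal NumPy sketch of the variational problem min_X ||X - Y||_F^2 / 2 + lam * ||X||_*, whose solution is singular-value soft-thresholding; the data and lam are illustrative assumptions, and the talk's learned regularizers are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    low_rank = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))
    Y = low_rank + 0.3 * rng.standard_normal((30, 30))   # noisy observation

    # Proximal operator of lam * ||.||_* : soft-threshold the singular values.
    lam = 2.0
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    X_hat = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

    rel = lambda M: np.linalg.norm(M - low_rank) / np.linalg.norm(low_rank)
    print(f"relative error: noisy {rel(Y):.2f} -> denoised {rel(X_hat):.2f}")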
Fri 4:15 p.m. - 6:00 p.m.
Poster Session
Jonathan Scarlett, Piotr Indyk, Ali Vakilian, Adrian Weller, Partha Mitra, Benjamin Aubin, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová, Kristina Monakhova, Joshua Yurtsever, Laura Waller, Hendrik Sommerhoff, Michael Moeller, Rushil Anirudh, Shuang Qiu, Xiaohan Wei, Zhuoran Yang, Jayaraman J. Thiagarajan, Salman Asif, Michael Gillhofer, Johannes Brandstetter, Sepp Hochreiter, Felix Petersen, Dhruv Patel, Assad Oberai, Akshay Kamath, Sushrut Karmalkar, Eric Price, Ali Ahmed, Zahra Kadkhodaie, Sreyas Mohan, Eero Simoncelli, Carlos Fernandez-Granda, Oscar Leong, Wesam Sakla, Rebecca Willett, Stephan Hoyer, Jascha Sohl-Dickstein, Sam Greydanus, Gauri Jagatap, Chinmay Hegde, Michael Kellman, Jon Tamir, Numan Laanait, Ousmane Dia, Mirco Ravanelli, Jonathan Binas, Negar Rostamzadeh, Shirin Jalali, Tiantian Fang, Alex Schwing, Sébastien Lachapelle, Philippe Brouillard, Tristan Deleu, Simon Lacoste-Julien, Stella Yu, Arya Mazumdar, Ankit Singh Rawat, Yue Zhao, Jianshu Chen, Rebecca Li, Hubert Ramsauer, Gabrio Rizzuti, Nikolaos Mitsakos, Dingzhou Cao, Thomas Strohmer, Yang Li, Pei Peng, Greg Ongie

Author Information

Reinhard Heckel (TUM)
Paul Hand (Northeastern University)
Richard Baraniuk (Rice University)
Joan Bruna (NYU)
Alex Dimakis (University of Texas, Austin)
Deanna Needell (UCLA)
