
Invertibility of Convolutional Generative Networks from Partial Measurements
Fangchang Ma · Ulas Ayaz · Sertac Karaman

Tue Dec 04 07:45 AM -- 09:45 AM (PST) @ Room 210 #10

In this work, we present new theoretical results on convolutional generative neural networks, in particular their invertibility (i.e., the recovery of the input latent code given the network output). The study of the network inversion problem is motivated by image inpainting and the mode collapse problem in GAN training. Network inversion is a highly non-convex problem, and thus is typically considered computationally intractable and lacking optimality guarantees. However, we rigorously prove that, under some mild technical assumptions, the input of a two-layer convolutional generative network can be recovered from the network output efficiently using simple gradient descent. This new theoretical finding implies that the mapping from the low-dimensional latent space to the high-dimensional image space is bijective (i.e., one-to-one). In addition, the same conclusion holds even when the network output is only partially observed (i.e., with missing pixels). Our theorems hold for two-layer convolutional generative networks with ReLU as the activation function, but we demonstrate empirically that the same conclusion extends to multi-layer networks and networks with other activation functions, including leaky ReLU, sigmoid, and tanh.
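The inversion procedure the abstract describes can be illustrated with a minimal NumPy sketch. This is not the paper's method: a one-layer fully connected ReLU generator stands in for the two-layer convolutional network, and all dimensions, step sizes, and the 60% observation mask are illustrative assumptions. The sketch recovers the latent code from partially observed outputs by plain gradient descent on the masked squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

k, n = 10, 200                              # latent dim k << output dim n
W = rng.normal(size=(n, k)) / np.sqrt(n)    # random expansive weights (illustrative)

def G(z):
    # One-layer ReLU generator; a simplified stand-in for the paper's
    # two-layer convolutional network.
    return np.maximum(W @ z, 0.0)

z_true = rng.normal(size=k)
mask = rng.random(n) < 0.6                  # observe ~60% of the output "pixels"
y = mask * G(z_true)                        # partial measurements

def invert(z0, steps=2000, lr=0.5):
    # Gradient descent on 0.5 * ||mask * (G(z) - y)||^2.
    z = z0.copy()
    for _ in range(steps):
        pre = W @ z
        residual = mask * (np.maximum(pre, 0.0) - y)
        z -= lr * (W.T @ (residual * (pre > 0)))   # subgradient through ReLU
    return z

# A few random restarts guard against an occasional bad initialization.
candidates = [invert(0.1 * rng.normal(size=k)) for _ in range(5)]
z_hat = min(candidates, key=lambda z: np.sum((mask * (G(z) - y)) ** 2))
print(np.linalg.norm(z_hat - z_true))       # recovery error; small on success
```

In this toy setting the random weights make the map expansive, mirroring the paper's assumptions, and descent from a random start typically lands on the true latent code even though most of the output is discarded by the mask.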

Author Information

Fangchang Ma (MIT)

Fangchang Ma is a research scientist at Apple AI/ML. He received his PhD from the Massachusetts Institute of Technology in 2019. His interests include computer vision, machine learning, AR/VR, and robotics.

Ulas Ayaz (Massachusetts Institute of Technology / Lyft)
Sertac Karaman (MIT)
