Bridging Explicit and Implicit Deep Generative Models via Neural Stein Estimators
Qitian Wu · Rui Gao · Hongyuan Zha

Wed Dec 08 12:30 AM -- 02:00 AM (PST)

There are two types of deep generative models: explicit and implicit. The former defines an explicit density form that allows likelihood inference, while the latter learns a flexible transformation from random noise to generated samples. Although both classes of generative models have shown great power in many applications, each, when used alone, suffers from its own limitations and drawbacks. To take full advantage of both models and enable mutual compensation, we propose a novel joint training framework that bridges an explicit (unnormalized) density estimator and an implicit sample generator via Stein discrepancy. We show that our method 1) induces novel mutual regularization via kernel Sobolev norm penalization and Moreau-Yosida regularization, and 2) stabilizes the training dynamics. Empirically, we demonstrate that the proposed method helps the density estimator identify data modes more accurately and guides the generator to produce higher-quality samples, compared with training either model alone. The new approach also shows promising results when the training samples are contaminated or limited.
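For background on the Stein discrepancy that bridges the two models, the sketch below computes the kernelized Stein discrepancy (KSD) between an unnormalized density's score function and a set of samples, using an RBF base kernel. This is a standard textbook estimator for illustration only; it is not the paper's neural Stein estimator, and the kernel, bandwidth, and toy Gaussian target are assumptions.

```python
import numpy as np

def rbf_stein_kernel(x, y, score_fn, h=1.0):
    # Stein kernel u_p(x, y) for an RBF base kernel k(x, y) = exp(-||x - y||^2 / (2 h^2)):
    # u_p(x, y) = s(x)·s(y) k + s(x)·∇_y k + s(y)·∇_x k + tr(∇_x ∇_y k),
    # where s = ∇ log p is the score of the (unnormalized) density p.
    d = x.shape[-1]
    diff = x - y
    sq = np.sum(diff ** 2)
    k = np.exp(-sq / (2 * h ** 2))
    sx, sy = score_fn(x), score_fn(y)
    grad_y_k = k * diff / h ** 2     # gradient of k w.r.t. y
    grad_x_k = -k * diff / h ** 2    # gradient of k w.r.t. x
    trace_term = k * (d / h ** 2 - sq / h ** 4)
    return sx @ sy * k + sx @ grad_y_k + sy @ grad_x_k + trace_term

def ksd(samples, score_fn, h=1.0):
    # V-statistic estimate of the squared KSD between p (via its score)
    # and the empirical distribution of `samples`.
    n = len(samples)
    total = sum(rbf_stein_kernel(samples[i], samples[j], score_fn, h)
                for i in range(n) for j in range(n))
    return total / n ** 2

# Toy check: score of a standard Gaussian is s(x) = -x. Samples drawn from
# that Gaussian should give a smaller KSD than shifted (mismatched) samples.
rng = np.random.default_rng(0)
good = rng.standard_normal((100, 2))
bad = good + 3.0
score = lambda x: -x
print(ksd(good, score), ksd(bad, score))
```

In the paper's joint framework this kind of discrepancy is what couples the estimator's score with the generator's samples; here it only serves to show the quantity being minimized. Note the KSD needs only the score of the density, so the normalizing constant never appears.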

Author Information

Qitian Wu (Shanghai Jiao Tong University)
Rui Gao (University of Texas at Austin)
Hongyuan Zha (Georgia Tech)