The key objective of Generative Adversarial Networks (GANs) is to generate new data with the same statistics as the provided training data. However, multiple recent works show that state-of-the-art architectures still struggle to achieve this goal. In particular, they report an elevated amount of high frequencies in the spectral statistics which makes it straightforward to distinguish real and generated images. Explanations for this phenomenon are controversial: while most works attribute the artifacts to the generator, other works point to the discriminator. We take a sober look at these explanations and provide insights on what makes proposed measures against high-frequency artifacts effective. To this end, we first independently assess the architectures of both the generator and the discriminator and investigate whether they exhibit a frequency bias that makes learning the distribution of high-frequency content particularly problematic. Based on these experiments, we make the following four observations: 1) Different upsampling operations bias the generator towards different spectral properties. 2) Checkerboard artifacts introduced by upsampling cannot explain the spectral discrepancies alone, as the generator is able to compensate for them. 3) The discriminator does not struggle with detecting high frequencies per se but rather struggles with frequencies of low magnitude. 4) The downsampling operations in the discriminator can impair the quality of the training signal it provides. In light of these findings, we analyze proposed measures against high-frequency artifacts in state-of-the-art GAN training but find that none of the existing approaches can fully resolve spectral artifacts yet. Our results suggest that there is great potential in improving the discriminator and that this could be key to matching the distribution of the training data more closely.
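Spectral discrepancies of this kind are commonly quantified with an azimuthally averaged (reduced) power spectrum: the 2D power spectrum of an image is averaged over all frequency bins at the same radial distance from the DC component. A minimal NumPy sketch of this statistic (the function name `reduced_spectrum` and the simple integer radial binning are illustrative choices, not necessarily the paper's exact implementation):

```python
import numpy as np

def reduced_spectrum(image):
    """Azimuthally averaged power spectrum of a 2D grayscale image.

    Returns a 1D profile of mean spectral power as a function of
    radial frequency, a common statistic for comparing real and
    generated images in the frequency domain.
    """
    h, w = image.shape
    # Centered 2D power spectrum.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    # Radial distance of every frequency bin from the DC component.
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)
    # Average power over all bins that share the same integer radius.
    total = np.bincount(r.ravel(), weights=spectrum.ravel())
    counts = np.bincount(r.ravel())
    return total / counts
```

An elevated tail of this profile for generated images, relative to real ones, is the high-frequency artifact the abstract refers to; conversely, a low-pass operation (e.g. a small box blur) visibly suppresses the tail.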
Author Information
Katja Schwarz (University of Tübingen)
Yiyi Liao (University of Tübingen)
Andreas Geiger (MPI Tübingen)
More from the Same Authors
- 2021: STEP: Segmenting and Tracking Every Pixel
  Mark Weber · Jun Xie · Maxwell Collins · Yukun Zhu · Paul Voigtlaender · Hartwig Adam · Bradley Green · Andreas Geiger · Bastian Leibe · Daniel Cremers · Aljosa Osep · Laura Leal-Taixé · Liang-Chieh Chen
- 2022 Poster: VoxGRAF: Fast 3D-Aware Image Synthesis with Sparse Voxel Grids
  Katja Schwarz · Axel Sauer · Michael Niemeyer · Yiyi Liao · Andreas Geiger
- 2021 Oral: Shape As Points: A Differentiable Poisson Solver
  Songyou Peng · Chiyu Jiang · Yiyi Liao · Michael Niemeyer · Marc Pollefeys · Andreas Geiger
- 2021 Poster: ATISS: Autoregressive Transformers for Indoor Scene Synthesis
  Despoina Paschalidou · Amlan Kar · Maria Shugrina · Karsten Kreis · Andreas Geiger · Sanja Fidler
- 2021 Poster: Shape As Points: A Differentiable Poisson Solver
  Songyou Peng · Chiyu Jiang · Yiyi Liao · Michael Niemeyer · Marc Pollefeys · Andreas Geiger
- 2021 Poster: Projected GANs Converge Faster
  Axel Sauer · Kashyap Chitta · Jens Müller · Andreas Geiger
- 2021 Poster: MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images
  Shaofei Wang · Marko Mihajlovic · Qianli Ma · Andreas Geiger · Siyu Tang
- 2020 Poster: GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis
  Katja Schwarz · Yiyi Liao · Michael Niemeyer · Andreas Geiger
- 2017 Poster: The Numerics of GANs
  Lars Mescheder · Sebastian Nowozin · Andreas Geiger
- 2017 Spotlight: The Numerics of GANs
  Lars Mescheder · Sebastian Nowozin · Andreas Geiger