Conditional Generative Adversarial Networks (cGANs) generate realistic images by incorporating class information into the GAN. One of the most popular cGANs is the auxiliary classifier GAN with softmax cross-entropy loss (ACGAN), yet training ACGAN is widely known to become challenging as the number of classes in the dataset increases. ACGAN also tends to generate easily classifiable samples that lack diversity. In this paper, we introduce two cures for ACGAN. First, we identify that exploding gradients in the classifier can cause an undesirable collapse in early training, and that projecting input vectors onto a unit hypersphere resolves the problem. Second, we propose the Data-to-Data Cross-Entropy loss (D2D-CE) to exploit relational information in the class-labeled dataset. On this foundation, we propose the Rebooted Auxiliary Classifier Generative Adversarial Network (ReACGAN). Experimental results show that ReACGAN achieves state-of-the-art generation results on the CIFAR10, Tiny-ImageNet, CUB200, and ImageNet datasets. We also verify that ReACGAN benefits from differentiable augmentations and that D2D-CE harmonizes with the StyleGAN2 architecture. Model weights and a software package that provides implementations of representative cGANs and all experiments in our paper are available at https://github.com/POSTECH-CVLab/PyTorch-StudioGAN.
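For intuition, the sketch below illustrates the two ideas summarized in the abstract: classifier embeddings and class proxies are projected onto the unit hypersphere before computing similarities, and the cross-entropy contrasts each sample against same-batch samples of other classes in addition to its class proxy. This is a minimal, hypothetical sketch written for illustration only, not the authors' implementation; the class name, margin, and temperature values are assumptions, and the official code lives in the PyTorch-StudioGAN repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class D2DCrossEntropySketch(nn.Module):
    """Hypothetical sketch of a data-to-data cross-entropy-style objective."""

    def __init__(self, num_classes, embed_dim, temperature=0.25,
                 pos_margin=0.0, neg_margin=0.0):
        super().__init__()
        # One learnable proxy (classifier weight vector) per class.
        self.proxies = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.temperature = temperature
        self.pos_margin = pos_margin
        self.neg_margin = neg_margin

    def forward(self, embeddings, labels):
        # (1) Project embeddings and proxies onto the unit hypersphere so
        #     similarity scores stay bounded in [-1, 1].
        f = F.normalize(embeddings, dim=1)      # (N, D)
        w = F.normalize(self.proxies, dim=1)    # (C, D)

        # Data-to-class similarity with the ground-truth proxy (positive pair).
        pos = ((f * w[labels]).sum(dim=1) - self.pos_margin) / self.temperature  # (N,)

        # (2) Data-to-data similarities within the batch; pairs with different
        #     labels act as negatives (the relational information in the batch).
        neg = (f @ f.t() + self.neg_margin) / self.temperature                   # (N, N)
        neg_mask = labels.unsqueeze(0) != labels.unsqueeze(1)

        # Softmax cross-entropy over the positive pair and the in-batch negatives.
        neg_sum = torch.where(neg_mask, neg.exp(), torch.zeros_like(neg)).sum(dim=1)
        loss = -(pos - torch.log(pos.exp() + neg_sum))
        return loss.mean()


# Toy usage: a batch of 8 classifier embeddings with 10 classes.
criterion = D2DCrossEntropySketch(num_classes=10, embed_dim=128)
feats, labels = torch.randn(8, 128), torch.randint(0, 10, (8,))
print(criterion(feats, labels))
```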
Author Information
Minguk Kang (POSTECH)
Woohyeon Shim (POSTECH)
Minsu Cho (POSTECH)
Jaesik Park (POSTECH)
More from the Same Authors
- 2020 : Combinatorial 3D Shape Generation via Sequential Assembly
  Jungtaek Kim · Hyunsoo Chung · Jinhwi Lee · Minsu Cho · Jaesik Park
- 2022 : Substructure-Atom Cross Attention for Molecular Representation Learning
  Jiye Kim · Seungbeom Lee · Dongwoo Kim · Sungsoo Ahn · Jaesik Park
- 2022 : SeLCA: Self-Supervised Learning of Canonical Axis
  Seungwook Kim · Yoonwoo Jeong · Chunghyun Park · Jaesik Park · Minsu Cho
- 2023 Poster: Activity Grammars for Temporal Action Segmentation
  Dayoung Kong · Joonseok Lee · Deunsol Jung · Suha Kwak · Minsu Cho
- 2023 Poster: Locality-Aware Generalizable Implicit Neural Representation
  Doyup Lee · Chiheon Kim · Minsu Cho · WOOK SHIN HAN
- 2023 Poster: Holistic Evaluation of Text-to-Image Models
  Tony Lee · Michihiro Yasunaga · Chenlin Meng · Yifan Mai · Joon Sung Park · Agrim Gupta · Yunzhi Zhang · Deepak Narayanan · Hannah Teufel · Marco Bellagente · Minguk Kang · Taesung Park · Jure Leskovec · Jun-Yan Zhu · Fei-Fei Li · Jiajun Wu · Stefano Ermon · Percy Liang
- 2022 Poster: Learning Debiased Classifier with Biased Committee
  Nayeong Kim · Sehyun Hwang · Sungsoo Ahn · Jaesik Park · Suha Kwak
- 2022 Poster: PeRFception: Perception using Radiance Fields
  Yoonwoo Jeong · Seungjoo Shin · Junha Lee · Chris Choy · Anima Anandkumar · Minsu Cho · Jaesik Park
- 2022 Poster: Peripheral Vision Transformer
  Juhong Min · Yucheng Zhao · Chong Luo · Minsu Cho
- 2022 Poster: A Rotated Hyperbolic Wrapped Normal Distribution for Hierarchical Representation Learning
  Seunghyuk Cho · Juyong Lee · Jaesik Park · Dongwoo Kim
- 2022 Poster: Draft-and-Revise: Effective Image Generation with Contextual RQ-Transformer
  Doyup Lee · Chiheon Kim · Saehoon Kim · Minsu Cho · WOOK SHIN HAN
- 2021 Poster: Brick-by-Brick: Combinatorial Construction with Deep Reinforcement Learning
  Hyunsoo Chung · Jungtaek Kim · Boris Knyazev · Jinhwi Lee · Graham Taylor · Jaesik Park · Minsu Cho
- 2021 Poster: Relational Self-Attention: What's Missing in Attention for Video Understanding
  Manjin Kim · Heeseung Kwon · CHUNYU WANG · Suha Kwak · Minsu Cho
- 2020 Poster: CircleGAN: Generative Adversarial Learning across Spherical Circles
  Woohyeon Shim · Minsu Cho
- 2020 Poster: ContraGAN: Contrastive Learning for Conditional Image Generation
  Minguk Kang · Jaesik Park
- 2019 Poster: Mining GOLD Samples for Conditional GANs
  Sangwoo Mo · Chiheon Kim · Sungwoong Kim · Minsu Cho · Jinwoo Shin