Poster
Improved Transformer for High-Resolution GANs
Long Zhao · Zizhao Zhang · Ting Chen · Dimitris Metaxas · Han Zhang

Tue Dec 07 04:30 PM -- 06:00 PM (PST) @ Virtual
Attention-based models, exemplified by the Transformer, can effectively model long-range dependencies, but suffer from the quadratic complexity of the self-attention operation, making them difficult to adopt for high-resolution image generation based on Generative Adversarial Networks (GANs). In this paper, we introduce two key ingredients into the Transformer to address this challenge. First, in the low-resolution stages of the generative process, standard global self-attention is replaced with the proposed multi-axis blocked self-attention, which allows efficient mixing of local and global attention. Second, in the high-resolution stages, we drop self-attention entirely, keeping only multi-layer perceptrons reminiscent of implicit neural functions. To further improve performance, we introduce an additional self-modulation component based on cross-attention. The resulting model, denoted HiT, has nearly linear computational complexity with respect to the image size and thus scales directly to synthesizing high-definition images. Our experiments show that the proposed HiT achieves state-of-the-art FID scores of 31.87 and 2.95 on unconditional ImageNet $128 \times 128$ and FFHQ $256 \times 256$, respectively, with reasonable throughput. We believe the proposed HiT is an important milestone for GAN generators that are completely free of convolutions.
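To make the multi-axis blocked self-attention idea concrete, below is a minimal NumPy sketch, not the authors' implementation: channels are split in half, with one half attending within local non-overlapping blocks and the other half attending across a dilated grid of positions spanning the whole image. The channel split, block size, single-head softmax attention, and absence of learned projections are all simplifying assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Plain scaled dot-product attention over the last two axes: (..., n, d).
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)
    return softmax(scores) @ v

def multi_axis_attention(x, block=4):
    # x: (H, W, C) feature map; block: side length b of each local window.
    # Illustrative assumption: first C/2 channels go to the local branch,
    # the remaining C/2 to the global (dilated) branch.
    H, W, C = x.shape
    half, b = C // 2, block

    # Local branch: partition into (H/b * W/b) blocks of b*b tokens each
    # and attend within each block.
    loc = x[..., :half].reshape(H // b, b, W // b, b, half)
    loc = loc.transpose(0, 2, 1, 3, 4).reshape(-1, b * b, half)
    loc = attention(loc, loc, loc)
    loc = loc.reshape(H // b, W // b, b, b, half)
    loc = loc.transpose(0, 2, 1, 3, 4).reshape(H, W, half)

    # Global branch: tokens sharing the same offset within their block form
    # a dilated grid and attend to each other, mixing information globally.
    glb = x[..., half:].reshape(H // b, b, W // b, b, half)
    glb = glb.transpose(1, 3, 0, 2, 4).reshape(-1, (H // b) * (W // b), half)
    glb = attention(glb, glb, glb)
    glb = glb.reshape(b, b, H // b, W // b, half)
    glb = glb.transpose(2, 0, 3, 1, 4).reshape(H, W, half)

    return np.concatenate([loc, glb], axis=-1)

x = np.random.randn(16, 16, 8)
y = multi_axis_attention(x, block=4)
print(y.shape)  # (16, 16, 8)
```

Each branch only ever forms attention matrices over b*b or (H/b)*(W/b) tokens rather than all H*W, which is what keeps the cost of each attention stage well below the quadratic cost of global self-attention.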

Author Information

Long Zhao (Rutgers University)
Zizhao Zhang (Google)
Ting Chen (Google Brain)
Dimitris Metaxas (Rutgers University)
Han Zhang (Google)
