
The GAN is dead; long live the GAN! A Modern GAN Baseline

Yiwen Huang, Aaron Gokaslan, Volodymyr Kuleshov, James Tompkin

Abstract

There is a widely-spread claim that GANs are difficult to train, and GAN architectures in the literature are littered with empirical tricks. We provide evidence against this claim and build a modern GAN baseline in a more principled manner. First, we derive a well-behaved regularized relativistic GAN loss that addresses issues of mode dropping and non-convergence that were previously tackled via a bag of ad-hoc tricks. We analyze our loss mathematically and prove that it admits local convergence guarantees, unlike most existing relativistic losses. Second, our new loss allows us to discard all ad-hoc tricks and replace outdated backbones used in common GANs with modern architectures. Using StyleGAN2 as an example, we present a roadmap of simplification and modernization that results in a new minimalist baseline -- R3GAN. Despite being simple, our approach surpasses StyleGAN2 on FFHQ, ImageNet, CIFAR, and Stacked MNIST datasets, and compares favorably against state-of-the-art GANs and diffusion models.
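
To make the abstract's "regularized relativistic GAN loss" concrete, below is a minimal PyTorch sketch of one common form such a loss can take: a relativistic pairing term f(D(fake) - D(real)) combined with zero-centered gradient penalties on both real and generated samples (often called R1 and R2). The softplus choice of f, the NCHW image shapes, and the penalty weight `gamma` are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of a regularized relativistic GAN loss: RpGAN-style pairing
# plus zero-centered gradient penalties (R1 on real data, R2 on fakes).
# Assumptions: f(t) = softplus(t), NCHW image tensors, weight `gamma`.
import torch
import torch.nn.functional as F

def discriminator_loss(D, real, fake, gamma=10.0):
    # Detach inputs from any upstream graph; enable input gradients
    # so we can compute the gradient penalties below.
    real = real.detach().requires_grad_(True)
    fake = fake.detach().requires_grad_(True)
    d_real, d_fake = D(real), D(fake)

    # Relativistic pairing: D is trained to rank real above fake,
    # rather than to classify each sample in isolation.
    loss = F.softplus(d_fake - d_real).mean()

    # R1: zero-centered gradient penalty on real samples.
    (grad_real,) = torch.autograd.grad(d_real.sum(), real, create_graph=True)
    # R2: the same penalty on generated samples, keeping the loss symmetric.
    (grad_fake,) = torch.autograd.grad(d_fake.sum(), fake, create_graph=True)
    r1 = grad_real.square().sum(dim=[1, 2, 3]).mean()
    r2 = grad_fake.square().sum(dim=[1, 2, 3]).mean()

    return loss + (gamma / 2) * (r1 + r2)

def generator_loss(D, real, fake):
    # Symmetric relativistic objective for G: rank fake above real.
    return F.softplus(D(real) - D(fake)).mean()
```

Penalizing the discriminator's input gradients at both the real and fake distributions, rather than only at the real data as in a plain R1 setup, is what the paper credits for stable, convergent training without the usual bag of tricks.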

