GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Günter Klambauer, Andreas Maier, Sepp Hochreiter, 2017, Advances in Neural Information Processing Systems, Vol. 30 - Introduces the Two Time-Scale Update Rule (TTUR), which stabilizes GAN training by using separate learning rates for the generator and discriminator, together with a stochastic-approximation analysis showing that TTUR training converges to a local Nash equilibrium.
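The core idea of TTUR can be illustrated on a toy smooth two-player game. This is a hedged sketch, not the paper's stochastic-approximation setting: the game `f(x, y) = x*y + 0.1*x**2 - 0.1*y**2`, the starting point, and the learning rates are illustrative choices, with the "generator" x minimizing f, the "discriminator" y maximizing it, and the local Nash equilibrium at (0, 0).

```python
# Toy sketch of a two time-scale update rule: simultaneous gradient steps
# with a slower learning rate for the minimizing player (generator) and a
# faster one for the maximizing player (discriminator).

def ttur_toy(lr_g=0.01, lr_d=0.05, steps=5000):
    """Run simultaneous gradient updates on f(x, y) = x*y + 0.1*x^2 - 0.1*y^2."""
    x, y = 1.0, 1.0                 # arbitrary starting point
    for _ in range(steps):
        grad_x = y + 0.2 * x        # df/dx
        grad_y = x - 0.2 * y        # df/dy
        x -= lr_g * grad_x          # generator: gradient descent, slow rate
        y += lr_d * grad_y          # discriminator: gradient ascent, fast rate
    return x, y

x, y = ttur_toy()
print(x, y)  # both close to 0: the iterates spiral into the local Nash equilibrium
```

With these step sizes the linearized update matrix has complex eigenvalues of modulus below one, so the iterates converge; the paper's contribution is proving an analogous result for stochastic gradient GAN training under the two-time-scale condition.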
Generative Adversarial Networks, Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, 2014, Advances in Neural Information Processing Systems, Vol. 27 - The foundational paper that introduced Generative Adversarial Networks (GANs) and their min-max game objective, establishing the framework for subsequent GAN research and its optimization challenges.
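The min-max objective this paper introduced, with generator G, discriminator D, data distribution p_data, and noise prior p_z, is:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```

The discriminator is trained to distinguish real samples x from generated samples G(z), while the generator is trained to fool it; the optimization challenges noted above stem from this adversarial coupling.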
Improved Training of Wasserstein GANs, Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron C. Courville, 2017, Advances in Neural Information Processing Systems, Vol. 30 - Advances GAN training stability by introducing Wasserstein GAN with Gradient Penalty (WGAN-GP), which replaces the weight clipping of the original WGAN with a penalty on the critic's gradient norm, avoiding the capacity and exploding/vanishing-gradient problems clipping can cause.
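The WGAN-GP critic loss penalizes deviations of the critic's gradient norm from 1 at points $\hat{x}$ sampled uniformly along straight lines between real and generated samples:

```latex
L = \mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}\left[D(\tilde{x})\right]
  - \mathbb{E}_{x \sim \mathbb{P}_r}\left[D(x)\right]
  + \lambda \, \mathbb{E}_{\hat{x} \sim \mathbb{P}_{\hat{x}}}
    \left[\left(\left\lVert \nabla_{\hat{x}} D(\hat{x}) \right\rVert_2 - 1\right)^2\right]
```

Here $\mathbb{P}_r$ and $\mathbb{P}_g$ are the real and generator distributions, and the paper uses $\lambda = 10$ in all experiments.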