Improved Training of Wasserstein GANs, Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron C. Courville, 2017. Advances in Neural Information Processing Systems 30, Vol. 30. DOI: 10.48550/arXiv.1704.00028 - This paper introduces the Wasserstein GAN with Gradient Penalty (WGAN-GP), which replaces weight clipping with a gradient penalty on the critic to enforce the Lipschitz constraint, and gives concrete training recommendations (Adam with β1 = 0, β2 = 0.9, and several critic updates per generator step) that became standard practice for stabilizing GAN training.
Which Training Methods for GANs does your Application Need? A Benchmark Study, Simon Lucas, Koki Nonaka, Akira Matsui, 2020. IEEE Access, Vol. 8 (Institute of Electrical and Electronics Engineers). DOI: 10.1109/ACCESS.2020.3023402 - This benchmark study empirically evaluates a range of GAN training methods and stabilization techniques, providing practical insight into how effective the two time-scale update rule (TTUR) is across application contexts and how it interacts with other methods.
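Both entries touch on two time-scale training, i.e. stepping the discriminator's optimizer with a larger learning rate than the generator's. A minimal sketch of that idea, using hypothetical 1-D quadratic losses and illustrative learning rates (not either paper's actual objective or hyperparameters):

```python
# Toy two-time-scale update loop: the discriminator (d) and generator (g)
# are plain scalars, each minimizing its own quadratic, with the
# discriminator given a larger step size. Purely illustrative.

def sgd_step(param: float, grad: float, lr: float) -> float:
    """One plain gradient-descent update."""
    return param - lr * grad

def d_grad(d: float) -> float:
    # Gradient of the toy discriminator loss (d - 1)^2.
    return 2.0 * (d - 1.0)

def g_grad(g: float) -> float:
    # Gradient of the toy generator loss (g + 1)^2.
    return 2.0 * (g + 1.0)

LR_D = 4e-4   # discriminator learning rate (two time-scale: larger)
LR_G = 1e-4   # generator learning rate (smaller)

d, g = 0.0, 0.0
for _ in range(1000):
    d = sgd_step(d, d_grad(d), LR_D)  # discriminator update
    g = sgd_step(g, g_grad(g), LR_G)  # generator update
```

After the loop, the discriminator sits closer to its optimum (1.0) than the generator does to its own (-1.0), which is the intended effect of the unequal rates: the faster discriminator tracks its objective more closely at every generator step.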