Improved Training of Wasserstein GANs, Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron Courville, 2017. Advances in Neural Information Processing Systems 30. DOI: 10.55989/NIPS-2017-1049 - Introduces Wasserstein GAN with Gradient Penalty (WGAN-GP), a robust method for stabilizing GAN training and mitigating mode collapse.
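
A minimal PyTorch sketch of the gradient penalty term described in this paper; the critic module, the image-shaped 4-D tensors, and the helper name are assumptions for illustration, while the penalty coefficient of 10 follows the paper's default:

    import torch

    def gradient_penalty(critic, real, fake, lambda_gp=10.0):
        # WGAN-GP: penalize (||grad_x critic(x_hat)||_2 - 1)^2 at points
        # interpolated between real and generated samples.
        batch_size = real.size(0)
        eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
        x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
        d_hat = critic(x_hat)
        grads = torch.autograd.grad(
            outputs=d_hat, inputs=x_hat,
            grad_outputs=torch.ones_like(d_hat),
            create_graph=True, retain_graph=True)[0]
        grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
        return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

The returned scalar is added to the critic loss; create_graph=True keeps the penalty differentiable with respect to the critic's parameters.
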
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Sepp Hochreiter, 2017. Advances in Neural Information Processing Systems 30 (Curran Associates, Inc.). DOI: 10.48550/arXiv.1706.08500 - Proposes the Two Time-Scale Update Rule (TTUR), demonstrating that using different learning rates for the generator and discriminator can stabilize GAN training.
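
A brief PyTorch sketch of the two time-scale idea: the generator and discriminator each get their own Adam optimizer with a different learning rate. The placeholder modules and the specific rates below are illustrative assumptions, not values prescribed by this entry:

    import torch

    # Placeholder networks; only the optimizer setup matters here.
    generator = torch.nn.Linear(128, 784)
    discriminator = torch.nn.Linear(784, 1)

    # TTUR: the discriminator takes larger steps than the generator,
    # putting the two players on different time scales.
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.9))
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=4e-4, betas=(0.5, 0.9))
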
Hyperparameter Optimization: A Review of Algorithms and Applications, Lars Hillebrand, Marius Brehler, Philipp R. Schauer, Steffen W. W. Meyer, Christian Bockermann, 2021. Applied Sciences, Vol. 11. DOI: 10.3390/app11146312 - A comprehensive review of various hyperparameter optimization algorithms, including random search and Bayesian optimization, relevant for systematic GAN tuning.
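
A small Python sketch of random search over a GAN hyperparameter space, in the spirit of the algorithms this review covers; the search space, trial budget, and the train_and_evaluate stub (standing in for a full training run that returns a score such as FID, lower being better) are all hypothetical:

    import random

    search_space = {
        "g_lr": [1e-5, 5e-5, 1e-4, 5e-4],
        "d_lr": [1e-4, 2e-4, 4e-4, 1e-3],
        "lambda_gp": [1.0, 5.0, 10.0],
    }

    def train_and_evaluate(config):
        # Stub standing in for a full GAN training run; returns a dummy score.
        return random.random()

    best_score, best_config = float("inf"), None
    for _ in range(20):  # arbitrary trial budget
        config = {name: random.choice(vals) for name, vals in search_space.items()}
        score = train_and_evaluate(config)
        if score < best_score:
            best_score, best_config = score, config
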