Generative Adversarial Networks, Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, 2014. Advances in Neural Information Processing Systems. DOI: 10.48550/arXiv.1406.2661 - The foundational paper introducing Generative Adversarial Networks and their adversarial training framework, essential for understanding the dynamics that GAN evaluation must capture.
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Sepp Hochreiter, 2017. Advances in Neural Information Processing Systems, Vol. 30 (NeurIPS). DOI: 10.5555/3295222.3295293 - Introduces the Fréchet Inception Distance (FID), a widely used metric for evaluating the quality and diversity of images generated by GANs.
Improved Precision and Recall Metric for Assessing Generative Models, Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, Timo Aila, 2019. Advances in Neural Information Processing Systems, Vol. 32 (NeurIPS). DOI: 10.48550/arXiv.1904.06991 - Presents an improved precision and recall metric for evaluating generative models, separating fidelity from diversity and helping diagnose issues such as mode collapse.
Wasserstein GAN, Martin Arjovsky, Soumith Chintala, Léon Bottou, 2017. Proceedings of the 34th International Conference on Machine Learning (ICML), Vol. 70. DOI: 10.5555/3305890.3305953 - Introduces the Wasserstein GAN, which addresses stability issues and mode collapse in GAN training, providing crucial context for diagnosing GAN convergence and failure modes.