Auto-Encoding Variational Bayes, Diederik P. Kingma and Max Welling, 2013. arXiv. DOI: 10.48550/arXiv.1312.6114 - This foundational paper introduces the Variational Autoencoder (VAE) framework, laying the groundwork for generative modeling with latent variables using a mean-field (factorized Gaussian) approximate posterior.
Variational Inference with Normalizing Flows, Danilo Jimenez Rezende and Shakir Mohamed, 2015. International Conference on Machine Learning (ICML). DOI: 10.48550/arXiv.1505.05770 - Introduces normalizing flows as a method to construct more expressive and flexible approximate posterior distributions in variational inference, enhancing the model's ability to capture complex dependencies.
Improving Variational Inference with Inverse Autoregressive Flow, Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling, 2016. Advances in Neural Information Processing Systems (NIPS), Vol. 29. DOI: 10.48550/arXiv.1606.04934 - Presents Inverse Autoregressive Flow (IAF), a type of normalizing flow that enables powerful autoregressive approximate posteriors with efficient parallel sampling during inference.
Masked Autoregressive Flow for Density Estimation, George Papamakarios, Theo Pavlakou, and Iain Murray, 2017. Advances in Neural Information Processing Systems (NIPS), Vol. 30. DOI: 10.48550/arXiv.1705.07057 - Introduces Masked Autoregressive Flow (MAF), an autoregressive normalizing flow architecture widely used for high-quality density estimation and expressive posterior modeling in VAEs.
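The flow-based methods in the entries above all rest on the same change-of-variables identity: pushing a sample through an invertible map and correcting its log-density by the log-determinant of the Jacobian. As an illustration only (not any single paper's implementation), here is a minimal NumPy sketch of a planar flow, the simple flow family used as a running example in the Rezende and Mohamed paper; the parameter names `u`, `w`, `b` follow that paper's notation:

```python
import numpy as np

def planar_flow(z, u, w, b):
    """Planar flow f(z) = z + u * tanh(w.z + b) and its log|det Jacobian|.

    z    : (d,) sample from the base distribution
    u, w : (d,) flow parameters; b : scalar offset
    Returns the transformed sample and the log-density correction term.
    """
    a = np.dot(w, z) + b
    f_z = z + u * np.tanh(a)
    psi = (1.0 - np.tanh(a) ** 2) * w            # h'(a) * w
    log_det = np.log(np.abs(1.0 + np.dot(u, psi)))
    return f_z, log_det

# Push one standard-normal sample through the flow and track its density:
rng = np.random.default_rng(0)
d = 2
z0 = rng.standard_normal(d)
u, w, b = rng.standard_normal(d), rng.standard_normal(d), 0.5
z1, log_det = planar_flow(z0, u, w, b)

# Change of variables: log q1(z1) = log q0(z0) - log|det df/dz|
log_q0 = -0.5 * np.sum(z0 ** 2) - 0.5 * d * np.log(2.0 * np.pi)
log_q1 = log_q0 - log_det
```

Stacking many such transformations (and replacing the planar map with an autoregressive one, as in IAF and MAF) yields the expressive approximate posteriors these papers describe; the log-determinant terms simply accumulate across layers.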