The performance of a Variational Autoencoder hinges on how well its approximate posterior, $q_\phi(z \mid x)$, matches the true posterior, $p_\theta(z \mid x)$. While the Evidence Lower Bound (ELBO) provides a tractable objective, the typical reliance on simple, amortized inference networks for $q_\phi(z \mid x)$ can limit model expressiveness and the tightness of this bound. This chapter focuses on methods to improve the inference component of VAEs.
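To make the connection between bound tightness and posterior quality explicit, recall the standard decomposition of the log evidence:

$$
\log p_\theta(x) = \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log \frac{p_\theta(x, z)}{q_\phi(z \mid x)}\right]}_{\text{ELBO}} + \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p_\theta(z \mid x)\big).
$$

Because the KL term is non-negative, the gap between the ELBO and $\log p_\theta(x)$ is exactly the divergence between the approximate and true posteriors; a more expressive $q_\phi$ tightens the bound precisely by closing this gap.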
We will examine amortized variational inference in detail, discussing its practical advantages and inherent assumptions, such as the mean-field approximation. You will learn about techniques designed to overcome these limitations, including structured variational inference for more flexible posterior forms and Importance Weighted Autoencoders (IWAEs) for tighter likelihood bounds. We will also cover additional approaches, including semi-amortized inference, the use of auxiliary variables, and Adversarial Variational Bayes (AVB), that further improve the fidelity of the posterior approximation. By the end of this chapter, you'll understand how to select and apply these advanced inference strategies to build VAEs with more accurate posteriors and tighter bounds.
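As a preview of the importance-weighted bound developed in section 4.4, the IWAE objective draws $K$ samples from the inference network and averages their importance weights inside the logarithm:

$$
\mathcal{L}_K(x) = \mathbb{E}_{z_1, \ldots, z_K \sim q_\phi(z \mid x)}\!\left[\log \frac{1}{K} \sum_{k=1}^{K} \frac{p_\theta(x, z_k)}{q_\phi(z_k \mid x)}\right].
$$

Here $\mathcal{L}_1$ recovers the standard ELBO, and $\mathcal{L}_K$ is non-decreasing in $K$, approaching $\log p_\theta(x)$ as $K \to \infty$.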
4.1 Amortized Variational Inference: Strengths and Weaknesses
4.2 Limitations of Mean-Field Approximations
4.3 Structured Variational Inference in VAEs
4.4 Importance Weighted Autoencoders (IWAEs)
4.5 Auxiliary Variables and Semi-Amortized Variational Inference
4.6 Variational Inference with Implicit Models
4.7 Adversarial Variational Bayes (AVB)
4.8 Practice: Implementing IWAEs and Advanced Inference