Previous chapters introduced autoencoders primarily as tools for dimensionality reduction and feature learning, where the network learns to reconstruct its input. This chapter presents Variational Autoencoders (VAEs), a distinct class of autoencoders that incorporates a probabilistic perspective. VAEs are designed not just for reconstruction but also to learn a continuous, structured latent space, making them particularly effective for generative modeling: the creation of new data samples.
By working through this chapter, you will gain an understanding of how VAEs differ from standard autoencoders, how the encoder outputs the parameters of a distribution rather than a single point, how the reparameterization trick makes training possible, and how the VAE loss function balances reconstruction quality against latent-space regularization.
A hands-on section will demonstrate how to construct a VAE and visually inspect its latent space to observe its learned structure; a brief preview of that construction follows below.
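As a preview, here is a minimal sketch of a VAE, written with PyTorch as an assumed framework; the `VAE` class, its layer sizes, and the `vae_loss` helper are illustrative names rather than the chapter's exact implementation. It shows the three ideas developed in Sections 6.3 through 6.6: an encoder that outputs the mean and log-variance of a Gaussian, the reparameterization trick, and a loss combining reconstruction error with a KL divergence term.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """A minimal VAE for flattened 28x28 images (e.g., MNIST)."""

    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=2):
        super().__init__()
        # Encoder maps the input to the parameters of a Gaussian
        # over the latent space: a mean and a log-variance.
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder maps a latent sample back to the data space.
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so the sampling step
        # stays differentiable with respect to mu and logvar.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = self.reparameterize(mu, logvar)
        x_hat = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return x_hat, mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Reconstruction term plus KL divergence between the encoder's
    # Gaussian and a standard normal prior.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Once trained, generating new samples amounts to drawing a latent vector from a standard normal and passing it through the decoder alone, which is what gives the VAE its generative character.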
6.1 Introduction to Generative Modeling with Autoencoders
6.2 Principles of Variational Autoencoders
6.3 The VAE Encoder: Outputting Distribution Parameters
6.4 The Reparameterization Trick Explained
6.5 The VAE Decoder: Generating Data from Latent Samples
6.6 The VAE Loss Function: Balancing Reconstruction and Regularization
6.7 Characteristics of the VAE Latent Space
6.8 Using VAE Latent Representations as Features
6.9 Hands-on: Building a VAE and Inspecting Its Latent Space