Previous chapters introduced autoencoders primarily for dimensionality reduction and learning efficient data codings. While effective for reconstruction, standard autoencoders and their regularized variants often produce latent spaces not well-suited for generating new, coherent data samples. Generating data requires a more structured latent space, typically one grounded in probability.
This chapter focuses on Variational Autoencoders (VAEs), a type of autoencoder specifically developed for generative modeling. We will shift from deterministic encoders and decoders to a probabilistic framework. You will learn how VAEs treat latent variables as probability distributions rather than fixed vectors. We'll cover the essential concepts underpinning VAEs, including the probabilistic encoder and decoder, the reparameterization trick that keeps sampling differentiable, and the evidence lower bound (ELBO) objective, which combines a reconstruction term with a KL divergence regularizer. A small illustrative sketch of the first of these ideas follows.
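To make the shift from fixed vectors to distributions concrete, here is a minimal sketch of a probabilistic encoder and the reparameterization step. It assumes PyTorch (not yet introduced in this chapter), and the layer sizes and variable names are hypothetical choices for illustration only; later sections develop the full VAE.

```python
import torch
import torch.nn as nn

class ProbabilisticEncoder(nn.Module):
    """Maps an input x to the parameters of a Gaussian q(z|x),
    instead of a single fixed latent vector."""
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.hidden = nn.Linear(input_dim, hidden_dim)
        # Two output heads: the mean and log-variance of the latent distribution.
        self.mu_head = nn.Linear(hidden_dim, latent_dim)
        self.logvar_head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return self.mu_head(h), self.logvar_head(h)

def reparameterize(mu, logvar):
    # z = mu + sigma * eps, so the sampling step stays differentiable.
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + std * eps

# Example usage with a hypothetical batch of flattened 28x28 images.
encoder = ProbabilisticEncoder()
x = torch.rand(8, 784)
mu, logvar = encoder(x)
z = reparameterize(mu, logvar)  # stochastic latent codes, shape (8, 16)
```

The key point of the sketch is that the encoder's output defines a distribution over latent codes, and each forward pass draws a different sample `z` from it.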
By understanding these components, you will see how VAEs learn a meaningful latent space from which new data points resembling the training data can be sampled and generated. We will also touch on extensions such as Conditional VAEs (CVAEs) and implement a basic VAE for a practical image generation task.