Generative models learn to synthesize new data samples that resemble a given dataset. This chapter provides the necessary background before focusing specifically on diffusion models.
We will start by briefly reviewing common types of generative models, such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs), to understand their goals and how they operate. We will then discuss the challenges these models face, highlighting the limitations that motivated the development of diffusion techniques.
Next, we will introduce the central idea behind diffusion models, which rests on two complementary processes: a forward process that systematically adds noise to data until only pure noise remains, and a learned reverse process that removes that noise step by step, generating new data starting from random noise. Finally, we'll establish the high-level probabilistic framework used to describe these models, preparing you for the mathematical details in subsequent chapters.
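To make the forward (noising) half of this idea concrete, here is a minimal sketch of one common way to corrupt data: blending it with Gaussian noise according to a noise schedule. The function name `add_noise` and the linear schedule parameters `beta_min` and `beta_max` are illustrative choices for this sketch, not a fixed convention; the details are developed in later chapters.

```python
import numpy as np

def add_noise(x, t, num_steps=1000, beta_min=1e-4, beta_max=0.02):
    """Corrupt data x to noise level t using a linear variance schedule.

    At t = 0 the output is nearly the original data; as t approaches
    num_steps - 1 it approaches pure Gaussian noise.
    """
    betas = np.linspace(beta_min, beta_max, num_steps)
    # Cumulative fraction of the original signal that survives t+1 steps.
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    noise = np.random.randn(*x.shape)
    # Blend: scale down the signal, scale up the noise.
    return np.sqrt(alpha_bar) * x + np.sqrt(1.0 - alpha_bar) * noise

x = np.ones(4)                   # toy "data"
slightly_noisy = add_noise(x, t=10)
almost_pure_noise = add_noise(x, t=999)
```

Generating data then amounts to learning the reverse of this corruption: a model that, given a noisy sample, predicts a slightly less noisy one, applied repeatedly until clean data emerges.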
Upon completing this chapter, you will understand the context in which diffusion models operate and grasp their basic operational principle.
1.1 Overview of Generative Models
1.2 Motivation for Diffusion Models
1.3 The Core Idea: Noise and Denoise
1.4 Probabilistic Framework Introduction
© 2025 ApX Machine Learning