In the previous chapter, we established the mathematical rules for mapping random variables through invertible functions using the change of variables theorem and Jacobian determinants. Now we will apply these mathematical principles to construct practical generative models.
A single transformation is rarely expressive enough to capture a complex probability distribution. To build a more flexible model, we stack multiple simple invertible transformations sequentially. If we start with a sample z_0 from a simple base distribution, we can apply a series of transformations f_1, …, f_K to produce a final sample x:

x = f_K(f_{K-1}(⋯ f_1(z_0) ⋯))
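To make the composition idea concrete, here is a minimal pure-Python sketch. The two transformations (an affine map and a cubing map) are hypothetical stand-ins chosen only because their inverses are easy to write down; the point is that stacking invertible maps stays invertible, with the inverses applied in reverse order.

```python
import math
import random

# Hypothetical invertible transformations, each paired with its inverse.
transforms = [
    (lambda z: 2.0 * z + 1.0, lambda x: (x - 1.0) / 2.0),                    # affine
    (lambda z: z ** 3, lambda x: math.copysign(abs(x) ** (1 / 3), x)),       # cube
]

def forward(z0):
    """Push a base sample through the composition f_K(... f_1(z0) ...)."""
    z = z0
    for f, _ in transforms:
        z = f(z)
    return z

def inverse(x):
    """Undo the composition by applying inverses in reverse order."""
    z = x
    for _, f_inv in reversed(transforms):
        z = f_inv(z)
    return z

z0 = random.gauss(0.0, 1.0)   # sample from a simple base distribution
x = forward(z0)
assert abs(inverse(x) - z0) < 1e-9   # the stacked map is still invertible
```

The round-trip check at the end is exactly the property that makes the composite function usable as a flow: every layer can be undone, so the whole stack can be undone.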
This composite function forms the basis of a normalizing flow. By defining an appropriate base distribution, such as an isotropic Gaussian, we can push samples through this sequence of transformations to approximate highly complex data distributions.
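The change of variables theorem from the previous chapter is what lets us evaluate densities under such a model. As a minimal sketch (not the chapter's PyTorch implementation), the example below uses a single hypothetical map f(z) = exp(z) on a standard normal base, where the flow density can be checked against the closed-form log-normal density:

```python
import math

def base_logpdf(z):
    # log-density of the standard normal base distribution
    return -0.5 * z * z - 0.5 * math.log(2 * math.pi)

# One invertible map f(z) = exp(z): its inverse and log|df/dz|.
def f_inv(x):
    return math.log(x)

def log_abs_det(z):
    # d/dz exp(z) = exp(z), so log|df/dz| evaluated at z is just z
    return z

def flow_logpdf(x):
    # change of variables: log p(x) = log p_base(f^{-1}(x)) - log|df/dz|
    z = f_inv(x)
    return base_logpdf(z) - log_abs_det(z)

# Compare against the closed-form log-normal log-density at x = 2.0
x = 2.0
lognormal = -math.log(x) - 0.5 * math.log(2 * math.pi) - 0.5 * math.log(x) ** 2
assert abs(flow_logpdf(x) - lognormal) < 1e-12
```

With a stack of K transformations, the single log-Jacobian term becomes a sum of K such terms, one per layer; that accumulation is the computational core of every normalizing flow.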
This chapter focuses on the mechanics of assembling these models. You will learn the formal definition of a normalizing flow and see how to select appropriate base distributions for your tasks. We will examine specific architectures like planar and radial flows, looking closely at their mathematical formulations and how they scale.
Finally, you will move from theory to application. You will write PyTorch code to implement a planar flow layer from scratch and train it to model a simple 2D dataset. By the end of this chapter, you will know how to translate mathematical definitions into working generative models in Python.
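As a preview of the planar flow you will build, here is a minimal NumPy sketch of the forward pass f(z) = z + u·tanh(wᵀz + b) and its log-Jacobian-determinant. The parameter values are hypothetical placeholders (chosen so that wᵀu ≥ −1, which keeps the map invertible); the chapter's PyTorch version will learn them from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters of a 2D planar flow f(z) = z + u * tanh(w.z + b)
w = np.array([1.0, -0.5])
u = np.array([0.3, 0.8])   # w.u = -0.1 >= -1, so the map is invertible
b = 0.1

def planar_forward(z):
    """Apply the planar transformation to a batch of samples.

    Returns (f(z), log|det Jacobian|) for each row of z.
    """
    a = np.tanh(z @ w + b)             # scalar activation per sample
    f_z = z + np.outer(a, u)           # shift each sample along direction u
    psi = (1.0 - a ** 2)[:, None] * w  # h'(w.z + b) * w, with h = tanh
    log_det = np.log(np.abs(1.0 + psi @ u))
    return f_z, log_det

z = rng.standard_normal((5, 2))        # samples from an isotropic Gaussian base
x, log_det = planar_forward(z)
assert x.shape == (5, 2) and log_det.shape == (5,)
```

Because the Jacobian of a planar layer is a rank-one update of the identity, its determinant reduces to the scalar 1 + ψᵀu, which is why the log-det above costs O(d) rather than O(d³).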
2.1 Defining a Normalizing Flow
2.2 Stacking Transformations
2.3 Planar and Radial Flows
2.4 Choosing Base Distributions
2.5 Implementing Planar Flows in Practice