While diffusion models achieve state-of-the-art results on generative tasks such as image synthesis, their iterative sampling process, which often requires hundreds or even thousands of sequential network evaluations, is a significant computational bottleneck. This chapter introduces Consistency Models, a recent family of generative models designed to reduce the number of sampling steps drastically, potentially down to a single step.
You will study the defining principle of consistency models, which enforces that all points on the same probability flow ODE trajectory map to the same initial point x_0, and examine how this property enables direct generation without iterative refinement. We then cover the two main approaches for training these models: consistency distillation, which transfers knowledge from a pre-trained diffusion model, and consistency training, a standalone method that requires no teacher. You will learn how to implement single-step and multi-step sampling, review architectural considerations, and weigh the trade-off between inference speed and sample quality.
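Stated more formally (a minimal formulation; Section 5.2 develops the notation in full), the consistency property requires a function $f$ defined along a probability flow ODE trajectory $\{x_t\}_{t \in [\epsilon, T]}$ to satisfy

$$
f(x_t, t) = f(x_{t'}, t') \quad \text{for all } t, t' \in [\epsilon, T],
$$

together with the boundary condition $f(x_\epsilon, \epsilon) = x_\epsilon$, where $\epsilon$ is a small positive constant standing in for $t = 0$ to avoid numerical issues near the data. Because every point on a trajectory maps to the same output, a single evaluation at the noisiest point $x_T$ already produces an estimate of the trajectory's origin x_0.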
5.1 Motivation: The Need for Faster Sampling
5.2 Core Idea: Consistency Property
5.3 Consistency Model Training: Distillation Approach
5.4 Consistency Model Training: Standalone Approach
5.5 Sampling from Consistency Models (Single-step and Multi-step)
5.6 Architecture Considerations for Consistency Models
5.7 Trade-offs: Speed vs. Quality
5.8 Hands-on Practical: Basic Consistency Distillation
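As a preview of the sampling procedure in Section 5.5, the sketch below shows how single-step and multi-step generation differ only in the number of model evaluations. It is a minimal illustration, not code from this course: `f_theta`, the noise levels in `sigmas`, and the boundary value `sigma_min` are placeholder assumptions for a model that maps a noisy input at a given noise level back to an estimate of the clean sample.

```python
import torch

def consistency_sample(f_theta, shape, sigmas, sigma_min=0.002):
    """Draw a sample using one model call per noise level in `sigmas`.

    Assumptions (placeholders, not a fixed API): f_theta(x, sigma) is a
    trained consistency model mapping a noisy input at noise level sigma
    to an estimate of the clean sample; `sigmas` is a decreasing sequence
    starting at the maximum noise level, e.g. [80.0] for single-step or
    [80.0, 24.0, 5.0] for three-step sampling; sigma_min is the small
    boundary level epsilon.
    """
    # Single-step generation: map pure noise directly to a sample.
    x = torch.randn(shape) * sigmas[0]
    x = f_theta(x, sigmas[0])

    # Optional refinement: re-inject noise at a lower level, denoise again.
    for sigma in sigmas[1:]:
        z = torch.randn_like(x)
        x = x + (sigma**2 - sigma_min**2) ** 0.5 * z
        x = f_theta(x, sigma)
    return x
```

With a single entry in `sigmas`, generation costs one network evaluation, which is the speedup motivating Section 5.1; each additional entry trades some of that speed for quality, the balance examined in Section 5.7.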