While the DDPM sampling process reliably generates high-quality samples by carefully reversing the diffusion process step by step, it often requires a large number of steps (typically 1000 or more). Each step involves a forward pass through the large U-Net model, making generation computationally expensive and slow. If you need to generate many samples or use diffusion models in interactive applications, this latency can be a significant bottleneck.
This motivates the need for faster sampling methods. One of the most influential and widely used is the Denoising Diffusion Implicit Model (DDIM), introduced by Song, Meng, and Ermon in 2020.
DDIM offers a more flexible approach to reversing the diffusion process. Recall that DDPM sampling defines a specific Markovian process: generating $x_{t-1}$ depends only on the previous state $x_t$. DDIM proposes a different, non-Markovian generative process that still uses the exact same neural network trained for DDPM. The core insight is that the DDPM training objective doesn't strictly enforce the specific Markov chain used for DDPM sampling; it primarily trains the network to predict the noise $\epsilon$.
DDIM exploits this by designing a sampling process that can take larger "jumps" back towards the original data $x_0$. Instead of needing to compute all intermediate steps $x_{T-1}, x_{T-2}, \dots, x_1$, DDIM allows sampling along a smaller subset of timesteps, say $\tau_1 < \tau_2 < \dots < \tau_S$ with $S \ll T$. For instance, you might use only 50 or 100 steps instead of 1000, significantly accelerating generation.
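To make the timestep subsampling concrete, here is a minimal NumPy sketch of one common way to choose an evenly spaced subset (the names `T`, `S`, and `tau` are illustrative, not a specific library's API):

```python
import numpy as np

T = 1000   # timesteps used during DDPM training
S = 50     # sampling steps for DDIM

# Evenly spaced subset of timesteps, traversed from noisiest to cleanest,
# e.g. [999, 978, ..., 20, 0]
tau = np.linspace(0, T - 1, S, dtype=int)[::-1]
print(tau[:5])  # the first few (noisiest) timesteps the sampler visits
```

Evenly spaced timesteps are one common choice; other spacings (such as quadratic) are also used in practice, and the spacing scheme is a design choice separate from the update rule itself.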
A remarkable property of DDIM is its ability to produce deterministic outputs when a specific parameter (often denoted $\eta$, eta) is set to 0. Given the same initial noise $x_T$ and the same sequence of timesteps, DDIM with $\eta = 0$ will always produce the exact same final sample $x_0$. This contrasts with DDPM, which always involves adding random noise at each step (controlled by the variance $\sigma_t^2$), making its output inherently stochastic. When $\eta > 0$, DDIM reintroduces stochasticity, with $\eta = 1$ typically recovering behavior very similar to DDPM. This control over determinism can be useful for applications requiring reproducible results or exploring interpolations in the latent space.
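As a preview of how $\eta$ enters the computation, below is a hedged NumPy sketch of a single DDIM update, following the formulation from Song et al. (2020); the function and argument names are illustrative. Setting `eta=0.0` zeroes out the injected noise, making the step fully deterministic:

```python
import numpy as np

def ddim_step(x_t, eps_pred, alpha_bar_t, alpha_bar_prev, eta=0.0):
    """One DDIM update from timestep t to the previous selected timestep.

    x_t:            current noisy sample
    eps_pred:       noise predicted by the trained DDPM network at step t
    alpha_bar_t:    cumulative product of alphas at step t
    alpha_bar_prev: cumulative product of alphas at the previous selected step
    eta:            0.0 -> deterministic; 1.0 -> DDPM-like stochasticity
    """
    # Estimate the clean sample x_0 from the current sample and predicted noise
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)

    # Noise scale sigma_t, proportional to eta (exactly zero when eta == 0)
    sigma = eta * np.sqrt((1.0 - alpha_bar_prev) / (1.0 - alpha_bar_t)
                          * (1.0 - alpha_bar_t / alpha_bar_prev))

    # Deterministic "direction pointing to x_t" plus optional fresh noise
    dir_xt = np.sqrt(1.0 - alpha_bar_prev - sigma ** 2) * eps_pred
    noise = sigma * np.random.randn(*x_t.shape) if eta > 0 else 0.0

    return np.sqrt(alpha_bar_prev) * x0_pred + dir_xt + noise
```

Note that the update is built entirely from the same noise prediction a DDPM sampler would use; only the way that prediction is combined into the next sample changes.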
In essence, DDIM provides a generalized family of sampling processes, with DDPM being a specific instance. It achieves faster sampling by modifying the update rule used to estimate $x_{t-1}$ from $x_t$, allowing for larger, potentially deterministic steps. The trade-off is that sample quality with very few steps can sometimes be slightly lower than that of DDPM run for its full duration, although DDIM often produces excellent results with significantly fewer steps (e.g., 50-200).
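Putting the two sketches above together, a skeleton of the full sampling loop might look like the following. The `model` here is a stand-in placeholder, not a trained network, and the linear beta schedule is the standard DDPM choice:

```python
# Assumes `tau`, `T`, and `ddim_step` from the sketches above.
betas = np.linspace(1e-4, 0.02, T)       # standard DDPM linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)      # cumulative products, length T

def model(x, t):
    # Placeholder for the trained noise-prediction U-Net.
    return np.zeros_like(x)

x = np.random.randn(1, 3, 32, 32)        # start from pure noise x_T
for t, t_prev in zip(tau[:-1], tau[1:]):
    eps = model(x, t)                     # same network DDPM would use
    x = ddim_step(x, eps, alpha_bar[t], alpha_bar[t_prev], eta=0.0)
# After the loop, x approximates the clean sample x_0
```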
The next section will detail the specific mathematical formulation and algorithm for DDIM sampling, highlighting how it differs from the DDPM update rule you saw earlier.