Stabilizing GAN training often involves modifying the loss function's underlying distance metric (like in WGANs) or applying regularization techniques (like Spectral Normalization). Relativistic GANs offer a different perspective on improving stability by fundamentally changing what the discriminator is asked to predict.
In a standard GAN formulation, the discriminator tries to estimate the absolute probability that a given input is real. Its output is often interpreted as $D(x) = P(x \text{ is real})$. The generator is then trained to produce samples $G(z)$ that maximize this probability $D(G(z))$.
Relativistic GANs propose that it might be more effective and stable for the discriminator to predict the relative probability that a given real sample is more realistic than a randomly sampled fake one. Instead of outputting an absolute score, the discriminator's task becomes comparative.
Let $C(x)$ represent the output of the discriminator before the final activation function (e.g., sigmoid). In a standard GAN (SGAN), the discriminator loss involves terms like $\log(\sigma(C(x_r)))$ and $\log(1 - \sigma(C(x_f)))$, where $\sigma$ is the sigmoid function, $x_r$ is a real sample, and $x_f$ is a fake sample.
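For reference, here is a minimal sketch of this standard loss, assuming PyTorch and a discriminator that returns the raw logits $C(x)$; the function name `sgan_d_loss` is illustrative, not from a specific library:

```python
import torch
import torch.nn.functional as F

def sgan_d_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor) -> torch.Tensor:
    """Standard GAN discriminator loss computed on raw logits C(x)."""
    # Maximizing log(sigmoid(C(x_r))) + log(1 - sigmoid(C(x_f))) is the same as
    # minimizing binary cross-entropy with targets 1 (real) and 0 (fake).
    return (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
```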
A particularly effective variant is the Relativistic average GAN (RaGAN). Instead of comparing a single real sample to a single fake sample, RaGAN compares a sample (real or fake) against the average assessment of samples from the opposing distribution.
The core idea is formalized in the RaSGAN (Relativistic average Standard GAN) loss functions. The discriminator is trained to maximize:

$$\mathbb{E}_{x_r \sim \mathbb{P}}\left[\log \tilde{D}(x_r)\right] + \mathbb{E}_{x_f \sim \mathbb{Q}}\left[\log\left(1 - \tilde{D}(x_f)\right)\right]$$

where $\mathbb{P}$ and $\mathbb{Q}$ denote the real and generated data distributions, and:

$$\tilde{D}(x_r) = \sigma\left(C(x_r) - \mathbb{E}_{x_f \sim \mathbb{Q}}\left[C(x_f)\right]\right), \qquad \tilde{D}(x_f) = \sigma\left(C(x_f) - \mathbb{E}_{x_r \sim \mathbb{P}}\left[C(x_r)\right]\right)$$

Here, $\mathbb{E}_{x_f \sim \mathbb{Q}}[C(x_f)]$ is the average discriminator output for fake samples in the batch, and $\mathbb{E}_{x_r \sim \mathbb{P}}[C(x_r)]$ is the average discriminator output for real samples in the batch. The discriminator is learning to make $C(x_r)$ larger than the average $C(x_f)$, and $C(x_f)$ smaller than the average $C(x_r)$.
The generator is trained to minimize the opposite objective, with the roles of real and fake samples swapped:

$$L_G = -\,\mathbb{E}_{x_f \sim \mathbb{Q}}\left[\log \tilde{D}(x_f)\right] - \mathbb{E}_{x_r \sim \mathbb{P}}\left[\log\left(1 - \tilde{D}(x_r)\right)\right]$$
Notice the symmetry. The generator benefits both from increasing the perceived realism of its generated samples relative to the average real sample (pushing $\tilde{D}(x_f)$ up) and decreasing the perceived realism of real samples relative to the average fake sample (pushing $\tilde{D}(x_r)$ down). This structure provides gradients to the generator based on both real and fake samples, which can lead to more stable learning.
Implementing RaGAN involves only a small modification to the loss calculation.
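Below is a minimal PyTorch sketch of the RaSGAN losses, assuming a discriminator that returns raw (pre-sigmoid) logits $C(x)$; the function and tensor names (`rasgan_d_loss`, `real_logits`, `fake_logits`) are illustrative, not from a specific library:

```python
import torch
import torch.nn.functional as F

def rasgan_d_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor) -> torch.Tensor:
    """RaSGAN discriminator loss, computed from raw (pre-sigmoid) logits."""
    # Relativistic average logits: each sample is compared against the
    # batch-mean logit of the opposing distribution.
    rel_real = real_logits - fake_logits.mean()  # C(x_r) - E[C(x_f)]
    rel_fake = fake_logits - real_logits.mean()  # C(x_f) - E[C(x_r)]
    # Maximizing log D~(x_r) + log(1 - D~(x_f)) is equivalent to minimizing
    # binary cross-entropy with targets 1 (real) and 0 (fake).
    return (F.binary_cross_entropy_with_logits(rel_real, torch.ones_like(rel_real))
            + F.binary_cross_entropy_with_logits(rel_fake, torch.zeros_like(rel_fake)))

def rasgan_g_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor) -> torch.Tensor:
    """RaSGAN generator loss: the discriminator objective with roles swapped."""
    rel_real = real_logits - fake_logits.mean()
    rel_fake = fake_logits - real_logits.mean()
    # Targets are flipped: fakes should look relatively real, reals relatively fake.
    return (F.binary_cross_entropy_with_logits(rel_fake, torch.ones_like(rel_fake))
            + F.binary_cross_entropy_with_logits(rel_real, torch.zeros_like(rel_real)))
```

In a training loop, the discriminator step should use `fake_logits` computed on generator outputs detached from the graph, while the generator step keeps both terms attached: because the batch mean of the fake logits appears inside $\tilde{D}(x_r)$, the generator receives gradient from the real term as well, which is the property discussed above.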
Relativistic GANs, particularly RaGAN, provide a different approach to GAN training. By shifting the discriminator's task from absolute to relative realism assessment, they offer a practical method for achieving more stable and effective training, adding another valuable technique to your toolkit for building advanced generative models. While techniques like WGAN-GP or Spectral Normalization address stability through distance metrics or regularization, RaGAN modifies the fundamental objective of the discriminator-generator game itself.