The Influence of KL Regularization on Disentanglement
Auto-Encoding Variational Bayes, Diederik P. Kingma and Max Welling, 2014, International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.1312.6114 - Introduces the Variational Autoencoder (VAE) framework and the Evidence Lower Bound (ELBO) objective, which includes the Kullback-Leibler (KL) divergence term used for variational inference.
β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, Irina Higgins, Loïc Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner, 2017, International Conference on Learning Representations (ICLR) - Presents the β-VAE, which scales the KL divergence term in the VAE objective by a tunable factor β to promote disentangled representations, making the trade-off between reconstruction quality and regularization explicit (a minimal sketch of this weighted objective follows this list).
Understanding disentangling in β-VAE, Christopher P. Burgess, Irina Higgins, Arka Pal, Loïc Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner, 2018, arXiv preprint. DOI: 10.48550/arXiv.1804.03599 - Analyzes how the KL term constrains the information capacity of the latent channel and how controlling this constraint during training encourages disentangled representations.
Deep Variational Information Bottleneck, Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy, 2017, International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.1612.00410 - Connects deep variational autoencoders to Information Bottleneck theory, providing a theoretical account of how the KL term drives compression of information into the latent space.
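The β-weighted objective discussed in these references is compact enough to state directly. The sketch below, written in PyTorch, shows a hypothetical beta_vae_loss helper assuming a Gaussian encoder output (mu, logvar) and a Bernoulli decoder; it is an illustration of the weighted ELBO under those assumptions, not code from any of the cited papers. Setting beta = 1 recovers the standard VAE objective of Kingma and Welling, while beta > 1 strengthens the KL regularizer, the modification the β-VAE paper associates with disentanglement.

```python
import torch
import torch.nn.functional as F


def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Negative ELBO with a beta-weighted KL term (beta = 1 gives the standard VAE)."""
    # Reconstruction term: Bernoulli likelihood via binary cross-entropy,
    # averaged over the batch (a Gaussian/MSE likelihood is an equally common choice).
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum") / x.size(0)

    # Closed-form KL( N(mu, exp(logvar)) || N(0, I) ), averaged over the batch.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)

    # beta scales the strength of the KL regularizer relative to reconstruction.
    return recon + beta * kl
```

Note that with the binary cross-entropy reconstruction term, x and x_recon must lie in [0, 1], so the decoder typically ends in a sigmoid; with a Gaussian likelihood this constraint disappears, but the role of beta as the knob balancing reconstruction against KL regularization is unchanged.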