Information Bottleneck Theory and VAEs for Disentanglement
The Information Bottleneck Method, Naftali Tishby, Fernando C. Pereira, William Bialek, 1999. Proceedings of the 37th Annual Allerton Conference on Communication, Control, and Computing. DOI: 10.48550/arXiv.physics/0004057 - Foundational paper introducing the Information Bottleneck (IB) principle, a theoretical framework for compressing a source signal while preserving the information it carries about a relevant target variable.
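For orientation, the IB principle is usually stated as a Lagrangian over stochastic encodings p(t | x) of the source X into a compressed representation T, trading compression against relevance to the target Y:

\[
\min_{p(t \mid x)} \; I(X; T) - \beta \, I(T; Y),
\]

where I(·;·) denotes mutual information and the multiplier β ≥ 0 sets the trade-off between compressing X and retaining information about Y.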
Auto-Encoding Variational Bayes, Diederik P. Kingma, Max Welling, 2013. International Conference on Learning Representations (ICLR 2014). DOI: 10.48550/arXiv.1312.6114 - Introduces the Variational Autoencoder (VAE) framework for generative modeling and representation learning, which serves as the foundation for the methods discussed here.
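As a quick reminder of the objective this entry refers to: the VAE is trained by maximizing the evidence lower bound (ELBO) on the marginal log-likelihood, with encoder q_φ(z | x), decoder p_θ(x | z), and prior p(z):

\[
\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] - D_{\mathrm{KL}}\!\big(q_\phi(z \mid x) \,\|\, p(z)\big).
\]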
β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, Irina Higgins, Loïc Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner, 2017. International Conference on Learning Representations (ICLR 2017) - Presents the β-VAE, which explicitly modifies the VAE objective to encourage disentanglement by scaling the KL divergence term, directly relating it to the Information Bottleneck principle.
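The modification mentioned above is a single scalar change to the ELBO: the KL term is weighted by a coefficient β, with β > 1 constraining the latent code more heavily and, empirically, encouraging disentangled factors:

\[
\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] - \beta \, D_{\mathrm{KL}}\!\big(q_\phi(z \mid x) \,\|\, p(z)\big).
\]

Setting β = 1 recovers the standard VAE; larger β tightens the implicit information bottleneck on z.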
Deep Variational Information Bottleneck, Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy, 2017. International Conference on Learning Representations (ICLR 2017). DOI: 10.48550/arXiv.1612.00410 - Connects the Information Bottleneck principle to deep learning, proposing a variational approximation that parallels VAEs and explicitly formulates the IB objective for neural networks.
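Concretely, the paper maximizes I(Z; Y) − β I(Z; X) for a stochastic encoder p(z | x) by bounding both mutual-information terms variationally. One common form of the resulting per-example loss, with a variational classifier q(y | z) and an approximate marginal r(z), is:

\[
\mathcal{L} \;=\; \mathbb{E}_{p(z \mid x)}\!\left[-\log q(y \mid z)\right] + \beta \, D_{\mathrm{KL}}\!\big(p(z \mid x) \,\|\, r(z)\big).
\]

Note the parallel to the β-VAE objective above: when the prediction target y is the input x itself, this loss reduces to reconstruction plus a β-weighted KL term.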