Why Fully-Connected Autoencoders Fall Short for Images
Deep Learning, Ian Goodfellow, Yoshua Bengio, and Aaron Courville, 2016 (MIT Press) - Comprehensive guide to deep learning, covering fully-connected networks, autoencoders, and convolutional neural networks, explaining their architectures and the reasons for convolutions in image processing.
Gradient-Based Learning Applied to Document Recognition, Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner, 1998, Proceedings of the IEEE, Vol. 86 (IEEE), DOI: 10.1109/5.726791 - Foundational work introducing Convolutional Neural Networks (CNNs) and their benefits for image processing, such as local receptive fields and weight sharing, which address the limitations of fully-connected networks for images.
CS231n: Convolutional Neural Networks for Visual Recognition - Lecture Notes, Fei-Fei Li, Ehsan Adeli, Justin Johnson, Zane Durante, 2024 - High-quality lecture notes offering explanations of convolutional networks, their advantages over fully-connected layers for image data, and concepts like spatial structure preservation, weight sharing, and translation invariance.
Reducing the Dimensionality of Data with Neural Networks, Geoffrey E. Hinton and Ruslan R. Salakhutdinov, 2006, Science, Vol. 313 (American Association for the Advancement of Science), DOI: 10.1126/science.1127647 - A seminal paper introducing autoencoders for dimensionality reduction, providing the base understanding of the architecture from which fully-connected autoencoders are derived.
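The weight-sharing advantage these references describe can be made concrete with a quick parameter count. The sketch below compares a fully-connected layer against a small convolutional layer on a 28x28 grayscale image; the image size, hidden width, and filter configuration are illustrative assumptions, not values from the sources above.

```python
# Illustrative parameter counts; 28x28 input, 128 hidden units, and a
# 32-filter 3x3 conv layer are assumed sizes chosen for this sketch.
in_pixels = 28 * 28  # a fully-connected layer must flatten the image
hidden = 128
fc_params = in_pixels * hidden + hidden  # weights + biases

# A conv layer with 32 filters of size 3x3 on a 1-channel input shares
# each filter across all spatial positions, so its parameter count does
# not depend on the image size at all.
filters, k, channels = 32, 3, 1
conv_params = filters * (k * k * channels) + filters  # weights + biases

print(fc_params)   # 100480
print(conv_params) # 320
```

The fully-connected layer needs over 100,000 parameters for even this tiny image, and the count grows linearly with pixel count; the convolutional layer stays at 320 regardless of resolution, which is one concrete reason convolutions scale to images where fully-connected autoencoders do not.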