Now that we've seen that an autoencoder's main job is to reconstruct its input and that we use loss functions to measure how well it's doing, let's look at how data actually moves through an autoencoder to produce that reconstruction. This one-way journey of data from the input to the output is called forward propagation.
Imagine you're sending a message (X) through a series of translators and summarizers, and then back through expanders and translators to try to recover the original message (X′). Forward propagation is this entire send-and-receive process.
Here's a step-by-step breakdown of how data flows:
Entering the Input Layer: Your data, whether it's an image, a set of numbers, or sensor readings, first enters the input layer. This layer doesn't do any computation; it simply passes the raw data into the first part of the autoencoder, the encoder. Let's call our input data X.
Journey Through the Encoder: The data then travels through the encoder, which is typically made up of one or more "hidden layers." Each layer contains processing units (often called neurons). Each neuron computes a weighted sum of its inputs, adds a bias, and passes the result through an activation function (for example, h = σ(Wx + b)). Because successive encoder layers usually contain fewer neurons than the layers before them, the data is progressively compressed as it moves deeper.
Reaching the Bottleneck (Latent Space): After passing through all the encoder layers, the data arrives at the bottleneck layer, also known as the latent space. This layer has the smallest number of neurons in the autoencoder. The output of this layer is the compressed representation of the input data, often denoted as z. This z is a dense summary, capturing the most important features of the input in a lower-dimensional space.
Expansion Through the Decoder: The compressed representation z is then fed into the decoder. The decoder's structure is usually a mirror image of the encoder's. Its layers take the compact information from the bottleneck and progressively expand it back toward the original data's shape.
Producing the Output: Finally, the data passes through the last layer of the decoder, called the output layer. The output of this layer is the autoencoder's reconstruction of the original input data, which we call X′. The number of neurons in the output layer must match the number of features (e.g., pixels in an image, columns in a dataset) in the original input, so that X′ has the same shape as X.
This entire process, from X to X′, is one "forward pass" or "forward propagation." The autoencoder takes the input, pushes it through its network of layers and calculations, and produces an output.
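To make this concrete, here is a minimal sketch of one forward pass, assuming PyTorch as the framework. The layer sizes (784 input features, a 32-dimensional bottleneck) are hypothetical choices for illustration, not requirements.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_features=784, latent_dim=32):
        super().__init__()
        # Encoder: progressively shrinks the input down to the bottleneck.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),  # bottleneck layer produces z
        )
        # Decoder: mirrors the encoder, expanding z back to the input size.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_features),  # output layer matches the input features
        )

    def forward(self, x):
        z = self.encoder(x)        # X -> z (compression)
        x_prime = self.decoder(z)  # z -> X' (reconstruction)
        return x_prime

model = Autoencoder()
X = torch.rand(1, 784)         # one example with 784 features (e.g., a 28x28 image)
X_prime = model(X)             # one full forward pass
print(X.shape, X_prime.shape)  # torch.Size([1, 784]) torch.Size([1, 784])
```

Notice that X and X′ have identical shapes: the output layer is sized to match the input features, exactly as described in the steps above.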
The diagram shows the path data takes during forward propagation. It starts as input (X), gets compressed by the encoder into a latent representation (z) at the bottleneck, and then the decoder attempts to reconstruct the original data (X′) from this representation.
The output X′ generated by this forward propagation is what we then compare to the original input X. The difference between them, as we discussed with loss functions, tells us how well the autoencoder is performing its reconstruction task. This error measure is then used in the next step, backpropagation, to adjust the autoencoder's weights and biases, which we'll cover next.
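As a sketch of that comparison, again assuming PyTorch and mean squared error as the loss, the reconstruction error for one forward pass can be computed like this. The tensors here are random stand-ins for a real input and its reconstruction.

```python
import torch
import torch.nn as nn

loss_fn = nn.MSELoss()

# Stand-ins for a real input X and its reconstruction X' from a forward pass.
X = torch.rand(1, 784)
X_prime = torch.rand(1, 784)

loss = loss_fn(X_prime, X)  # average squared difference between X' and X
print(loss.item())          # a single number summarizing reconstruction error
```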