You have learned about PyTorch's core components, including Tensors, automatic differentiation with Autograd, defining models using torch.nn, and implementing data loading and training procedures. This chapter builds upon that foundation by introducing how to construct specific, widely used neural network architectures.
We will focus on two fundamental types of models: Convolutional Neural Networks (CNNs), commonly applied to grid-like data such as images, and Recurrent Neural Networks (RNNs), designed for sequential data. For CNNs, you will build a simple model using layers such as nn.Conv2d and nn.MaxPool2d, and learn how to manage the input and output shapes of these layers. For RNNs, you will construct a basic model with the nn.RNN layer and see the specific data format required for sequential inputs in PyTorch. A brief mention of more advanced variants like LSTMs and GRUs is also included. By the end of this chapter, you will be able to construct simple versions of these common architectures in PyTorch, preparing you to tackle more complex models later.
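As a quick preview, here is a minimal sketch of the layers this chapter works with. The channel counts, kernel sizes, hidden sizes, and input shapes below are arbitrary illustration values, not the settings used in the later sections.

```python
import torch
import torch.nn as nn

# Convolutional layers: a Conv2d followed by MaxPool2d.
# Image inputs use the shape convention (batch, channels, height, width).
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
pool = nn.MaxPool2d(kernel_size=2)

images = torch.randn(8, 3, 32, 32)    # a batch of 8 RGB 32x32 images
features = pool(conv(images))         # resulting shape: (8, 16, 16, 16)
print(features.shape)

# Recurrent layer: nn.RNN with batch_first=True expects (batch, seq_len, features).
rnn = nn.RNN(input_size=10, hidden_size=20, batch_first=True)

sequences = torch.randn(8, 5, 10)     # 8 sequences, 5 time steps, 10 features each
output, hidden = rnn(sequences)       # output: (8, 5, 20), hidden: (1, 8, 20)
print(output.shape, hidden.shape)
```

The sections that follow explain each of these pieces, including how the layer parameters determine the output shapes printed above.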
7.1 Convolutional Neural Networks (CNNs) Overview
7.2 Building a Simple CNN in PyTorch
7.3 Understanding Input/Output Shapes for CNN Layers
7.4 Recurrent Neural Networks (RNNs) Overview
7.5 Building a Simple RNN in PyTorch
7.6 Handling Sequential Data Input for RNNs
7.7 Brief Mention of LSTM and GRU
7.8 Practice: Implementing Basic CNN and RNN