Having established a foundation with PyTorch Tensors and the Autograd system for gradient computation, we now turn to constructing neural networks themselves. This chapter focuses on the `torch.nn` package, PyTorch's dedicated library for building network architectures efficiently.

You will learn to use the core `nn.Module` class as the blueprint for your models. We will assemble networks from common building blocks provided by PyTorch, including linear (`nn.Linear`), convolutional (`nn.Conv2d`), and recurrent (`nn.RNN`) layers, and integrate activation functions (e.g., ReLU, Sigmoid) to introduce non-linearity. You'll also learn how to define training objectives with loss functions from `torch.nn` (such as `MSELoss` or `CrossEntropyLoss`) and how to select an optimization algorithm from `torch.optim` (such as SGD or Adam) that iteratively refines your model's parameters during training. By the end of this chapter, you'll be able to define and instantiate your own basic neural networks in PyTorch.
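To preview how these pieces fit together, here is a minimal sketch of a tiny regression model. The class name `SimpleNet`, the 10-16-1 layer sizes, and the random data are illustrative assumptions, not part of this chapter's exercises:

```python
import torch
import torch.nn as nn
import torch.optim as optim

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Two fully connected layers; the 10 -> 16 -> 1 sizes are arbitrary.
        self.fc1 = nn.Linear(10, 16)
        self.fc2 = nn.Linear(16, 1)
        self.relu = nn.ReLU()  # non-linear activation between the layers

    def forward(self, x):
        x = self.relu(self.fc1(x))
        return self.fc2(x)

model = SimpleNet()
criterion = nn.MSELoss()                            # the training objective
optimizer = optim.SGD(model.parameters(), lr=0.01)  # the parameter update rule

# One illustrative training step on random data.
inputs = torch.randn(4, 10)   # a batch of 4 samples with 10 features each
targets = torch.randn(4, 1)

optimizer.zero_grad()                      # clear gradients from any prior step
loss = criterion(model(inputs), targets)   # forward pass and loss computation
loss.backward()                            # Autograd computes the gradients
optimizer.step()                           # the optimizer refines the parameters
```

Each step here (subclassing `nn.Module`, choosing layers and activations, picking a loss, and stepping an optimizer) gets its own section below.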
4.1 The `torch.nn.Module` Base Class
4.2 Defining Custom Network Architectures
4.3 Common Layers: Linear, Convolutional, Recurrent
4.4 Activation Functions (ReLU, Sigmoid, Tanh)
4.5 Sequential Containers for Simple Models
4.6 Loss Functions (`torch.nn` losses)
4.7 Optimizers (`torch.optim`)
4.8 Practice: Building a Simple Network