In the previous chapter, we established how to represent graph data numerically. The next logical step is to understand how a neural network can operate on this structure. Unlike data with a fixed grid-like topology such as images, graphs require a specialized approach. This chapter introduces the operational mechanism of most Graph Neural Networks: a process known as message passing.
We will break down this mechanism into its components. You will learn how a node collects information from its immediate neighbors (aggregation) and then uses that information to modify its own representation (update). We will formalize this process with a general mathematical formula and examine common functions for both steps. We will also cover why this design is inherently suited for graph data by discussing permutation invariance. Finally, you will see how stacking multiple message passing layers enables a GNN to capture information from nodes that are more than one hop away, expanding its receptive field across the graph.
Throughout the chapter, we will use mathematical notation to define these operations. For instance, a single GNN layer can be generally expressed as:
$$
h_v^{(l+1)} = \text{UPDATE}^{(l)}\left(h_v^{(l)},\ \text{AGGREGATE}^{(l)}\left(\{\, h_u^{(l)} : u \in \mathcal{N}(v) \,\}\right)\right)
$$

where $h_v^{(l)}$ is the feature vector of node $v$ at layer $l$, and $\mathcal{N}(v)$ is the set of neighbors of node $v$. To solidify these ideas, the chapter concludes with a practical exercise where you will implement a basic message passing layer from scratch using NumPy.
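As a small preview of that exercise, the sketch below shows one way this formula can look in NumPy: mean pooling plays the role of AGGREGATE, and a linear map over the concatenation of a node's own features and its aggregated message, followed by a ReLU, plays the role of UPDATE. The function name, the adjacency-list format, and the weight shapes are illustrative choices for this sketch, not the specific implementation used in Section 2.7.

```python
import numpy as np

def message_passing_layer(H, neighbors, W, b):
    """One GNN layer: mean-aggregate neighbor features, then update.

    H         : (num_nodes, d_in) node feature matrix; row v is h_v^(l)
    neighbors : list where neighbors[v] holds the indices in N(v)
    W, b      : weight matrix of shape (2 * d_in, d_out) and bias (d_out,)
    """
    num_nodes, d_in = H.shape
    H_next = np.zeros((num_nodes, W.shape[1]))
    for v in range(num_nodes):
        # AGGREGATE: average the feature vectors of v's neighbors.
        if neighbors[v]:
            m_v = np.mean(H[neighbors[v]], axis=0)
        else:
            m_v = np.zeros(d_in)  # isolated node: nothing to aggregate
        # UPDATE: combine h_v with the aggregated message via a linear
        # map, then apply a ReLU non-linearity.
        z_v = np.concatenate([H[v], m_v]) @ W + b
        H_next[v] = np.maximum(z_v, 0.0)
    return H_next

# Example: a 4-node path graph 0-1-2-3 with 2-dimensional features.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 2))
neighbors = [[1], [0, 2], [1, 3], [2]]
W = rng.normal(size=(4, 3))  # input is [h_v, m_v], i.e. 2 + 2 = 4 dims
b = np.zeros(3)
print(message_passing_layer(H, neighbors, W, b).shape)  # (4, 3)
```

Note that the loop over nodes is written for clarity; the chapter's later sections discuss how the same computation is usually expressed with matrix operations over the whole graph.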
2.1 The Neighborhood Aggregation Idea
2.2 A General GNN Layer: Aggregate and Update
2.3 Common Aggregation Functions
2.4 Update Functions and Non-Linearities
2.5 Permutation Invariance and Equivariance
2.6 Stacking Layers to Form a Deep GNN
2.7 Practice: A Simple GNN Layer with NumPy