The Kubernetes control plane acts as the brain of a cluster, making decisions and managing its overall state. Worker nodes, conversely, are the muscles that perform the actual work. Every worker node is a machine, either physical or virtual, responsible for running the containerized applications you deploy. To accomplish this, each node runs a set of essential components that receive instructions from the control plane and manage the lifecycle of workloads on that specific machine.
Let's examine the three core components that run on every worker node.
The Kubelet is the primary agent running on each worker node. Its main responsibility is to ensure that the containers described in Pod specifications (PodSpecs) are running and healthy. The Kubelet doesn't manage containers it wasn't instructed to create by the Kubernetes API Server.
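The Kubelet's core behavior can be thought of as a reconciliation loop: compare the Pods the API Server has assigned to the node (desired state) against what is actually running (actual state), and act on the difference. The sketch below is only an illustration of that idea; the names `reconcile`, `desired_pods`, and `running_pods` are hypothetical, not real Kubelet APIs.

```python
def reconcile(desired_pods: set[str], running_pods: set[str]) -> dict[str, list[str]]:
    """Return the actions a node agent would take to converge actual state
    toward desired state."""
    to_start = sorted(desired_pods - running_pods)  # assigned but not running
    to_stop = sorted(running_pods - desired_pods)   # running but no longer assigned
    return {"start": to_start, "stop": to_stop}

actions = reconcile(
    desired_pods={"web-1", "web-2", "cache-1"},
    running_pods={"web-1", "old-job-7"},
)
print(actions)  # {'start': ['cache-1', 'web-2'], 'stop': ['old-job-7']}
```

Note that the loop only acts on Pods it was told about, which mirrors the point above: the Kubelet ignores containers it was not instructed to create.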
It operates as a communication bridge: it watches the API Server for Pods scheduled to its node, acts on those PodSpecs locally, and reports the status of the node and its Pods back to the control plane.
The container runtime is the software responsible for running containers. While Docker was an early popular choice, Kubernetes supports any runtime that conforms to its Container Runtime Interface (CRI). This allows for flexibility and the use of more lightweight and efficient runtimes like containerd or CRI-O.
The Kubelet communicates with the container runtime to handle all container-level operations: pulling images from a registry, creating and starting containers, stopping and removing them, and reporting their status back up to the Kubelet.
Essentially, the Kubelet translates the abstract Pod definition into concrete actions for the container runtime to execute.
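This translation step can be sketched in a few lines. The `Runtime` class below is a stand-in for a CRI-conformant runtime such as containerd; its method names are illustrative, not the actual CRI gRPC API.

```python
class Runtime:
    """Hypothetical stand-in for a CRI-conformant container runtime."""
    def __init__(self):
        self.log = []
    def pull_image(self, image):
        self.log.append(f"pull {image}")
    def create_container(self, name, image):
        self.log.append(f"create {name} ({image})")
    def start_container(self, name):
        self.log.append(f"start {name}")

def launch_pod(runtime, pod_spec):
    """Walk the Pod's container list and issue runtime operations in order."""
    for c in pod_spec["containers"]:
        runtime.pull_image(c["image"])
        runtime.create_container(c["name"], c["image"])
        runtime.start_container(c["name"])

rt = Runtime()
launch_pod(rt, {"containers": [{"name": "app", "image": "nginx:1.27"}]})
print(rt.log)  # ['pull nginx:1.27', 'create app (nginx:1.27)', 'start app']
```

The key design point is the separation of concerns: the agent decides *what* should run, while the runtime knows *how* to run it.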
A worker node's components in action. The API Server sends instructions to the Kubelet, which directs the container runtime to manage Pods. Kube-proxy handles the network routing for these Pods.
Networking between Pods, which may be spread across multiple nodes, is a fundamental requirement. The Kubernetes proxy, or kube-proxy, is a network proxy that runs on each node and is a critical part of the Kubernetes networking model.
Its primary function is to maintain network rules on the host operating system. These rules, often implemented using iptables or IPVS on Linux, allow network traffic to be correctly routed to the Pods. When you create a Kubernetes Service, which provides a stable IP address for a group of Pods, it is kube-proxy that translates the Service's virtual IP into the actual IP addresses of the backing Pods and load balances traffic among them.
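The effect of those rules can be modeled with a toy round-robin proxy. Real kube-proxy programs iptables or IPVS rules in the kernel rather than proxying in user space, so this sketch is only an analogy; the class and IP addresses are invented for illustration.

```python
import itertools

class ServiceProxy:
    """Toy model: map a Service's stable virtual IP to its backing Pod IPs."""
    def __init__(self, virtual_ip, pod_ips):
        self.virtual_ip = virtual_ip
        self._backends = itertools.cycle(pod_ips)  # round-robin over endpoints
    def route(self, dest_ip):
        if dest_ip != self.virtual_ip:
            raise ValueError("no rule for this destination")
        return next(self._backends)  # pick the next backing Pod

proxy = ServiceProxy("10.96.0.10", ["10.244.1.5", "10.244.2.7"])
print([proxy.route("10.96.0.10") for _ in range(3)])
# ['10.244.1.5', '10.244.2.7', '10.244.1.5']
```

Clients only ever see the stable virtual IP, so Pods can be created and destroyed behind the Service without breaking connections to it.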
These three components work together to bring your applications to life. The Kubelet acts as the local supervisor, the container runtime handles the low-level container mechanics, and kube-proxy manages the network connectivity. This distributed architecture allows Kubernetes to scale horizontally. By adding more worker nodes, you provide more resources for the control plane to manage, enabling the cluster to run a greater number of applications without manual intervention on individual machines.