While Pods are the fundamental execution units in Kubernetes, managing them individually is not a practical strategy for running applications. A standalone Pod will not be rescheduled if its node fails, and scaling requires you to manually create or delete Pods one by one. This manual process is inefficient and does not provide the resilience expected from a modern container orchestrator.
To automate workload management, Kubernetes uses controllers. A controller is an active reconciliation loop that watches one or more Kubernetes resource types. Each of these objects has a spec field that declares its desired state, and the controller works continuously to make the cluster's current state match that desired state.
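To make "desired state" concrete, the sketch below shows a minimal Deployment manifest. The name `web-server` and the image tag are illustrative placeholders, not values prescribed by this chapter.

```yaml
# Minimal Deployment manifest (illustrative; names and image are placeholders).
# Everything under spec declares desired state; the Deployment controller
# continuously reconciles the cluster toward it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 3                # desired state: three identical Pod replicas
  selector:
    matchLabels:
      app: web-server        # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

If a Pod managed by this Deployment is deleted or its node fails, the controller notices that the current replica count no longer matches `spec.replicas` and creates a replacement.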
This chapter introduces the controllers that manage stateless application workloads. It covers the following topics:
By the end of this chapter, you will move from managing individual Pods to managing the entire application lifecycle with higher-level abstractions that provide scaling, self-healing, and automated updates.
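The lifecycle operations described above (scaling, rolling updates, rollbacks) map to a handful of kubectl commands. The sequence below is a hypothetical workflow; it assumes a running cluster, a configured kubectl, and placeholder names (`web`, `nginx:1.25`).

```shell
# Hypothetical workflow (assumes a running cluster and configured kubectl).
kubectl create deployment web --image=nginx:1.25   # create a Deployment
kubectl scale deployment web --replicas=5          # change the desired replica count
kubectl set image deployment/web nginx=nginx:1.26  # trigger a rolling update
kubectl rollout status deployment/web              # watch the rollout progress
kubectl rollout undo deployment/web                # roll back to the previous revision
```

Each command edits only the desired state; the Deployment controller performs the actual Pod creation, replacement, and rollback.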
3.1 Introduction to Kubernetes Controllers
3.2 ReplicaSets for Pod Availability
3.3 Deployments for Application Rollouts
3.4 Defining a Deployment Manifest
3.5 Executing Deployment Updates and Rollbacks
3.6 Inspecting Deployment Status
3.7 Hands-on Practical: Creating and Updating Deployments
© 2026 ApX Machine Learning