Deployment objects provide significant advantages when updating a running application. A Deployment automates the process of rolling out changes, ensuring that the transition from one application version to another happens smoothly and with minimal or no disruption to users. This automated process replaces the error-prone manual work of stopping old Pods and starting new ones.
By default, Deployments use a RollingUpdate strategy. When you update the Pod template in a Deployment’s specification, such as changing the container image tag, the Deployment controller initiates a controlled rollout. Instead of stopping all old Pods at once, it gradually replaces them with new ones.
The process works like this:
1. A new ReplicaSet is created with the updated Pod specification.
2. The Deployment controller scales the new ReplicaSet up while scaling the old one down, a few Pods at a time.
3. Once all new Pods are running and ready, the old ReplicaSet is scaled to zero but kept around so it can be used for rollbacks.

This ensures that your application remains available throughout the update, as there is always a mix of old and new Pods serving traffic.
This diagram illustrates the transition during a rolling update. The Deployment controller manages two ReplicaSets, gradually shifting from the old version to the new one to maintain availability.
You can trigger an update in two primary ways: imperatively using kubectl, or declaratively by applying an updated manifest file.
The quickest way to update an application's image is with the kubectl set image command. This command directly modifies the live Deployment object in the cluster.
Suppose you have a Deployment named webapp running an Nginx image with the tag 1.21.0. To update it to 1.22.0, you would run:
kubectl set image deployment/webapp nginx=nginx:1.22.0 --record
The --record flag is useful as it records the command in the Deployment's annotation, making it easier to see what changes were made in each revision later on.
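In newer kubectl releases the --record flag is deprecated. If it is unavailable in your version, you can achieve the same effect by setting the kubernetes.io/change-cause annotation on the Deployment yourself after the update:

kubectl annotate deployment/webapp kubernetes.io/change-cause="update nginx image to 1.22.0"

This annotation is what populates the CHANGE-CAUSE column in the rollout history shown later in this section.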
While imperative commands are convenient for quick changes, the declarative approach is recommended for production environments and GitOps workflows. You simply modify your Deployment YAML file and apply it again.
If your original deployment.yaml looked like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.0 # The old image version
        ports:
        - containerPort: 80
You would change the image field to nginx:1.22.0 and re-apply the manifest:
kubectl apply -f deployment.yaml
The apply operation computes the difference (a "diff") between the live object and the desired state defined in the file, and the Deployment controller triggers a rolling update only if the Pod template has changed.
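If you want to preview exactly what would change before applying, kubectl can show a server-side diff against the live object:

kubectl diff -f deployment.yaml

A non-empty diff in the Pod template section is what will trigger a new rollout once the manifest is applied.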
You can fine-tune the rolling update process using two parameters in the Deployment specification: maxUnavailable and maxSurge.
maxUnavailable: Specifies the maximum number of Pods that can be unavailable during the update. It can be an absolute number (e.g., 1) or a percentage of desired replicas (e.g., 25%).

maxSurge: Defines the maximum number of Pods that can be created over the desired number of replicas. This can also be an absolute number or a percentage.

Here is how you would define them in your manifest:
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
With this configuration for 10 replicas, the controller will ensure that at least 9 Pods are always running (10 - 1). It will also ensure that no more than 11 Pods (10 + 1) exist at any moment during the update. These settings help balance speed and safety.
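Both parameters also accept percentages, which scale with the replica count. As an illustrative variation on the manifest above (a sketch, not a recommended value), 25% of 10 replicas is 2.5, which Kubernetes rounds down for maxUnavailable and up for maxSurge:

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # rounded down: at most 2 of 10 Pods unavailable
      maxSurge: 25%        # rounded up: at most 3 Pods above the desired count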
To check the progress of a rollout, use the kubectl rollout status command:
kubectl rollout status deployment/webapp
This command will provide real-time feedback, finishing only when the update is complete or has failed.
Waiting for rollout to finish: 2 of 3 new replicas have been updated...
Waiting for rollout to finish: 2 of 3 new replicas have been updated...
Waiting for rollout to finish: 2 of 3 new replicas have been updated...
Waiting for rollout to finish: 3 of 3 new replicas have been updated...
deployment "webapp" successfully rolled out
You can also use kubectl get deployments and kubectl get replicasets to see the new and old ReplicaSets being managed during the process.
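For example, assuming the app: webapp label from the manifest above:

kubectl get deployments webapp
kubectl get replicasets -l app=webapp

While the rollout is in progress, the old ReplicaSet's Pod count shrinks toward zero as the new ReplicaSet grows toward the desired replica count.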
Sometimes a new release introduces a bug. Deployments provide a simple mechanism to revert to a previously deployed version.
First, you can view the revision history of a Deployment:
kubectl rollout history deployment/webapp
The output will show a list of revisions, each corresponding to a previous version of the Deployment's Pod template.
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl set image deployment/webapp nginx=nginx:1.22.0 --record=true
The CHANGE-CAUSE column is populated from the --record flag or the kubernetes.io/change-cause annotation, which is why recording a change cause for each rollout is good practice.
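To inspect the Pod template recorded for a particular revision, pass the --revision flag to the history command:

kubectl rollout history deployment/webapp --revision=2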
To roll back to the previous version (in this case, revision 1), you can use the undo command:
kubectl rollout undo deployment/webapp
This triggers another rolling update, but this time it reverts the Pod template to the configuration of the previous revision.
If you need to revert to a specific revision, you can specify it with the --to-revision flag:
kubectl rollout undo deployment/webapp --to-revision=1
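As with a forward update, you can confirm the rollback and review the updated history:

kubectl rollout status deployment/webapp
kubectl rollout history deployment/webapp

Note that a rollback is recorded as a new revision: after undoing to revision 1, the restored Pod template appears under a new revision number rather than reactivating the old entry.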
This rollback capability provides a safety net, allowing you to quickly recover from faulty deployments without manual intervention. By managing the entire application lifecycle, from deployment to updates and rollbacks, Deployments offer a resilient and automated way to run stateless applications on Kubernetes.