While a Pod can contain multiple containers, the most common pattern is the single-container Pod. This model treats the Pod as a lightweight wrapper around a single application, which aligns well with the philosophy of running one process per container. This approach provides isolation and straightforward resource management, making it sufficient for a wide range of stateless and stateful applications.
Consider a simple web server. Its sole responsibility is to serve HTTP traffic. Encapsulating it within a single-container Pod is the cleanest and most direct way to deploy it on Kubernetes.
Here is a minimal YAML manifest for a Pod running a single Nginx container:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-web-server
spec:
  containers:
  - name: nginx
    image: nginx:1.23
    ports:
    - containerPort: 80
In this definition, the spec.containers field is an array, but it contains only one element. Kubernetes will schedule this Pod, pull the nginx:1.23 image, and run a single container from it. This one-to-one relationship between the Pod and the application container is the foundation for most workloads.
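Assuming you have kubectl configured against a running cluster, the manifest above can be applied and inspected with a few standard commands (the filename nginx-pod.yaml is an assumption for this example):

```shell
# Save the manifest above as nginx-pod.yaml, then create the Pod
kubectl apply -f nginx-pod.yaml

# Verify that the Pod was scheduled and its single container is running
kubectl get pod nginx-web-server

# Stream the container's logs to confirm the server started
kubectl logs nginx-web-server
```

Because the Pod holds exactly one container, commands like `kubectl logs` do not need a `-c` flag to select a container.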
The single-container model is simple and effective, but it cannot address all use cases. Some applications are composed of tightly coupled processes that benefit from running together on the same machine. This is where multi-container Pods become necessary.
All containers within a Pod share the same network namespace and can share the same storage volumes. This means they can communicate with each other using localhost and can read and write to the same files. Think of a multi-container Pod as a small, logical host where each container is a process running on that host. This co-location is the basis for several established design patterns.
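A minimal sketch of this shared network namespace: the manifest below runs an Nginx server alongside a busybox client that reaches it over localhost, with no Service or DNS involved. The Pod name and polling command are illustrative choices, not part of any standard.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo
spec:
  containers:
  - name: web
    image: nginx:1.23
  - name: client
    image: busybox
    # Both containers share one network namespace, so the client
    # can reach the nginx container at localhost:80
    command: ["/bin/sh", "-c", "while true; do wget -q -O- http://localhost:80; sleep 10; done"]
```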
Figure: Containers within a Pod share resources, enabling patterns where a sidecar container enhances the main application by accessing its files and network.
The most common multi-container pattern is the sidecar. A sidecar container extends or enhances the functionality of the main application container. Its purpose is to offload a secondary task, such as logging, monitoring, or proxying requests, without altering the application code.
A classic example is a log shipping sidecar. The main application writes its logs to a file on a shared volume. The sidecar container runs a separate process, like Fluentd or Logstash, which tails this log file and forwards the entries to a centralized logging service.
Here is a manifest for a Pod with a web application and a log-shipping sidecar:
apiVersion: v1
kind: Pod
metadata:
  name: web-app-with-sidecar
spec:
  containers:
  - name: main-app
    image: busybox
    # This command simulates an app writing logs every 5 seconds
    command: ["/bin/sh", "-c", "while true; do echo \"Log entry: $(date)\" >> /var/log/app.log; sleep 5; done"]
    volumeMounts:
    - name: log-volume
      mountPath: /var/log
  - name: sidecar-logger
    image: busybox
    # Create the file first so tail does not fail if this container
    # starts before the main app has written its first entry
    command: ["/bin/sh", "-c", "touch /var/log/app.log; tail -f /var/log/app.log"]
    volumeMounts:
    - name: log-volume
      mountPath: /var/log
  volumes:
  - name: log-volume
    emptyDir: {}
In this example:

- An emptyDir volume named log-volume is defined for the Pod. This volume is created when the Pod is scheduled onto a node and deleted when the Pod is terminated.
- The main-app container mounts this volume at /var/log and writes its logs to app.log.
- The sidecar-logger container also mounts the same volume at /var/log and can therefore read the app.log file written by the main application.

This separation of concerns allows you to build a standardized logging agent that can be attached to any application without requiring developers to integrate logging libraries into their code.
The adapter pattern is used to standardize the output or interface of an existing application. While a sidecar adds new functionality, an adapter transforms existing output to conform to an organization-wide or system-wide standard.
Imagine an application that emits metrics in a proprietary, non-standard format. To integrate it with a monitoring system like Prometheus, which requires a specific exposition format, you can deploy an adapter container. This container would share a volume or network with the main application, read the proprietary metrics, convert them into the Prometheus format, and expose them on a new network port for scraping. This allows you to integrate legacy or third-party applications into your modern observability stack without modifying their source code.
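A sketch of what such a Pod might look like, assuming a legacy application that serves proprietary metrics on port 8080 and a purpose-built adapter that re-exposes them in Prometheus format on port 9090. Both image names here are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-adapter
spec:
  containers:
  - name: legacy-app
    image: legacy-app:1.0          # hypothetical image with proprietary metrics
    ports:
    - containerPort: 8080
  - name: metrics-adapter
    image: metrics-adapter:1.0     # hypothetical adapter image
    # The adapter reads from localhost:8080, translates the metrics,
    # and serves them in the Prometheus exposition format
    ports:
    - containerPort: 9090
```

Prometheus would then be configured to scrape port 9090, never needing to understand the legacy format.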
The ambassador pattern provides a unified interface for accessing external services. It acts as a local proxy within the Pod that routes traffic from the application container to the correct destination.
For example, an application might need to connect to a sharded database cluster. Instead of building complex connection logic and service discovery into the application itself, the application can be configured to connect to a simple endpoint like localhost:6379. The ambassador container, listening on that port, would receive the request and intelligently proxy it to the appropriate database shard. If the database topology changes, only the ambassador's configuration needs to be updated, not the application code. This simplifies application development and centralizes connection management.
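The same idea expressed as a manifest sketch: the application connects only to localhost:6379, and a proxy container listening on that port handles routing to the real database shards. The image names are hypothetical and the shard-routing logic lives entirely inside the ambassador image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
  - name: main-app
    image: my-app:1.0              # hypothetical; connects to localhost:6379
  - name: redis-ambassador
    image: redis-shard-proxy:1.0   # hypothetical proxy image
    # Listens on localhost:6379 inside the Pod and forwards each
    # request to the appropriate database shard
    ports:
    - containerPort: 6379
```

If the shard topology changes, only the ambassador image or its configuration is updated; main-app is redeployed unchanged.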