A Service manifest defines the network endpoint for your application. This declarative YAML file describes a stable IP address and DNS name that Kubernetes will maintain, automatically routing traffic to the correct set of backend Pods even as they are created, destroyed, or rescheduled.
A Service manifest adheres to the standard structure of Kubernetes objects, containing four top-level fields: apiVersion, kind, metadata, and spec.
- apiVersion: For Services, this is always v1.
- kind: This must be set to Service.
- metadata: Contains data that helps uniquely identify the Service object, such as its name.
- spec: This is where you define the desired characteristics of the Service, including how it finds Pods and which ports it exposes.

Let's examine a basic manifest for a ClusterIP Service, the default type, which exposes the Service on an internal IP within the cluster.
```yaml
# service-clusterip-example.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-backend-service
spec:
  type: ClusterIP # This is the default and can be omitted
  selector:
    app: my-backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```
The spec field is the heart of the Service manifest and contains the configuration that brings the network abstraction to life. The two most significant fields within the spec are selector and ports.
The selector field is the mechanism that links a Service to a group of Pods. It contains a set of key-value pairs that must match the labels on the Pods you wish to target. When a request arrives at the Service's IP address and port, Kubernetes consults this selector, identifies all running Pods with matching labels, and forwards the traffic to one of them.
This label-selector pattern is a fundamental design principle in Kubernetes. It decouples the Service from the Pods. The Service does not need to know the individual IP addresses of the Pods, which are ephemeral. It only needs to know the label that identifies the group.
The selector (app: my-backend) on the Service ensures that it only directs traffic to Pods with the corresponding label, effectively creating a stable endpoint for a dynamic group of Pods.
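To make the matching concrete, here is a sketch of a Deployment whose Pod template carries the app: my-backend label that the Service above selects. The Deployment name, replica count, and image are illustrative assumptions, not part of the original example:

```yaml
# deployment-backend-example.yaml (illustrative sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-backend
  template:
    metadata:
      # This label is what my-backend-service's selector matches on.
      labels:
        app: my-backend
    spec:
      containers:
        - name: backend
          image: my-backend:1.0 # placeholder image name
          ports:
            - containerPort: 8080 # matches the Service's targetPort
```

Any Pod created from this template, on any node, is automatically picked up as a backend for my-backend-service; no IP addresses need to be coordinated.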
The ports field is an array of objects, each defining a specific port mapping for the Service. This allows a single Service to expose multiple ports. Each entry in the array must specify port and targetPort.
- port: This is the port number that the Service will expose on its own virtual IP (ClusterIP). Other applications inside the cluster will connect to the Service using this port. In our example, other Pods would connect to my-backend-service on port 80.
- targetPort: This is the port on the container inside the target Pods where the application is listening. Traffic arriving at the Service's port will be forwarded to this targetPort on a selected Pod. In the example, traffic to port 80 on the Service is sent to port 8080 inside the Pods. The targetPort can be a number or the name of a port defined in the Pod specification, a practice which can make your configurations more flexible.
- protocol: Specifies the network protocol, which defaults to TCP. UDP and SCTP are also valid options.
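Because ports is an array, a single Service can map several ports at once, and targetPort can refer to a named container port instead of a number. The sketch below assumes a Pod spec that names one of its container ports http-api; the port names and numbers here are illustrative:

```yaml
# service-multiport-example.yaml (illustrative sketch)
apiVersion: v1
kind: Service
metadata:
  name: my-backend-service
spec:
  selector:
    app: my-backend
  ports:
    # When a Service defines more than one port, each entry must have a name.
    - name: http
      protocol: TCP
      port: 80
      targetPort: http-api # resolves to the containerPort named http-api
    - name: metrics
      protocol: TCP
      port: 9090
      targetPort: 9090
```

Referencing the port by name means you can later change the container's listening port in the Pod template without touching the Service manifest.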
The behavior of a Service is primarily controlled by the type field in its spec. While you have already been introduced to the different types, seeing them defined in a manifest makes their purpose clearer.
A NodePort Service exposes the application on a static port (by default in the range 30000-32767) on each worker node's IP address. This is useful for exposing an application during development or for services that do not require a cloud load balancer.
```yaml
# service-nodeport-example.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-webapp-service
spec:
  # Expose this Service on each Node's IP at a static port.
  type: NodePort
  selector:
    app: my-webapp
  ports:
    - protocol: TCP
      # Port on the Service's internal ClusterIP
      port: 80
      # Port on the container
      targetPort: 8000
      # Static port on the Node's IP. If not specified, Kubernetes assigns one.
      nodePort: 30080
```
With this manifest, you could access your web application from outside the cluster by navigating to http://<any-node-ip>:30080.
A LoadBalancer Service is the standard way to expose an application to the internet in a cloud environment. When you create a Service of this type, the cloud provider's integration with Kubernetes will automatically provision an external load balancer and assign it a public IP address.
```yaml
# service-loadbalancer-example.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api-gateway
spec:
  # Provision an external load balancer (requires cloud provider support).
  type: LoadBalancer
  selector:
    component: api-gateway
  ports:
    - protocol: TCP
      # Port exposed by the external load balancer.
      port: 80
      # Port on the target Pods.
      targetPort: 8080
```
After applying this manifest in a cloud like AWS, GCP, or Azure, an external load balancer would be created. Traffic sent to its public IP on port 80 would be routed to your api-gateway Pods on port 8080.
By mastering the Service manifest, you gain precise control over your application's network identity within the cluster. You can define stable, discoverable endpoints that decouple your microservices, enabling them to communicate reliably in a dynamic environment.