A Kubernetes Service provides a stable IP address and port in front of a set of Pods. After a Service is established, applications within the cluster need a reliable way to discover it. Hardcoding the ClusterIP of a Service into your application code is not a sustainable practice, as these IPs can change if a Service is deleted and recreated. Kubernetes provides a built-in mechanism for this purpose: a cluster-internal DNS system.
By default, a Kubernetes cluster is configured with an internal DNS service. This service, typically implemented by CoreDNS, automatically creates DNS records for each new Service created in the cluster. Your application's Pods are configured to use this internal DNS server for name resolution, allowing them to locate other services simply by using their names.
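You can see this DNS Service in your own cluster. It lives in the kube-system namespace and is conventionally named kube-dns even when CoreDNS provides the implementation; the ClusterIP shown below is just a common default and will differ between clusters:

# The cluster DNS runs as a regular Service in kube-system
$ kubectl get svc -n kube-system kube-dns
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   30d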
The standard DNS record for a Service follows a predictable pattern:
<service-name>.<namespace-name>.svc.cluster.local
- service-name: The name you gave your Service in its manifest.
- namespace-name: The namespace where the Service resides.
- svc.cluster.local: The configurable cluster domain suffix.

For example, a Service named api-gateway in the production namespace would have a fully qualified domain name (FQDN) of api-gateway.production.svc.cluster.local. When an application resolves this name, the cluster DNS returns the Service's ClusterIP.
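As a sketch, a Service producing that FQDN could be defined like this (the selector label and port numbers are assumptions for the example):

apiVersion: v1
kind: Service
metadata:
  name: api-gateway        # <service-name> part of the DNS record
  namespace: production    # <namespace-name> part of the DNS record
spec:
  selector:
    app: api-gateway       # assumed label on the gateway Pods
  ports:
    - port: 80             # port exposed on the ClusterIP
      targetPort: 8080     # assumed container port

Once applied, api-gateway.production.svc.cluster.local resolves to whatever ClusterIP Kubernetes allocates for this Service.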
A significant advantage of this system is that applications running in the same namespace only need to use the short service-name. For instance, a Pod in the production namespace can reach the gateway by simply connecting to http://api-gateway. The DNS resolver automatically searches within the local namespace, simplifying application configuration.
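For instance, assuming a Pod named client (a placeholder, with wget available in its image) runs in each namespace, both of the following reach the same Service; only the cross-namespace call needs to qualify the name:

# Same namespace (production): the short name is enough
$ kubectl exec -n production client -- wget -qO- http://api-gateway

# Different namespace (staging): qualify the name with the Service's namespace
$ kubectl exec -n staging client -- wget -qO- http://api-gateway.production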
The diagram below illustrates this process. A client Pod needs to communicate with a backend service. Instead of knowing the individual IP addresses of the backend Pods, it performs a DNS lookup for the Service name.
Diagram: A client Pod resolves the my-backend Service name via the internal cluster DNS to get its ClusterIP. Traffic sent to this IP is then automatically load-balanced by kube-proxy to one of the healthy backend Pods.
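You can observe both halves of this picture with kubectl: the stable ClusterIP on the Service, and the changing set of Pod IPs behind it (my-backend here stands in for the Service from the diagram):

# The stable virtual IP that DNS returns for the Service
$ kubectl get svc my-backend

# The current set of healthy Pod IP:port pairs that kube-proxy balances across
$ kubectl get endpoints my-backend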
This integration is handled automatically. When a Pod is scheduled onto a node, the kubelet configures its container networking. Part of this setup involves modifying the container's /etc/resolv.conf file. This file tells the operating system's networking stack where to send DNS queries.
A typical /etc/resolv.conf inside a Pod looks like this:
nameserver 10.96.0.10
search my-namespace.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
Let's break this down:
- nameserver 10.96.0.10: This is the ClusterIP of the cluster's DNS service itself. All DNS queries from this Pod go to this address.
- search ...: This defines the DNS search path. When you try to resolve a short name like api-gateway, the resolver appends each of these domains in order until it finds a match. This is how a query for api-gateway from a Pod in the my-namespace namespace resolves to api-gateway.my-namespace.svc.cluster.local.
- options ndots:5: Any name containing fewer than five dots is tried against the search domains before being treated as a fully qualified name.

You can verify this yourself by running a shell inside a Pod and examining the file:
# Start a temporary busybox pod to use as a client
$ kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- sh
# Once inside the pod's shell, inspect resolv.conf
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
# Use nslookup to resolve the kubernetes service in the same namespace
/ # nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1
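The same resolver handles Services in other namespaces: because of the svc.cluster.local search suffix, a name of the form <service>.<namespace> also resolves without spelling out the full FQDN. For example, still inside the test Pod:

# kube-dns.kube-system expands to kube-dns.kube-system.svc.cluster.local via the search path
/ # nslookup kube-dns.kube-system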
Before the DNS-based approach became standard, Kubernetes relied on environment variables for service discovery, and this mechanism still exists today. When a new Pod is created, the kubelet injects a set of environment variables for every Service that already exists in the Pod's namespace.
For a Service named my-api exposing port 8080, a new Pod would see variables like:
MY_API_SERVICE_HOST=10.101.45.12
MY_API_SERVICE_PORT=8080
MY_API_PORT=tcp://10.101.45.12:8080
MY_API_PORT_8080_TCP=tcp://10.101.45.12:8080
MY_API_PORT_8080_TCP_PROTO=tcp
MY_API_PORT_8080_TCP_PORT=8080
MY_API_PORT_8080_TCP_ADDR=10.101.45.12
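An application that relies on these variables typically stitches them into a URL at startup; a minimal sketch from inside such a Pod might look like this:

# Works only if the my-api Service existed before this Pod was created
$ wget -qO- "http://${MY_API_SERVICE_HOST}:${MY_API_SERVICE_PORT}"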
This method has a significant ordering dependency: the Service must be created before any client Pods that need to connect to it. If you create a client Pod first and then create the Service, the client Pod will not receive the environment variables and will be unable to find the Service.
Because of this limitation, using the internal cluster DNS is the recommended and far more flexible approach for service discovery. It allows your components to be created in any order and to discover each other dynamically.
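In practice, that means putting the Service's DNS name, not an IP, into application configuration. A brief sketch of a Deployment doing this (the image and environment variable name are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-app
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client-app
  template:
    metadata:
      labels:
        app: client-app
    spec:
      containers:
        - name: client
          image: example/client-app:1.0   # placeholder image
          env:
            - name: API_GATEWAY_URL       # assumed variable read by the application
              value: http://api-gateway   # short Service name, resolved by cluster DNS at request time

Because the name is resolved through DNS on each lookup, this Deployment and the api-gateway Service can be created in either order.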