While Service types like NodePort and LoadBalancer provide functional ways to expose your application, they come with operational trade-offs. A NodePort exposes your application on a high, arbitrary port (by default in the 30000-32767 range) on every node, which is not ideal for standard web traffic. A LoadBalancer Service typically provisions a dedicated, cloud-specific L4 load balancer for each Service, which can become costly and unwieldy as your application grows to include many microservices. For web applications that communicate over HTTP and HTTPS, a more intelligent, application-layer routing mechanism is needed.
This is where the Ingress resource provides a more sophisticated solution. An Ingress is not a type of Service. Instead, it is an API object that acts as a collection of routing rules for your cluster. It allows you to define how external traffic, primarily HTTP and HTTPS, should be directed to internal Services based on hostnames or URL paths. This provides a single, stable entry point for your entire application stack, consolidating routing logic and simplifying external access management.
It is important to understand that an Ingress resource on its own does nothing. It is a passive set of rules. To activate these rules, you need an Ingress controller running in your cluster. The Ingress controller is the actual software, a specialized workload running in Pods, that listens for Ingress resources defined in the cluster and configures a reverse proxy (like NGINX, HAProxy, or Traefik) to implement the specified routing.
Most Kubernetes clusters do not come with an Ingress controller installed by default. You or your cluster administrator must choose and deploy one. Once an Ingress controller is running, it watches the Kubernetes API server for any new or updated Ingress resources and reconfigures itself accordingly.
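As a concrete example, one widely used option is the community-maintained NGINX Ingress Controller, which can be installed with Helm. The chart repository URL is the project's official one; the release and namespace names below are conventional choices, not requirements:

```shell
# Install the community NGINX Ingress Controller via Helm.
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Confirm the controller Pods are running before creating Ingress resources.
kubectl get pods --namespace ingress-nginx
```

Other controllers, such as those based on HAProxy or Traefik, follow a similar install-then-watch pattern.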
The traffic flow typically looks like this: an external client sends a request to a single load balancer that directs traffic to the Ingress controller. The controller then inspects the request's hostname and path, consults its routing rules derived from Ingress resources, and forwards the traffic to the appropriate internal Service and its backing Pods.
An Ingress controller receives all incoming traffic and routes it to different internal services based on rules you define in an Ingress resource.
An Ingress resource is defined using a YAML manifest, just like other Kubernetes objects. Its specification contains rules for routing traffic. These rules can be based on the request's hostname (host-based routing) or its URL path (path-based routing).
Let's examine a manifest that defines path-based routing. Here, traffic to example.com/api is sent to one service, while traffic to example.com/ is sent to another.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-application-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ui-service
            port:
              number: 80
Let's break down the spec section:
rules: A list of routing rules. Each rule can apply to a specific host; since no host is specified here, the rule applies to all inbound HTTP traffic.
http.paths: A list of paths and their corresponding backends.
path: The URL path to match. In this example, /api and /.
pathType: Specifies how the path should be matched. Prefix matches any URL whose path starts with the specified value; Exact requires an exact match.
backend.service: Defines the destination Service for traffic matching the path. It specifies the name of the Service and the port number to connect to.

Annotations, such as nginx.ingress.kubernetes.io/rewrite-target, provide additional configuration specific to the Ingress controller being used.
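Modern clusters also use the spec.ingressClassName field to tell a specific controller to handle a given Ingress. Which class names exist depends on what is installed in your cluster; the name nginx below is an assumption based on the controller used in these examples:

```yaml
# Fragment of an Ingress spec selecting a controller by class.
# The class name "nginx" assumes the NGINX Ingress Controller is installed;
# run `kubectl get ingressclass` to see what is available in your cluster.
spec:
  ingressClassName: nginx
```
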
You can also configure host-based routing, often called virtual hosting. This allows you to direct traffic for different hostnames to different services, all through the same external IP address managed by the Ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtual-host-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ui-service
            port:
              number: 80
In this example, requests to api.example.com are routed to api-service, while requests to app.example.com are routed to ui-service.
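Assuming the controller is reachable at some external address (EXTERNAL_IP below is a placeholder), you can verify host-based routing from outside the cluster before DNS is configured by setting the Host header explicitly:

```shell
# Send requests to the same IP with different Host headers; each should
# be routed to a different backend Service. EXTERNAL_IP is a placeholder
# for your Ingress controller's address.
curl -H "Host: api.example.com" http://EXTERNAL_IP/
curl -H "Host: app.example.com" http://EXTERNAL_IP/
```
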
Another significant capability of Ingress is managing TLS termination. Instead of configuring TLS certificates in each individual application Pod, you can centralize this responsibility at the Ingress controller. This simplifies certificate management and means that traffic within your cluster, from the controller to the Pods, can be standard, unencrypted HTTP.
To enable TLS, you first need to store your certificate and private key in a Kubernetes Secret.
kubectl create secret tls my-tls-secret --cert=path/to/tls.crt --key=path/to/tls.key
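If you do not yet have a certificate, a self-signed pair is commonly generated for testing. The CN below matches the example hostname; in production you would use a CA-issued certificate, often automated with a tool such as cert-manager:

```shell
# Generate a self-signed certificate and private key for testing only.
# The common name (CN) must match the hostname the Ingress will serve.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=app.example.com"
```
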
Then, you reference this secret in your Ingress manifest:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: my-tls-secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ui-service
            port:
              number: 80
The tls section in the spec instructs the Ingress controller to use the certificate from my-tls-secret for any traffic destined for app.example.com. The controller will handle the TLS handshake with the client, and then forward the decrypted traffic to the ui-service.
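To check TLS termination end to end before DNS points at the controller, curl's --resolve flag can pin the hostname to the controller's address. EXTERNAL_IP is a placeholder, and -k is only needed because the example certificate is self-signed:

```shell
# Force app.example.com to resolve to the controller's address and
# inspect the TLS handshake with -v; -k skips certificate verification,
# which is only appropriate for self-signed test certificates.
curl -k -v --resolve app.example.com:443:EXTERNAL_IP https://app.example.com/
```
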
By using Ingress, you gain a powerful and flexible method for managing external access to your applications. It consolidates entry points, reduces costs associated with multiple load balancers, and centralizes complex routing and security logic at the edge of your cluster.