Kubernetes Services provide a stable endpoint for a group of Pods. Not all services, however, need to be exposed in the same way. A database used by a backend API has very different networking requirements than a public-facing web server. Kubernetes addresses these varied needs by offering different types of Services, controlled by the spec.type field in the Service manifest. Each type provides a distinct mode of access. This analysis covers the three primary types: ClusterIP, NodePort, and LoadBalancer.
The ClusterIP is the default Service type in Kubernetes. When you create a Service without explicitly specifying a type, you get a ClusterIP. This type exposes the Service on an internal IP address that is only reachable from within the cluster.
This is the most common Service type and is ideal for enabling communication between different microservices inside your cluster. For example, a web application frontend might need to communicate with a backend user-authentication service. Since the authentication service should never be exposed to the public internet, a ClusterIP Service is the perfect fit. It provides a reliable internal endpoint that other Pods can access, but it remains completely isolated from external traffic.
Here is a minimal manifest for a ClusterIP Service that targets Pods with the label app: my-api:
apiVersion: v1
kind: Service
metadata:
  name: my-api-service
spec:
  # type: ClusterIP is the default, so this line is optional
  type: ClusterIP
  selector:
    app: my-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
In this configuration, other applications inside the cluster can reach the API Pods by sending requests to my-api-service:80, and Kubernetes will handle load balancing across the matching Pods.
The ClusterIP Service provides a private virtual IP, making it accessible only to other workloads running within the same Kubernetes cluster.
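To see the internal-only behavior in practice, you can apply the manifest and call the Service from a temporary Pod running inside the cluster. The commands below are a rough sketch: the file name, the test Pod name, and the busybox image are illustrative choices, not part of the manifest above.
# Create the Service (assuming the manifest above is saved as my-api-service.yaml)
kubectl apply -f my-api-service.yaml

# Call the Service by name from a throwaway Pod inside the cluster
kubectl run tmp-client --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://my-api-service:80

# The same request from outside the cluster fails, because the ClusterIP
# is only routable on the cluster's internal network.
Because Service DNS names resolve within the same namespace, the short name my-api-service is enough here; Pods in other namespaces would use my-api-service.<namespace>.svc.cluster.local.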
The NodePort Service type is a straightforward way to expose your application to external traffic. When you set the type to NodePort, Kubernetes allocates a static port from a pre-configured range (typically 30000–32767) on every worker node in the cluster. The Service then becomes accessible from outside the cluster by targeting any node's IP address on that allocated port: <NodeIP>:<NodePort>.
When you create a NodePort Service, Kubernetes automatically creates a ClusterIP Service as well. This means the Service is still available for internal communication via its clusterIP, while also being exposed externally through the node's port.
This type is often used for development and testing environments or for applications where a dedicated cloud load balancer is not necessary. However, it's less common for production web services because you need to manage which node IP to connect to, and a node going down can disrupt access if clients are not configured correctly.
Here is an example manifest for a NodePort Service:
apiVersion: v1
kind: Service
metadata:
  name: my-webapp-service
spec:
  type: NodePort
  selector:
    app: my-webapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 30080  # Optional. If omitted, Kubernetes assigns a port from the NodePort range.
With this configuration, you can access the my-webapp application from outside the cluster by navigating to http://<any-node-ip>:30080.
Traffic directed to the NodePort on any worker node is forwarded by kube-proxy to the internal ClusterIP of the Service, which then routes it to one of the target Pods.
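As a quick external check, you can confirm the allocated port and then call any node's address from a machine that can reach it. This is a sketch assuming the manifest above has been applied; the file name is a placeholder, and <node-ip> stands for a routable node address taken from kubectl get nodes -o wide.
kubectl apply -f my-webapp-service.yaml

# Confirm the allocated node port (30080 here, since the manifest sets it explicitly)
kubectl get service my-webapp-service -o jsonpath='{.spec.ports[0].nodePort}'

# List node addresses, then call one of them from outside the cluster
kubectl get nodes -o wide
curl http://<node-ip>:30080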
The LoadBalancer Service type is the standard way to expose an application to the internet when running on a managed cloud platform. It is designed to integrate with the load-balancing infrastructure of providers such as AWS, Google Cloud, and Azure.
When you create a Service of type LoadBalancer, you are asking the underlying cloud platform to provision an external network load balancer. This load balancer is given a stable, public IP address and is automatically configured to route external traffic to your Service's NodePorts on the worker nodes.
A LoadBalancer Service builds on the other two types. When it is created, Kubernetes automatically creates both a NodePort and a ClusterIP Service to handle the internal routing. The cloud load balancer simply provides the stable, public entry point.
This is the preferred method for production-facing applications running in a supported cloud environment. It handles health checks and ensures traffic is only sent to healthy nodes, providing a more resilient setup than NodePort alone. Be aware that this type is provider-dependent and will not function in on-premises or local development environments like Kind unless you install a specific controller that can simulate a load balancer, such as MetalLB.
The manifest is very simple:
apiVersion: v1
kind: Service
metadata:
  name: my-production-app-service
spec:
  type: LoadBalancer
  selector:
    app: my-production-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
After applying this manifest in a cloud environment, the Service's status will eventually update to include an external IP address, which is the public entry point for your application.
A cloud provider's load balancer directs traffic to the NodePort on the cluster's nodes, providing a single, stable public IP address for external clients.
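After applying the manifest on a supported cloud, you can watch the Service until the provider finishes provisioning. This is a sketch assuming the manifest above; the file name is a placeholder, and on some providers (such as AWS) the load balancer is reported as a hostname rather than an IP.
kubectl apply -f my-production-app-service.yaml

# Watch the EXTERNAL-IP column until the cloud provider assigns an address
kubectl get service my-production-app-service --watch

# Or read the address directly from the Service status
kubectl get service my-production-app-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# The application is then reachable at that address
curl http://<external-ip>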
Choosing the right Service type is a matter of understanding your application's access requirements.
| Type | Access Scope | Mechanism | Common Use Case |
|---|---|---|---|
| ClusterIP | Internal only | A single virtual IP inside the cluster. | Backend services, databases, internal APIs. |
| NodePort | External via Node IP | A static port is opened on every node. | Development, testing, demos, or simple services. |
| LoadBalancer | External via Public IP | Provisions an external load balancer (cloud). | Production web applications needing public access. |
With a firm grasp of these Service types, you can effectively control network traffic flow both within your cluster and from the outside. The next step is to learn how applications use these stable endpoints for service discovery.