When deploying applications in Kubernetes, exposing your services to external users becomes a key requirement. Two commonly used methods to achieve this are:
- LoadBalancer Services
- Ingress Controllers
Although both provide access to your Kubernetes applications from outside the cluster, they work in different ways and serve different purposes. In this post, we’ll explore what each is, how they work, and when to use them — complete with examples.
LoadBalancer
A `LoadBalancer`-type Service in Kubernetes provisions an external load balancer (via your cloud provider) and assigns a public IP to your service. This is the easiest way to expose a single service to the internet.
Example: LoadBalancer Service
Suppose you have a simple web application running in a Kubernetes cluster:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-web-service
spec:
  selector:
    app: my-web-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
```
How it Works
- Kubernetes contacts the cloud provider’s API to create a cloud load balancer (like AWS ELB, GCP’s Load Balancer).
- This load balancer gets a public IP and routes external traffic to your service.
- Traffic from the load balancer is routed to one of the pods behind the service.
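Once provisioning completes, the assigned address is recorded in the Service’s `status.loadBalancer.ingress` field, which is what `kubectl get service my-web-service` reports under EXTERNAL-IP. A trimmed view of that status might look like this (the address below is a placeholder):

```yaml
# Trimmed Service status after the cloud load balancer is ready.
# The address is assigned by the cloud provider; 203.0.113.10 is a placeholder,
# and some providers report a hostname instead of an IP.
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10
```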
Pros
- Easy to set up.
- Great for exposing a single service.
- Direct integration with cloud provider load balancers.
Cons
- Each service of type `LoadBalancer` provisions a new external load balancer, which gets expensive.
- Doesn’t support path-based routing, SSL termination, etc.
Ingress
Ingress is an API object that manages external access to services in a cluster, typically via HTTP/HTTPS. You configure routing rules based on the request host and path.
Components
- Ingress Resource: Defines rules (e.g., route `/api` to Service A, `/web` to Service B).
- Ingress Controller: Software that reads the Ingress rules and configures a reverse proxy (e.g., NGINX, Traefik) accordingly.
Example: Ingress with NGINX
Let’s assume two services: `api-service` (on port 8080) and `web-service` (on port 8081).
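For context, these two backends are ordinary ClusterIP Services; they don’t need to be LoadBalancers, because the Ingress Controller reaches them from inside the cluster. A minimal sketch of `api-service` follows, where the selector label and targetPort are assumptions for this post:

```yaml
# Minimal ClusterIP Service for the API backend.
# The selector label (app: api) and targetPort are assumptions for this sketch;
# web-service would look the same but expose port 8081.
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
```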
Deploy the Ingress Resource:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
          - path: /web
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 8081
```
Make sure you have an Ingress Controller (like NGINX) installed and running.
How it Works
- The Ingress Controller listens for changes in Ingress resources.
- It configures its reverse proxy to route traffic based on the rules.
- A single LoadBalancer or NodePort exposes the Ingress Controller, and it handles internal routing.
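That single entry point is itself just a Service of type LoadBalancer (or NodePort) sitting in front of the controller pods. A sketch of what the NGINX Ingress Controller’s own Service typically looks like; the names and labels mirror a common ingress-nginx install, so treat them as assumptions for your cluster:

```yaml
# The one external entry point: a LoadBalancer Service in front of the
# ingress controller pods. All Ingress-routed traffic enters here.
# Names and labels mirror a typical ingress-nginx install and may differ in yours.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```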
Pros
- Efficient: One LoadBalancer serves many services.
- Supports advanced routing (path/host-based), TLS termination.
- Easy to manage multiple services under the same domain.
Cons
- Requires setting up and managing an Ingress Controller.
- Slightly more complex for beginners.
- Not ideal for non-HTTP traffic.
Ingress vs LoadBalancer: Side-by-Side Comparison
| Feature | LoadBalancer Service | Ingress |
|---|---|---|
| Traffic Type | Layer 4 (TCP/UDP) | Layer 7 (HTTP/HTTPS) |
| Setup Complexity | Simple | Moderate (needs controller) |
| Use Case | Expose a single service | Expose multiple services via one IP |
| Path-based Routing | ❌ Not supported | ✅ Supported |
| TLS/SSL Termination | ❌ Manual in app or external LB | ✅ Built-in with Ingress Controller |
| External IPs Required | One per service | One per Ingress Controller |
| Cost in Cloud Environments | High (per-service LB cost) | Lower (shared LB) |
| Best For | Quick single-service exposure | Hosting multiple services, cost savings |
When Should You Use Each?
Use `LoadBalancer` when:
- You need to expose a single service quickly.
- You’re dealing with non-HTTP traffic (TCP, UDP); see the sketch after this list.
- You don’t want to manage an Ingress Controller.
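For the non-HTTP case, the Service looks just like the earlier web example but points at a raw TCP port. A sketch for exposing an MQTT broker, where the name, selector, and port are illustrative:

```yaml
# Layer-4 exposure of a non-HTTP workload via a cloud load balancer.
# Name, selector label, and port are illustrative; adjust to your deployment.
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker-external
spec:
  selector:
    app: mqtt-broker
  ports:
    - protocol: TCP
      port: 1883
      targetPort: 1883
  type: LoadBalancer
```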
Use `Ingress` when:
- You need path-based or host-based routing.
- You want to expose multiple services via one LoadBalancer.
- You need TLS/SSL termination centrally (a sketch follows this list).
- You want cost savings on public IPs.
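For the centralized TLS point above, the Ingress resource can reference a certificate stored in a Secret, and the controller terminates HTTPS before forwarding plain HTTP to the backends. A minimal sketch, assuming a `kubernetes.io/tls` Secret named `myapp-tls` already exists for `myapp.example.com`:

```yaml
# TLS termination at the ingress layer.
# Assumes a Secret "myapp-tls" of type kubernetes.io/tls holding the
# certificate and key for myapp.example.com (the Secret name is an assumption).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-tls
spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 8081
```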
Real-World Example Use Case
Let’s say you are building a SaaS platform:
- `api.myapp.com` → Backend APIs
- `web.myapp.com` → Frontend
- `admin.myapp.com` → Admin portal
Using a LoadBalancer for each would result in 3 separate public IPs and load balancers, which is expensive.
With Ingress, you can route all this through one NGINX Ingress controller, manage TLS in one place, and save cost.
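A sketch of how those three hostnames could share a single Ingress; the backend Service names and ports are assumptions for this example:

```yaml
# One Ingress, one public entry point, three hostnames.
# Backend Service names and ports are assumptions for this sketch.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: saas-ingress
spec:
  rules:
    - host: api.myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
    - host: web.myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 8081
    - host: admin.myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin-service
                port:
                  number: 8082
```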
Final Thoughts
Both LoadBalancer and Ingress serve the same end goal — exposing Kubernetes services externally — but they cater to different needs. If you’re starting small or handling simple services, LoadBalancer is straightforward. But as your platform grows, Ingress offers the scalability, flexibility, and cost-efficiency you’ll likely need.