Fix: Kubernetes Ingress Not Working (404, 502, or Traffic Not Routing)
Quick Answer
Most broken Ingresses come down to one of a few causes: no Ingress Controller running, a wrong ingressClassName, a backend Service name or port mismatch, a Service selector that matches no pods, or a missing TLS secret. Work through the fixes below in order — start by confirming a controller is actually installed.
The Error
Your Kubernetes Ingress resource is created but requests return 404, 502, or traffic never reaches the backend pods:
curl https://myapp.example.com/api/users
# 404 Not Found (default backend)
# or
# 502 Bad Gateway
# or
# curl: (6) Could not resolve host: myapp.example.com

Or kubectl get ingress shows the Ingress with no ADDRESS:
NAME         CLASS   HOSTS               ADDRESS   PORTS   AGE
my-ingress   nginx   myapp.example.com             80      5m

Or the Ingress has an address but specific paths return 404 while others work.
Why This Happens
Kubernetes Ingress is just a configuration object — it requires an Ingress Controller to actually handle traffic. Common failure causes:
- No Ingress Controller installed — the Ingress resource exists but nothing processes it.
- Wrong ingressClassName — the Ingress references a class that does not match the installed controller.
- Backend service name or port mismatch — the Ingress points to a Service that does not exist or uses the wrong port.
- Service selector not matching pods — the Service exists but selects no pods (wrong labels).
- Path type mismatch — Prefix vs Exact path type causes some routes to 404.
- Annotation misconfiguration — controller-specific annotations (rewrite-target, SSL redirect) are wrong or missing.
- TLS secret missing or invalid — HTTPS Ingress fails when the referenced TLS secret does not exist.
Fix 1: Verify an Ingress Controller Is Running
An Ingress resource does nothing without a controller. Check if one is installed:
# Check for ingress-nginx
kubectl get pods -n ingress-nginx
kubectl get service -n ingress-nginx
# Check for any ingress controller across all namespaces
kubectl get pods -A | grep -i ingress
# Check IngressClass objects
kubectl get ingressclass

Install ingress-nginx if missing:
# Using Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace
# Or using the official manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.0/deploy/static/provider/cloud/deploy.yaml

For AWS (the AWS Load Balancer Controller, formerly the ALB Ingress Controller):
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=my-cluster \
--set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

Fix 2: Set the Correct ingressClassName
Kubernetes 1.18 introduced ingressClassName to specify which controller handles the Ingress:
# Wrong — no class specified, controller may ignore it
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- host: myapp.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
              number: 80

# Correct — specifies the ingress controller class
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
spec:
ingressClassName: nginx # Must match an IngressClass name
rules:
- host: myapp.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
              number: 80

Find available IngressClass names:
kubectl get ingressclass
# NAME    CONTROLLER             PARAMETERS   AGE
# nginx   k8s.io/ingress-nginx   <none>       10d
# alb     ingress.k8s.aws/alb    <none>       5d

For older clusters — use the annotation instead:
metadata:
annotations:
    kubernetes.io/ingress.class: "nginx"  # Older approach, still works

Fix 3: Verify the Backend Service and Port
The Ingress backend must reference an existing Service with the correct port:
# Check the service exists in the same namespace
kubectl get service my-service -n my-namespace
# Check the service's port and selector
kubectl describe service my-service -n my-namespace
# Look for: Port, TargetPort, Selector, Endpoints

If Endpoints shows <none>, the Service is not selecting any pods:
kubectl get endpoints my-service -n my-namespace
# NAME ENDPOINTS AGE
# my-service <none> 5m ← No pods matched
# Check pod labels vs service selector
kubectl get pods -n my-namespace --show-labels
kubectl describe service my-service -n my-namespace | grep Selector

Fix label mismatch:
# Service selector
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: my-app # Must match pod labels exactly
version: v1
ports:
- port: 80
targetPort: 3000
---
# Pod (via Deployment) must have matching labels
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
template:
metadata:
labels:
app: my-app # Matches service selector
        version: v1  # Matches service selector

Ingress backend port must match Service port (not targetPort):
# Service has port: 80, targetPort: 3000
# Ingress must reference port 80 (the Service port), not 3000
backend:
service:
name: my-service
port:
      number: 80  # Service port, not container port

Fix 4: Fix Path Routing and PathType
Incorrect pathType is a common cause of 404 for specific paths:
# pathType: Exact — only matches /api exactly, not /api/users
- path: /api
pathType: Exact
# pathType: Prefix — matches /api, /api/users, /api/v1/...
- path: /api
pathType: Prefix
# pathType: ImplementationSpecific — behavior depends on the controller
- path: /api.*
  pathType: ImplementationSpecific  # ingress-nginx treats this as a regex (requires the use-regex annotation)

Multiple path rules — list more specific paths first:
spec:
rules:
- host: myapp.example.com
http:
paths:
- path: /api/admin # More specific — must come first
pathType: Prefix
backend:
service:
name: admin-service
port:
number: 80
- path: /api # Less specific
pathType: Prefix
backend:
service:
name: api-service
port:
number: 80
- path: / # Catch-all last
pathType: Prefix
backend:
service:
name: frontend-service
port:
              number: 80

Fix 5: Fix ingress-nginx Path Rewriting
When your backend expects requests at / but the Ingress path is /api, you need path rewriting:
# Without rewrite: GET /api/users → backend receives /api/users
# With rewrite: GET /api/users → backend receives /users
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2 # Capture group from path
spec:
ingressClassName: nginx
rules:
- host: myapp.example.com
http:
paths:
- path: /api(/|$)(.*) # Captures everything after /api/
pathType: ImplementationSpecific
backend:
service:
name: api-service
port:
              number: 80

Common ingress-nginx annotations:
metadata:
annotations:
# Rewrite /api/foo → /foo
nginx.ingress.kubernetes.io/rewrite-target: /$2
# Force HTTPS redirect
nginx.ingress.kubernetes.io/ssl-redirect: "true"
# Increase proxy timeouts for slow backends
nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
# Increase max request body size (default 1MB)
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
# Enable CORS
nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://myapp.example.com"

Fix 6: Fix TLS / HTTPS Configuration
If HTTPS is not working, check the TLS secret:
# Verify the TLS secret exists
kubectl get secret my-tls-secret -n my-namespace
# Check the secret has the right keys
kubectl describe secret my-tls-secret -n my-namespace
# Must contain: tls.crt and tls.key

Create a TLS secret from cert files:
kubectl create secret tls my-tls-secret \
--cert=path/to/tls.crt \
--key=path/to/tls.key \
  --namespace my-namespace

Ingress with TLS:
spec:
ingressClassName: nginx
tls:
- hosts:
- myapp.example.com
secretName: my-tls-secret # Must exist in the same namespace
rules:
- host: myapp.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
              number: 80

Use cert-manager for automatic TLS:
metadata:
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod" # cert-manager creates the secret automatically
spec:
tls:
- hosts:
- myapp.example.com
    secretName: myapp-tls  # cert-manager will create this

Fix 7: Debug with kubectl and Ingress Controller Logs
Check Ingress status and events:
kubectl describe ingress my-ingress -n my-namespace
# Look for:
# - Address field (empty = controller not assigned it yet)
# - Events section (errors from the controller)

Check ingress-nginx controller logs:
kubectl logs -n ingress-nginx \
$(kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
# Filter for your host
kubectl logs -n ingress-nginx \
$(kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}') \
  | grep "myapp.example.com"

Check the nginx.conf generated by ingress-nginx:
kubectl exec -n ingress-nginx \
$(kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}') \
  -- cat /etc/nginx/nginx.conf | grep -A10 "myapp.example.com"

Test connectivity from inside the cluster:
# Run a temporary pod to test internal routing
kubectl run curl-test --image=curlimages/curl --rm -it --restart=Never \
-- curl -v http://my-service.my-namespace.svc.cluster.local/
# Test through the ingress controller's ClusterIP
kubectl get service -n ingress-nginx
kubectl run curl-test --image=curlimages/curl --rm -it --restart=Never \
  -- curl -v -H "Host: myapp.example.com" http://<ingress-controller-clusterip>/

Still Not Working?
Check that DNS points to the Ingress Controller’s external IP. If kubectl get ingress shows an ADDRESS, that IP must match your DNS:
kubectl get ingress my-ingress -n my-namespace
# NAME CLASS HOSTS ADDRESS PORTS
# my-ingress nginx myapp.example.com 1.2.3.4 80, 443
# Verify DNS
nslookup myapp.example.com
dig myapp.example.com

Check the LoadBalancer Service for the ingress controller:
kubectl get service -n ingress-nginx ingress-nginx-controller
# TYPE: LoadBalancer — must have an EXTERNAL-IP assigned
# If EXTERNAL-IP shows <pending>, the cloud provider has not assigned an IP yet

Verify the Ingress is in the right namespace. An Ingress can only route to Services in its own namespace. Cross-namespace routing requires an ExternalName Service or the Gateway API.
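The ExternalName workaround can be sketched as follows — a stand-in Service in the Ingress's namespace that forwards to the real Service elsewhere. All names here (api-proxy, frontend, backend, api-service) are hypothetical:

```yaml
# Hypothetical setup: the real Service "api-service" lives in namespace
# "backend", while the Ingress lives in "frontend".
apiVersion: v1
kind: Service
metadata:
  name: api-proxy        # Reference this name in the Ingress backend
  namespace: frontend    # Same namespace as the Ingress
spec:
  type: ExternalName
  # In-cluster DNS name of the real Service in the other namespace
  externalName: api-service.backend.svc.cluster.local
```

The Ingress backend then points at api-proxy with the port the real Service exposes. Support for ExternalName backends varies by controller, so verify the behavior against your controller's documentation.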
For related Kubernetes issues, see Fix: Kubernetes CrashLoopBackOff and Fix: kubectl Connection Refused.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.