
Troubleshooting Kong Gateway to Upstream Service Connectivity in Kubernetes

January 2025

When Kong Gateway can't reach your upstream services in Kubernetes, the error messages often don't point directly to the root cause. This guide provides a systematic approach to diagnosing these connectivity issues.

Common Symptoms

Before diving into troubleshooting, identify which symptom you're seeing:

  • 502 Bad Gateway - Kong reached the upstream but got an invalid response
  • 503 Service Unavailable - No healthy upstreams available
  • 504 Gateway Timeout - Upstream didn't respond in time
  • Connection refused - Nothing listening on the target port
  • DNS resolution failed - Service name can't be resolved
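
Kong's proxy and error logs usually record which upstream it was trying to reach and why the attempt failed, which helps map the status code to a cause. A quick way to scan them (the app=kong label matches the pod selector used later in this guide; adjust it to your deployment's labels):

# Scan recent Kong logs for upstream connection errors
kubectl logs -n kong -l app=kong --tail=200 | grep -i upstream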

Step 1: Verify the Upstream Service Exists

First, confirm the target service exists and has endpoints:

# Check if the service exists
kubectl get svc my-service -n my-namespace

# Check if there are endpoints (pods backing the service)
kubectl get endpoints my-service -n my-namespace

# Describe for more details
kubectl describe svc my-service -n my-namespace

If the endpoints output shows <none>, either the service's selector doesn't match any pod labels or the matching pods aren't Ready.
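
To confirm or rule out a selector mismatch, compare the service's selector with the labels on the pods you expect it to target (my-service and my-namespace are placeholders, as throughout this guide):

# Print the service's label selector
kubectl get svc my-service -n my-namespace -o jsonpath='{.spec.selector}'

# List pods with their labels to see whether any actually match
kubectl get pods -n my-namespace --show-labels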

Step 2: Test Connectivity from Kong's Pod

Shell into a Kong pod and test connectivity directly:

# Get a Kong pod name
KONG_POD=$(kubectl get pods -n kong -l app=kong -o jsonpath='{.items[0].metadata.name}')

# Shell into it
kubectl exec -it $KONG_POD -n kong -- /bin/sh

# Test DNS resolution
nslookup my-service.my-namespace.svc.cluster.local

# Test HTTP connectivity
curl -v http://my-service.my-namespace.svc.cluster.local:8080/health

If DNS resolution fails, check CoreDNS. If curl fails, continue to the next steps.
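
Some Kong images are minimal and may not include nslookup. If that's the case, one option is to attach an ephemeral debug container that shares the Kong pod's network namespace (ephemeral containers require Kubernetes 1.23+):

# Run the DNS test from a throwaway busybox container attached to the Kong pod
kubectl debug -it $KONG_POD -n kong --image=busybox:1.36 -- \
  nslookup my-service.my-namespace.svc.cluster.local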

Step 3: Check Network Policies

NetworkPolicies can silently block traffic between namespaces:

# List network policies in the target namespace
kubectl get networkpolicies -n my-namespace

# Describe them to see ingress/egress rules
kubectl describe networkpolicy -n my-namespace

If policies exist, ensure they allow ingress from the Kong namespace. A permissive policy for Kong might look like:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kong-ingress
  namespace: my-namespace
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kong
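
The namespaceSelector above matches on kubernetes.io/metadata.name, a label that Kubernetes 1.22+ sets on every namespace automatically. Confirm it's present on the Kong namespace and spelled the way the policy expects:

# The kong namespace should carry kubernetes.io/metadata.name=kong
kubectl get namespace kong --show-labels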

Step 4: Verify Kong's Service Configuration

Check how Kong has the upstream configured:

# If using Kong Ingress Controller, check the Ingress resource
kubectl get ingress my-ingress -n my-namespace -o yaml

# Check Kong's internal configuration via Admin API
kubectl exec -it $KONG_POD -n kong -- curl localhost:8001/services
kubectl exec -it $KONG_POD -n kong -- curl localhost:8001/upstreams
kubectl exec -it $KONG_POD -n kong -- curl localhost:8001/upstreams/my-upstream/targets

Common configuration issues:

  • Using my-service instead of my-service.my-namespace.svc.cluster.local
  • Wrong port number in the upstream configuration
  • Protocol mismatch (HTTP vs HTTPS)
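
To spot these quickly, pull just the host, port, and protocol Kong has stored for the service (assumes jq is installed on your workstation; substitute the service name reported by the /services listing above):

# Extract the upstream host, port, and protocol from Kong's Admin API
kubectl exec $KONG_POD -n kong -- curl -s localhost:8001/services/my-service | \
  jq '{host, port, protocol, path}'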

Step 5: Check Pod Readiness

Services only route to pods that pass readiness probes:

# Check pod status
kubectl get pods -n my-namespace -l app=my-app

# Look for readiness probe failures
kubectl describe pod my-pod -n my-namespace | grep -A 10 "Readiness"

# Check pod logs for startup issues
kubectl logs my-pod -n my-namespace
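
If the probe itself is misconfigured, the pod never becomes Ready even when the application is healthy. A minimal HTTP readiness probe, placed under the container entry in the Deployment spec, looks like this (the /health path and port 8080 are placeholders and must match what your container actually serves):

readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10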

Step 6: DNS and CoreDNS Troubleshooting

If DNS resolution is failing:

# Check CoreDNS pods are running
kubectl get pods -n kube-system -l k8s-app=kube-dns

# Check CoreDNS logs for errors
kubectl logs -n kube-system -l k8s-app=kube-dns

# Test DNS from a debug pod
kubectl run dns-test --image=busybox:1.36 --rm -it --restart=Never -- \
  nslookup my-service.my-namespace.svc.cluster.local

DNS search domains matter. Inside pods, you can usually use short names like my-service.my-namespace, but in some configurations you need the full .svc.cluster.local suffix.
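
You can see exactly which search domains and nameserver a pod is given by inspecting its resolv.conf:

# Show the DNS configuration injected into the Kong pod
kubectl exec $KONG_POD -n kong -- cat /etc/resolv.conf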

Step 7: Check for Service Mesh Interference

If you're running Istio, Linkerd, or another service mesh, sidecar proxies can affect traffic:

# Check if pods have sidecar containers
kubectl get pods my-pod -n my-namespace -o jsonpath='{.spec.containers[*].name}'

# Check Istio VirtualService/DestinationRule if applicable
kubectl get virtualservice,destinationrule -n my-namespace

Service meshes may require specific configuration to allow Kong to communicate with mesh-enabled services.
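
With Istio in particular, a frequent culprit is strict mTLS: if the upstream namespace enforces it and Kong runs outside the mesh, connections are reset during the TLS handshake. The relevant policies are Istio CRDs, so this check only applies if Istio is installed:

# Look for PeerAuthentication policies enforcing STRICT mTLS
kubectl get peerauthentication -A -o wide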

Step 8: Port and Protocol Verification

Verify the service port configuration matches what the pod exposes:

# Check service port configuration
kubectl get svc my-service -n my-namespace -o yaml | grep -A 5 "ports:"

# Check what the container actually listens on
kubectl exec -it my-pod -n my-namespace -- netstat -tlnp
# or
kubectl exec -it my-pod -n my-namespace -- ss -tlnp

Common mismatches:

  • Service targetPort doesn't match container port
  • Service expects HTTP but container speaks HTTPS (or vice versa)
  • gRPC services need HTTP/2 configuration
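
A quick way to compare the two sides is to print the Service's targetPort next to the ports the container declares (my-service and my-pod are placeholders; containerPort is informational, so also check what the process actually listens on as shown above):

# Ports the Service forwards traffic to
kubectl get svc my-service -n my-namespace -o jsonpath='{.spec.ports[*].targetPort}'

# Ports the container declares
kubectl get pod my-pod -n my-namespace -o jsonpath='{.spec.containers[*].ports[*].containerPort}'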

Debugging Checklist

Quick checklist for Kong upstream connectivity issues:

  1. Service exists with correct selector: kubectl get svc,endpoints
  2. Pods are Running and Ready: kubectl get pods
  3. DNS resolves from Kong pod: nslookup service.namespace.svc
  4. No blocking NetworkPolicies: kubectl get networkpolicies
  5. Kong config uses correct host/port: Check Admin API
  6. Port matches between Service and Pod: Compare targetPort
  7. No service mesh interference: Check sidecars
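
The read-only parts of this checklist are easy to run as a single script; SERVICE, NAMESPACE, PORT, and the app=kong label are placeholders to adjust:

#!/bin/sh
# Quick read-only sweep of the most common failure points
SERVICE=my-service
NAMESPACE=my-namespace
PORT=8080

kubectl get svc,endpoints "$SERVICE" -n "$NAMESPACE"
kubectl get pods -n "$NAMESPACE"
kubectl get networkpolicies -n "$NAMESPACE"

KONG_POD=$(kubectl get pods -n kong -l app=kong -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$KONG_POD" -n kong -- \
  curl -sv --max-time 5 "http://$SERVICE.$NAMESPACE.svc.cluster.local:$PORT/health"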

Conclusion

Most Kong upstream connectivity issues in Kubernetes come down to DNS resolution, NetworkPolicies, or port/protocol mismatches. Following this systematic approach helps isolate the layer where the problem occurs, making it faster to identify and fix.

If you're still stuck after working through these steps, Kong's debug logging can provide additional insights:

# Enable debug logging temporarily (dynamic log-level endpoint, Kong Gateway 3.1+)
kubectl exec -it $KONG_POD -n kong -- \
  curl -X PUT localhost:8001/debug/node/log-level/debug
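
While debug logging is on, reproduce the failing request and follow Kong's logs, then put the level back when you're done (notice is Kong's default):

# Follow Kong's logs while reproducing the failing request
kubectl logs -f $KONG_POD -n kong

# Revert the log level afterwards
kubectl exec -it $KONG_POD -n kong -- \
  curl -X PUT localhost:8001/debug/node/log-level/notice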
