
I'm facing a connectivity issue in my Kubernetes cluster involving NetworkPolicy. I have a frontend service (ssv-portal-service) trying to talk to a backend service (contract-voucher-service-service) via the ingress controller.

It works fine when I define the egress rule using a label selector that allows traffic to pods labelled app.kubernetes.io/name: ingress-nginx.

However, when I replace that with an IP-based egress rule using the ingress controller's external IP (in ipBlock.cidr), the connection fails with a timeout.

  • My cluster is an AKS cluster and I am using Azure CNI.

  • My cluster is a private cluster, and I am using an Azure internal load balancer (with an IP of 10.203.53.251); a sketch of that Service is included below.
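For context, the ingress controller is exposed through the internal load balancer with a Service along these lines (I'm reconstructing this from memory, so the exact name and annotation set may differ slightly in my cluster; the IP is the one mentioned above):

  # Approximate sketch of the ingress controller Service (internal Azure LB)
  apiVersion: v1
  kind: Service
  metadata:
    name: ingress-nginx-controller
    namespace: default
    annotations:
      # Tells AKS to provision an internal (private) load balancer
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  spec:
    type: LoadBalancer
    loadBalancerIP: 10.203.53.251
    selector:
      app.kubernetes.io/name: ingress-nginx
    ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443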

Frontend service's network policy:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  . . .
  spec:
    podSelector:
      matchLabels:
        app: contract-voucher-service-service
    policyTypes:
    - Ingress
    - Egress
    egress:
    - ports:
      - port: 80
        protocol: TCP
      - port: 443
        protocol: TCP
      to:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: default
        podSelector:
          matchLabels:
            app.kubernetes.io/name: ingress-nginx
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: default
        podSelector:
          matchLabels:
            app.kubernetes.io/name: ingress-nginx
      ports:
      - port: 80
        protocol: TCP
      - port: 8080
        protocol: TCP
      - port: 443
        protocol: TCP
    - from:
      - podSelector:
          matchLabels:
            app: ssv-portal-service
      ports:
      - port: 8080
        protocol: TCP
      - port: 1337
        protocol: TCP

and the backend service's network policy:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  . . .
  spec:
    podSelector:
      matchLabels:
        app: ssv-portal-service
    policyTypes:
    - Ingress
    - Egress

    egress:
    - ports:
      - port: 8080
        protocol: TCP
      - port: 1337
        protocol: TCP
      to:
      - podSelector:
          matchLabels:
            app: contract-voucher-service-service
    - ports:
      - port: 80
        protocol: TCP
      - port: 443
        protocol: TCP
      to:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: default
        podSelector:
          matchLabels:
            app.kubernetes.io/name: ingress-nginx
    - ports:
      - port: 53
        protocol: UDP
      to:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: kube-system
        podSelector:
          matchLabels:
            k8s-app: kube-dns
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: default
        podSelector:
          matchLabels:
            app.kubernetes.io/name: ingress-nginx
      ports:
      - port: 80
        protocol: TCP
      - port: 8080
        protocol: TCP
      - port: 443
        protocol: TCP

The above works fine.

But if, instead of the label selectors for nginx, I use the private LB IP as below, it doesn't work (the frontend service cannot reach the backend):

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  . . .
  spec:
    podSelector:
      matchLabels:
        app: contract-voucher-service-service
    policyTypes:
    - Ingress
    - Egress
    egress:
    - ports:
      - port: 80
        protocol: TCP
      - port: 443
        protocol: TCP
      to:
      - ipBlock:
          cidr: 10.203.53.251/32
  . . .

Is there a reason why traffic allowed via an ipBlock fails but works via a podSelector with labels? Does Kubernetes treat ingress controller IPs differently in egress rules?
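The only IP-based alternative I can think of is to allow the subnet the ingress controller pods themselves get their IPs from (under Azure CNI the pods sit directly in the VNet) rather than the load balancer IP, roughly as sketched below. The 10.203.53.0/24 range is only a guess at my pod subnet, and I'd still like to understand why the LB IP itself isn't matched:

  # Hypothetical variant: allow the ingress controller pods' subnet instead
  # of the load balancer IP (10.203.53.0/24 is a guess at my pod subnet).
  egress:
  - ports:
    - port: 80
      protocol: TCP
    - port: 443
      protocol: TCP
    to:
    - ipBlock:
        cidr: 10.203.53.0/24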

Any help understanding this behavior would be appreciated.
