
I have an Azure Kubernetes Service (AKS) cluster that I built using Terraform. As part of provisioning the cluster, I also created an Application Gateway that is integrated with the cluster.

resource "azurerm_kubernetes_cluster" "aks" {
  name                                           = var.aks_cluster_name
  resource_group_name            = var.aks_rg
  location                                       = var.aks_location
  dns_prefix                                     = var.dns_prefix
  automatic_channel_upgrade = var.upgrade_channel
  private_dns_zone_id                    = data.azurerm_private_dns_zone.akszone.id
  private_cluster_enabled                = true
  oidc_issuer_enabled                    = true
  workload_identity_enabled     = true
  local_account_disabled          = true
  azure_policy_enabled                   = true

ingress_application_gateway { gateway_name = "${var.aks_cluster_name}-agw" subnet_id = data.azurerm_subnet.AppGw.id } . .. ... }

The Azure subscriptions and VNet design follow the hub-and-spoke model, with a user-defined route (UDR) redirecting traffic to the network virtual appliances (NVAs) in the HUB subscription. The AKS cluster resides in the DEV subscription, and its network is configured to use Azure CNI (not CNI Overlay). The App Gateway is configured with both a public and a private IP address. Here is my YAML:

For HTTP:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo3
  namespace: dev-demos
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo3
  template:
    metadata:
      labels:
        app: demo3
    spec:
      containers:
      - name: demo3
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "Welcome to Azure Kubernetes Service (AKS) - Demo3"
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"

apiVersion: v1
kind: Service
metadata:
  name: demo3
  namespace: dev-demos
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: demo3


apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: dev-demos
  annotations:
    appgw.ingress.kubernetes.io/use-private-ip: "false"
spec:
  ingressClassName: azure-application-gateway
  rules:
  - host: testhost.contoso.com
    http:
      paths:
      - path: /demo3
        pathType: Exact
        backend:
          service:
            name: demo3
            port:
              number: 80

This works when a client request is sent to http://testhost.contoso.com. I have a similar manifest that enables TLS over the public IP (roughly the sketch below), and that also works with no issue.
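For context, the TLS variant looks roughly like this sketch; the Ingress name ingress-tls and the secret name testhost-tls are placeholders for our real values:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tls
  namespace: dev-demos
  annotations:
    appgw.ingress.kubernetes.io/use-private-ip: "false"
spec:
  ingressClassName: azure-application-gateway
  tls:
  - hosts:
    - testhost.contoso.com
    secretName: testhost-tls    # placeholder: Kubernetes TLS secret holding the certificate
  rules:
  - host: testhost.contoso.com
    http:
      paths:
      - path: /demo3
        pathType: Exact
        backend:
          service:
            name: demo3
            port:
              number: 80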

When I flipped the annotation to use-private-ip: "true" (the only change, shown in the snippet below) and attempted to browse the service from an internal network, I got no response from the App Gateway at all, even at the TCP level. In a network trace I can see only the client requests, with no response from the App Gateway. I first thought the UDR was redirecting my traffic to the NVA, but then I realized that was not the case because the UDR is not associated with the App Gateway subnet. On the network security group (NSG) front, I have two TCP allow rules configured to permit inbound traffic from ANY to both the public and private IP of the App Gateway.
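For clarity, this is the only difference between the working public manifest and the failing private one (same Ingress as above, only the annotation value changes):

metadata:
  annotations:
    # flipped from "false" to "true" so AGIC publishes the listener on the App Gateway's private frontend IP
    appgw.ingress.kubernetes.io/use-private-ip: "true"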

What am I missing?

We have been working with Microsoft Support for three weeks now, have used LLMs, and have talked to everyone we can in our organization, but we are unable to resolve this problem. Any suggestion would be helpful.
