
I installed FluxCD via Terraform on my AKS cluster.

If I rename a Service in the Flux repo, it deploys the Service under the new name, but it doesn't delete the old Service.

Is this normal, or do I have to configure this?

My Terraform script to install FluxCD:

resource "azurerm_kubernetes_flux_configuration" "main" {
  name       = "flux"
  namespace  = "flux-system"
  cluster_id = var.aks_cluster_id
  scope      = "cluster"

  git_repository {
    url                      = "https://dev.azure.com/***/_git/gitops"
    reference_type           = "branch"
    reference_value          = var.gitops_repo_branch_name
    https_user               = var.username
    https_key_base64         = base64encode(var.personal_access_token)
    sync_interval_in_seconds = 60
    timeout_in_seconds       = 600
  }

  kustomizations {
    name                      = "kustomization"
    path                      = "clusters/${var.namespace}"
    retry_interval_in_seconds = 60
    sync_interval_in_seconds  = 600
    timeout_in_seconds        = 600
  }

  depends_on = [azurerm_kubernetes_cluster_extension.flux]
}

1 Answer


If you rename a resource in GitOps, Terraform will see it as a "drifted" resource the next time you redeploy your Terraform stack. Terraform does not automatically correct drifted resources, but that isn't a problem here, because Kustomization reconciliation is handled internally by Flux. You can also trigger a reconciliation manually, as sketched below.
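A minimal sketch of both checks from the CLI, assuming the Kustomization is named "kustomization" as in the Terraform above and lives in the flux-system namespace:

# Show any drift Terraform has detected, without applying changes
terraform plan -refresh-only

# Force Flux to fetch the latest commit and re-apply the Kustomization
flux reconcile kustomization kustomization --with-source -n flux-system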

Kustomizations in Flux can be finicky when renaming resources. I'd start by running flux get all and making sure everything is functioning as expected (see below). If it is, I'd open an issue on FluxCD's GitHub repository (they usually respond within 24 hours).
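For example, using standard flux CLI options:

# List Flux sources and kustomizations with their ready status
flux get all

# Same, but across every namespace
flux get all -A

# Surface reconciliation errors from the Flux controllers
flux logs --level=error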

Also make sure you have no finalizers configured on your Service. If it is hooked up through Azure, the Service may carry a finalizer that checks whether the Azure network resource has been deleted (I don't know much about Azure, but this is the case in AWS when a Service is configured as a load balancer). If your Service is stuck in the "Terminating" state, this is likely the cause. To see whether there are finalizers on your Service, run kubectl edit service <service-name> -n <namespace-name> and look for a list of finalizers. If you delete that list, your Service should terminate successfully.
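If you prefer not to use the interactive editor, here is a sketch of the same check and fix with plain kubectl (the names in angle brackets are placeholders):

# Print any finalizers set on the Service
kubectl get service <service-name> -n <namespace-name> -o jsonpath='{.metadata.finalizers}'

# Clear the finalizers so a stuck Service can terminate
kubectl patch service <service-name> -n <namespace-name> --type=merge -p '{"metadata":{"finalizers":null}}'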

If you're still having issues, one possible solution would be to wrap your cluster configuration in a FluxCD HelmRelease. This way, when you rename a resource, you do so inside your Helm release manifests (or the Helm values). Once the release is upgraded, Helm will notice that the resources inside it have changed and will delete and re-create them as necessary.
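A minimal HelmRelease sketch, assuming a chart stored in the same Git repository that the Terraform above configures. The release name and chart path are hypothetical, and the sourceRef name depends on how the GitRepository source was created in your cluster (check with flux get sources git):

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-service                 # hypothetical release name
  namespace: flux-system
spec:
  interval: 5m
  chart:
    spec:
      chart: ./charts/my-service   # hypothetical chart path inside the Git repo
      sourceRef:
        kind: GitRepository
        name: flux                 # verify with: flux get sources git
  values: {}                       # chart-specific values; renames done here or in the chart
                                   # templates are cleaned up by Helm on upgrade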

iamPres