I use Helm to deploy charts on my Kubernetes cluster, but since a day ago I can no longer deploy a new chart or upgrade an existing one.

Indeed, each time I use Helm I get an error message telling me that it is not possible to install or upgrade resources.

If I run helm install --name foo . -f values.yaml --namespace foo-namespace, I get this output:

Error: release foo failed: the server could not find the requested resource

If I run helm upgrade --install foo . -f values.yaml --namespace foo-namespace or helm upgrade foo . -f values.yaml --namespace foo-namespace, I get this error:

Error: UPGRADE FAILED: "foo" has no deployed releases

I don't really understand why.

This is my helm version:

Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

On my Kubernetes cluster, Tiller is deployed with the same version. This is what I get when I run kubectl describe pods tiller-deploy-84b... -n kube-system:

Name:               tiller-deploy-84b8...
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               k8s-worker-1/167.114.249.216
Start Time:         Tue, 26 Feb 2019 10:50:21 +0100
Labels:             app=helm
                    name=tiller
                    pod-template-hash=84b...
Annotations:        <none>
Status:             Running
IP:                 <IP_NUMBER>
Controlled By:      ReplicaSet/tiller-deploy-84b8...
Containers:
  tiller:
    Container ID:   docker://0302f9957d5d83db22...
    Image:          gcr.io/kubernetes-helm/tiller:v2.12.3
    Image ID:       docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:cab750b402d24d...
    Ports:          44134/TCP, 44135/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Tue, 26 Feb 2019 10:50:28 +0100
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from helm-token-... (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  helm-token-...:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  helm-token-...
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                   Message
  ----    ------     ----  ----                   -------
  Normal  Scheduled  26m   default-scheduler      Successfully assigned kube-system/tiller-deploy-84b86cbc59-kxjqv to worker-1
  Normal  Pulling    26m   kubelet, k8s-worker-1  pulling image "gcr.io/kubernetes-helm/tiller:v2.12.3"
  Normal  Pulled     26m   kubelet, k8s-worker-1  Successfully pulled image "gcr.io/kubernetes-helm/tiller:v2.12.3"
  Normal  Created    26m   kubelet, k8s-worker-1  Created container
  Normal  Started    26m   kubelet, k8s-worker-1  Started container

UPDATE

Following the answer's suggestion, I ran these two commands, helm upgrade foo . -f values.yaml --namespace foo-namespace --force and helm upgrade --install foo . -f values.yaml --namespace foo-namespace --force, and I still get an error:

UPGRADE FAILED
ROLLING BACK
Error: failed to create resource: the server could not find the requested resource
Error: UPGRADE FAILED: failed to create resource: the server could not find the requested resource

Note that foo-namespace already exists, so the error does not come from the namespace name. Indeed, if I run helm list, I can see that the foo release is in a FAILED status.

Has anyone faced the same issue?

french_dev

2 Answers


Yes, this happens frequently when debugging Helm releases. The problem occurs when a previously failed release prevents you from upgrading it.

If you run helm ls you should see a release in the FAILED state. (You might have deleted it, in which case it will still show up with helm ls -a.) Such a release cannot be upgraded with the normal approach of having Helm compare the new YAML to the old YAML to detect which objects to change, because it is a failed release.

As this normally happens when trying to get something new running, I typically run helm delete --purge on the failed release. That is slightly drastic, though, so you may want to first try the upgrade again with --force.

https://github.com/helm/helm/pull/2280
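
For example, assuming the release name foo, the chart directory and the values file from the question, a possible recovery sequence would be:

helm ls -a                       # confirm the release shows up as FAILED (or DELETED)
helm delete --purge foo          # remove the failed release and its stored history
helm install --name foo . -f values.yaml --namespace foo-namespace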

simbo1905

This can happen with releases in the FAILED state, as mentioned by simbo1905 in their answer.

Another case is if there's a previously deleted but not purged release with the same name.

Doing another delete with the purge option on the release will free the name for reuse.

helm ls -a
helm ls -a | grep -e NAME -e name_of_release
helm delete --purge name_of_release

Note: the issue https://github.com/helm/helm/issues/972 suggests that there is a parameter to force reusing the same name; however, it also suggests not to use it in production (i.e. do not automate with --replace unless you know what you're doing).

> helm install --help
[...]
    --replace    re-use the given name, even if that name is already used. This is unsafe in production
[...]
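
For illustration only (and not for production, per the warning above), reusing the release name from the question with --replace would look something like:

helm install --name foo --replace . -f values.yaml --namespace foo-namespace
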
Thomas BDX