
I have a k8s test cluster with 1 master and 2 worker nodes. I created a PersistentVolumeClaim resource using the Hetzner Cloud CSI driver. This driver only allows you to create ReadWriteOnce volumes.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: metadata
  namespace: development
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hcloud-volumes

I also created a Deployment resource with one Pod that uses that volume.

apiVersion: apps/v1
kind: Deployment
...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: app1
          image: some-image
          volumeMounts:
            - name: metadata
              mountPath: /mnt/metadata
      volumes:
        - name: metadata
          persistentVolumeClaim:
            claimName: metadata

Overall everything works great. The pod is tied to the specific node that the volume is attached to (for example worker-2). The problems start when I turn off worker-2. The Deployment tries to recreate the pod on another node (worker-1), but gets stuck in a deadlock, since the PersistentVolumeClaim is still held by the old pod. The specific error is: Multi-Attach error for volume "pvc-f587e0c2-25f1-4792-815e" Volume is already used by pod(s) app1-5f9c445458

Is there any way to have the volume automatically detach from the old pod with this volume type? I would like a Deployment that can easily migrate a pod from one node to another, along with its associated volume.

1 Answer


By default, a Deployment uses a RollingUpdate strategy, which means new pods are spun up before the old pods are terminated. This causes problems with ReadWriteOnce (RWO) volumes and results in the Multi-Attach error you reported in your question.
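
For reference, leaving the strategy unset is roughly equivalent to spelling out the Deployment API defaults like this:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%

With a non-zero maxSurge, the replacement pod is scheduled while the old pod still holds the volume attachment, which is exactly the situation the Multi-Attach error describes.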

You can modify this behavior by setting spec.strategy.type to Recreate. In this case, the old pods are terminated first, before the new pods are spawned. This frees up the PVC so that the volume can be attached to the new node.
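
Applied to your manifest, the change is a single strategy block; everything else stays as in your Deployment above:

apiVersion: apps/v1
kind: Deployment
...
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: app1
  ...

Note that with Recreate there is a short window of downtime while the old pod terminates and the volume is detached and re-attached on the new node.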

See the official Kubernetes documentation on Deployment strategies for details.

larsks