
I'm trying to set up distributed RAID 1+0 storage on my k3s cluster, which runs on 5 Raspberry Pi 4s running Armbian (jammy). My test PVC has been stuck in a Pending state for over 24 hours now, with the event: waiting for a volume to be created, either by external provisioner "openebs.io/provisioner-raw-block" or manually created by system administrator. It seems like a worker was asked to provision the volume... but it isn't doing so, and I'm not sure why.

My master node is atlas00 and the workers are atlas02, 04, 06, & 08. Each worker has an HDD attached, mounted at /mnt/eos<worker # / 2 - 1> (so atlas02 has a drive mounted at /mnt/eos0, atlas04 has one at /mnt/eos1, and so on). Below are my PVC and StorageClass descriptions, the list of OpenEBS pods, and my nodes; I'm not sure what else would even be helpful for troubleshooting this. If you have any insight into what might be going on, I'd appreciate your wisdom - thanks!
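(For reference, these are the commands I use to sanity-check the attached drive on a worker; /dev/sda is the device path referenced in the StorageClass further down:)

root@atlas02:~# lsblk /dev/sda
root@atlas02:~# df -h /mnt/eos0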

root@atlas00:~# kubectl describe pvc eos-pvc-test
Name:          eos-pvc-test
Namespace:     default
StorageClass:  eos-raid10
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: openebs.io/provisioner-raw-block
               volume.kubernetes.io/storage-provisioner: openebs.io/provisioner-raw-block
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                Age                     From                         Message
  ----    ------                ----                    ----                         -------
  Normal  ExternalProvisioning  3m16s (x5983 over 24h)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "openebs.io/provisioner-raw-block" or manually created by system administrator
root@atlas00:~# kubectl describe sc eos-raid10
Name:            eos-raid10
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"eos-raid10"},"parameters":{"cas.openebs.io/config":"- name: StorageType\n  value: \"raid-10\"\n- name: ReplicaAutoCreation\n  value: \"true\"\n","openebs.io/block-device":"[\n  {\"nodeID\": \"atlas01\", \"devicePath\": \"/dev/sda\"},\n  {\"nodeID\": \"atlas02\", \"devicePath\": \"/dev/sda\"},\n  {\"nodeID\": \"atlas03\", \"devicePath\": \"/dev/sda\"},\n  {\"nodeID\": \"atlas04\", \"devicePath\": \"/dev/sda\"}\n]\n","openebs.io/capacity":"10G","openebs.io/cstor-target-ips":"192.168.42.52,192.168.42.54,192.168.42.56,192.168.42.58","openebs.io/cstor-target-path":"/mnt/eos0,/mnt/eos1,/mnt/eos2,/mnt/eos3","openebs.io/jiva-replica-count":"4","openebs.io/jiva-replica-image":"quay.io/openebs/jiva:0.8.0","openebs.io/readwrite":"rw","openebs.io/storage-engine":"jiva","openebs.io/storage-pool":"default","openebs.io/volume-monitor":"true","targetPortal":"192.168.42.52:3260,192.168.42.54:3260,192.168.42.56:3260,192.168.42.58:3260"},"provisioner":"openebs.io/provisioner-raw-block"}

Provisioner:           openebs.io/provisioner-raw-block
Parameters:            cas.openebs.io/config=- name: StorageType
                         value: "raid-10"
                       - name: ReplicaAutoCreation
                         value: "true"
                       ,openebs.io/block-device=[{"nodeID": "atlas01", "devicePath": "/dev/sda"}, {"nodeID": "atlas02", "devicePath": "/dev/sda"}, {"nodeID": "atlas03", "devicePath": "/dev/sda"}, {"nodeID": "atlas04", "devicePath": "/dev/sda"}],openebs.io/capacity=10G,openebs.io/cstor-target-ips=192.168.42.52,192.168.42.54,192.168.42.56,192.168.42.58,openebs.io/cstor-target-path=/mnt/eos0,/mnt/eos1,/mnt/eos2,/mnt/eos3,openebs.io/jiva-replica-count=4,openebs.io/jiva-replica-image=quay.io/openebs/jiva:0.8.0,openebs.io/readwrite=rw,openebs.io/storage-engine=jiva,openebs.io/storage-pool=default,openebs.io/volume-monitor=true,targetPortal=192.168.42.52:3260,192.168.42.54:3260,192.168.42.56:3260,192.168.42.58:3260
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
root@atlas00:~# kubectl get pods -n openebs
NAME                                              READY   STATUS    RESTARTS       AGE
openebs-ndm-z9vqb                                 1/1     Running   0              25h
openebs-cstor-cspc-operator-86b95cb7b-9gnst       1/1     Running   0              25h
openebs-ndm-ss88z                                 1/1     Running   0              25h
openebs-ndm-6vhwk                                 1/1     Running   0              25h
openebs-ndm-operator-698457995-ppp78              1/1     Running   0              25h
openebs-cstor-cvc-operator-bf7b8d559-lhl8c        1/1     Running   0              25h
openebs-ndm-5kjtk                                 1/1     Running   0              25h
openebs-cstor-csi-node-dwsgj                      2/2     Running   0              25h
openebs-cstor-csi-node-r8tc4                      2/2     Running   0              25h
openebs-ndm-x7cbh                                 1/1     Running   0              25h
openebs-cstor-csi-node-j59q9                      2/2     Running   0              25h
openebs-cstor-csi-node-xv9th                      2/2     Running   0              25h
openebs-cstor-admission-server-7d89bb76bf-bmd2m   1/1     Running   0              25h
openebs-cstor-csi-node-xt9jv                      2/2     Running   0              25h
openebs-localpv-provisioner-7576d8cbcf-6nbl8      1/1     Running   14 (18m ago)   25h
openebs-cstor-csi-controller-0                    6/6     Running   12 (18m ago)   25h
root@atlas00:~# kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
atlas08   Ready    worker                 26h   v1.26.3+k3s1
atlas06   Ready    worker                 26h   v1.26.3+k3s1
atlas04   Ready    worker                 26h   v1.26.3+k3s1
atlas00   Ready    control-plane,master   26h   v1.26.3+k3s1
atlas02   Ready    worker                 26h   v1.26.3+k3s1
root@atlas00:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.2 LTS
Release:        22.04
Codename:       jammy

Snorghma

1 Answer


If you have a strict firewall set up on your Kubernetes nodes, provisioning of a PV from a StorageClass backed by a cStor storage pool may fail. The pool can be created without any issue, and even the StorageClass is created, but the PVs may stay in a Pending state indefinitely. You may need to check the openebs-provisioner log.
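For example (a sketch based on the pod names in your kubectl get pods -n openebs output above; the exact pod and container names can differ between OpenEBS releases):

kubectl logs -n openebs deploy/openebs-localpv-provisioner --tail=100
kubectl logs -n openebs openebs-cstor-csi-controller-0 -c csi-provisioner --tail=100

Provisioning errors, if any, usually show up there with the PVC name attached.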

Workaround:

This issue has currently only been observed if the underlying node uses a network bridge and the setting net.bridge.bridge-nf-call-iptables=1 is present in /etc/sysctl.conf. That setting is required in some Kubernetes installations, such as the Rancher Kubernetes Engine (RKE).
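You can check whether that setting is in effect on a node (e.g. one of your atlas workers) with:

sysctl net.bridge.bridge-nf-call-iptables
grep -r bridge-nf-call-iptables /etc/sysctl.conf /etc/sysctl.d/

A value of 1 means bridged traffic is passed through iptables, which is the condition described above.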

To avoid this issue, open the port 5656/tcp on the nodes that run the OpenEBS API pod. Alternatively, removing the network bridge might work.
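For example, if ufw manages the firewall on the nodes (it often does on Ubuntu-based images; use the equivalent iptables/nftables rule otherwise), run on each node that hosts the OpenEBS API pod:

sudo ufw allow 5656/tcp

or, with plain iptables:

sudo iptables -A INPUT -p tcp --dport 5656 -j ACCEPT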

NB: this solution is from the OpenEBS docs: https://openebs.io/docs/main/troubleshooting/volume-provisioning#persistent-volumes-indefinitely-remain-in-pending-state

Nonso