
I deployed MinIO on Kubernetes, but I get errors when I create a tenant.

I checked the logs, and they look like this:

INFO: Unable to use the drive http://storage-pool-0-0.storage-hl.minio.svc.cluster.local:9000/export: drive not found, will be retried
INFO: Unable to use the drive http://storage-pool-0-1.storage-hl.minio.svc.cluster.local:9000/export: drive not found, will be retried
INFO: Unable to use the drive http://storage-pool-0-2.storage-hl.minio.svc.cluster.local:9000/export: drive not found, will be retried
INFO: Unable to use the drive http://storage-pool-0-3.storage-hl.minio.svc.cluster.local:9000/export: drive not found, will be retried
INFO: Waiting for a minimum of 2 drives to come online (elapsed 23m13s)

That is all the logs show. I tried executing kubectl exec -it storage-pool-0-3 -n minio -- curl -I http://storage-pool-0-0.storage-hl.minio.svc.cluster.local:9000.

The output was:

Defaulted container "minio" out of: minio, sidecar, validate-arguments (init)
HTTP/1.1 400 Bad Request
Accept-Ranges: bytes
Content-Length: 225
Content-Type: application/xml
Server: MinIO
Vary: Origin
Date: Sun, 21 Jul 2024 23:13:57 GMT

What should I do to fix this problem so I can use MinIO normally?

My Kubernetes status is:

root@dev-master-1:~/minio/mc# kubectl get all -n minio
NAME                   READY   STATUS    RESTARTS   AGE
pod/storage-pool-0-0   2/2     Running   0          32m
pod/storage-pool-0-1   2/2     Running   0          32m
pod/storage-pool-0-2   2/2     Running   0          32m
pod/storage-pool-0-3   2/2     Running   0          32m

NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
service/minio             LoadBalancer   10.101.4.180    192.168.3.240   80:32676/TCP     32m
service/storage-console   LoadBalancer   10.103.42.214   192.168.3.241   9090:31544/TCP   32m
service/storage-hl        ClusterIP      None            <none>          9000/TCP         32m

NAME                              READY   AGE
statefulset.apps/storage-pool-0   4/4     32m

My CNI is Cilium, and the load balancer is MetalLB.

1 Answer


I encountered the same problem.

MinIO couldn't create files; looking into it in detail, the volume had run out of inodes.

Solution: enlarge the disks to get more inodes.
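On Kubernetes, the usual way to enlarge the drives is to expand the tenant's PersistentVolumeClaims, which only works if their StorageClass has allowVolumeExpansion: true. A rough sketch, where the PVC names and the 10Gi target size are illustrative guesses based on the StatefulSet naming in the question (list the real names first with kubectl get pvc -n minio):

```shell
# Assumption: PVC names follow a <volume>-<pod> pattern derived from the
# question's StatefulSet; verify with: kubectl get pvc -n minio
# 10Gi is only an example target size.
for i in 0 1 2 3; do
  kubectl -n minio patch pvc export-storage-pool-0-$i --type merge \
    -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
done
```

If the StorageClass does not allow expansion, you would instead have to recreate the drives at a larger size (or with a filesystem formatted with more inodes) and let MinIO heal.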

kex minio-1 -- bash        # "kex" here is shorthand for kubectl exec -it
bash-5.1$ df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          63G   47G   14G  78% /
tmpfs            64M     0   64M   0% /dev
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/vdb        4.9G  2.1G  2.8G  44% /export          <===============
/dev/vda2        63G   47G   14G  78% /data
shm              64M     0   64M   0% /dev/shm
tmpfs            13G   12K   13G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           7.9G     0  7.9G   0% /proc/acpi
tmpfs           7.9G     0  7.9G   0% /proc/scsi
tmpfs           7.9G     0  7.9G   0% /sys/firmware
bash-5.1$ df -i
Filesystem      Inodes   IUsed   IFree IUse% Mounted on
overlay        4194304 1150483 3043821   28% /
tmpfs          2047670      17 2047653    1% /dev
tmpfs          2047670      17 2047653    1% /sys/fs/cgroup
/dev/vdb        327680  327680       0  100% /export <===============
/dev/vda2      4194304 1150483 3043821   28% /data
shm            2047670       1 2047669    1% /dev/shm
tmpfs          2047670       9 2047661    1% /run/secrets/kubernetes.io/serviceaccount
tmpfs          2047670       1 2047669    1% /proc/acpi
tmpfs          2047670       1 2047669    1% /proc/scsi
tmpfs          2047670       1 2047669    1% /sys/firmware
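To spot this condition quickly, you can filter the df -i output for filesystems whose inode usage has reached 100%. A minimal sketch: the sample data below is pasted from the session above, and in a live cluster you would pipe kubectl exec ... -- df -i into the same awk filter:

```shell
# Sample df -i output (abridged from the session above); in-cluster you would use:
#   kubectl -n minio exec storage-pool-0-0 -c minio -- df -i
df_output='Filesystem Inodes IUsed IFree IUse% Mounted-on
/dev/vdb 327680 327680 0 100% /export
/dev/vda2 4194304 1150483 3043821 28% /data'

# Print any filesystem whose inode usage is at 100% (skip the header line)
echo "$df_output" | awk 'NR > 1 && $5 + 0 >= 100 { print $1, "out of inodes at", $6 }'
# → /dev/vdb out of inodes at /export
```

Any line this prints is a drive MinIO will fail to write to even though df -h still shows free space.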