Good afternoon. I ran into a problem: I cannot get MetalLB working on k3s. I have a cluster with 3 masters and 3 workers:

NAME            STATUS   ROLES                       AGE    VERSION
master-node-1   Ready    control-plane,etcd,master   4d2h   v1.30.5+k3s1
master-node-2   Ready    control-plane,etcd,master   4d1h   v1.31.1+k3s1
master-node-3   Ready    control-plane,etcd,master   4d1h   v1.31.1+k3s1
worker-node-1   Ready    <none>                      4d1h   v1.31.1+k3s1
worker-node-2   Ready    <none>                      4d     v1.31.1+k3s1
worker-node-3   Ready    <none>                      4d     v1.31.1+k3s1

Initially, I tried to install MetalLB this way:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml

config

# config.yml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.170-192.168.0.180

With this approach the LoadBalancer worked, but it was unstable: roughly half the time the site was unreachable. Then I found out that this approach is deprecated, so I tried the new one:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.5/config/manifests/metallb-native.yaml

config

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: k3s-lb-pool
  namespace: metallb
spec:
  addresses:
  - 192.168.0.170-192.168.0.180
  autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: k3s-lb-pool
  namespace: metallb

but I get an error

Error from server (InternalError): error when creating "metalLB-config.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://metallb-webhook-service.metallb.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": no endpoints available for service "metallb-webhook-service"
Error from server (InternalError): error when creating "metalLB-config.yaml": Internal error occurred: failed calling webhook "l2advertisementvalidationwebhook.metallb.io": failed to call webhook: Post "https://metallb-webhook-service.metallb.svc:443/validate-metallb-io-v1beta1-l2advertisement?timeout=10s": no endpoints available for service "metallb-webhook-service"

yet the pods in the metallb-system namespace are running properly:

root@master-node-1:~# kubectl get pod -n=metallb-system
NAME                          READY   STATUS    RESTARTS        AGE
controller-86f5578878-9ddsr   1/1     Running   2 (3m31s ago)   4m57s
speaker-5j47x                 1/1     Running   0               4m56s
speaker-6drpd                 1/1     Running   0               4m56s
speaker-ccpmq                 1/1     Running   0               4m56s
speaker-jfpch                 1/1     Running   1 (3m12s ago)   4m56s
speaker-mps8b                 1/1     Running   0               4m57s
speaker-t9lx2                 1/1     Running   0               4m56s

(I apologize in advance; English is not my native language.)

2 Answers


Welcome to ServerFault

By default, k3s already includes a LoadBalancer called ServiceLB (previously known as Klipper LoadBalancer). I suspect both were running simultaneously, which is known to cause problems: one of them being that your services of type LoadBalancer can only be reached sporadically. This is because both ServiceLB and MetalLB use L2 advertisements under the hood by default.

Because of this, you will need to disable ServiceLB if you want to use MetalLB in combination with k3s (as described in their documentation). If your cluster is already running, you can add the --disable servicelb flag to the systemd unit file located at /etc/systemd/system/k3s.service like this:

...
ExecStart=/usr/local/bin/k3s \
    server \
    '--server' \
    '...' \
    '--disable' \
    'servicelb' \
    '--disable' \
    'traefik' \
    '...' \
    'node-role.kubernetes.io/master=true:NoSchedule' \

(You can use multiple --disable flags, as shown here with servicelb and traefik.)
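After editing the unit file, systemd has to reload it and k3s has to be restarted for the flag to take effect. A sketch of those steps (assuming the server service name is k3s; on agent nodes it is typically k3s-agent, and the grep check relies on ServiceLB pods using the svclb- name prefix):

```shell
# Reload systemd so the edited unit file is picked up, then restart k3s
sudo systemctl daemon-reload
sudo systemctl restart k3s

# Verify ServiceLB is gone: no svclb-* pods should remain
kubectl get pods -A | grep svclb
```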

The second problem you've described is a bit harder for me to debug. I suspect you did not wait long enough before trying to create the IPAddressPool and L2Advertisement, but that's just a guess based on the fact that your metallb-webhook-service does not seem to have any endpoints. Maybe you can share some logs from the MetalLB controller pod? Either way, I would try disabling ServiceLB and then reinstalling MetalLB first.
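To check the missing-endpoints theory yourself, you could inspect the webhook service and the controller logs (these commands assume the default metallb-system namespace and the controller deployment name from the official manifest):

```shell
# The ENDPOINTS column should list the controller pod's IP;
# "<none>" means the webhook has nothing to forward requests to
kubectl -n metallb-system get endpoints metallb-webhook-service

# Controller logs often show certificate or startup errors
# that explain why the webhook is not serving yet
kubectl -n metallb-system logs deployment/controller
```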

Eleasar

I finally solved this problem. First, install MetalLB itself (be careful: depending on the version, the config file format changes):

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.5/config/manifests/metallb-native.yaml
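Before applying any custom resources, it helps to wait until the MetalLB pods are actually ready; otherwise the validating webhook has no endpoints yet. The MetalLB installation docs suggest something along these lines:

```shell
# Block until the controller and speakers report Ready (or 90s elapse)
kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=app=metallb \
  --timeout=90s
```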

Then create the pool of IP addresses:

pool-ip.yml

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pool-ip
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.170-192.168.0.200

kubectl -n metallb-system apply -f pool-ip.yml

Then make sure the address pool was actually created:

kubectl -n metallb-system get IPAddressPool

If a similar error occurs:

Error from server (InternalError): error when creating "metalLB-config.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://metallb-webhook-service.metallb.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": no endpoints available for service "metallb-webhook-service"

then run this command, and the error will no longer occur (note that this removes MetalLB's config validation entirely):

kubectl delete validatingwebhookconfigurations metallb-webhook-configuration
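Deleting the validating webhook configuration works, but it permanently disables validation of MetalLB resources. A gentler alternative (a sketch, assuming the default controller deployment name) is to restart the controller and wait for the webhook service to gain an endpoint:

```shell
# Restart the controller so it re-registers behind the webhook service
kubectl -n metallb-system rollout restart deployment controller
kubectl -n metallb-system rollout status deployment controller

# The webhook service should now show an endpoint instead of <none>
kubectl -n metallb-system get endpoints metallb-webhook-service
```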

After that, create the L2 advertisement that announces these addresses:

l2-advertisement.yml

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: k3s-lb-pool
  namespace: metallb-system
spec:
  ipAddressPools:
  - pool-ip

kubectl apply -f l2-advertisement.yml

After that, check that the advertisement was created and references the pool:

kubectl get l2advertisement -A

After all this, the LoadBalancer should be available. (Once again, English is not my native language, so if something is unclear, I apologize.)
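To verify end to end, you can expose a throwaway deployment as a LoadBalancer service and check that it receives an address from the pool (nginx-test is a hypothetical name used only for this check):

```shell
# Create a test deployment and expose it via a LoadBalancer service
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --type=LoadBalancer --port=80

# EXTERNAL-IP should be an address from 192.168.0.170-192.168.0.200
kubectl get svc nginx-test

# Clean up afterwards
kubectl delete svc,deployment nginx-test
```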