
I have set up a Kubernetes 1.30.3 cluster with one control-plane node (192.168.0.14) and one worker node (192.168.0.15), running Talos 1.7.6 in VMs under KVM on TrueNAS SCALE.

I'm trying to use TrueCharts to set up a few basic services via Helm. First, MetalLB: I have successfully configured it to use an IP pool of 192.168.0.16-192.168.0.30.

Now, when I add a service (in this case adminer) and configure it as service type LoadBalancer, it gets the IP 192.168.0.16 and listens on port 18080. However, connecting to that port just times out.

My values.yaml for metallb-config is:

ipAddressPools:
  - name: apps
    autoAssign: true
    avoidBuggyIPs: false
    addresses:
      - 192.168.0.16-192.168.0.30
L2Advertisements:
  - name: l2adv
    addressPools:
      - apps
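
If I understand the chart correctly, these values should render to MetalLB custom resources roughly like the following (my assumption; I haven't dumped the rendered manifests, and the namespace is taken from my cluster):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: apps
  namespace: metallb
spec:
  addresses:
    - 192.168.0.16-192.168.0.30
  autoAssign: true
  avoidBuggyIPs: false
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2adv
  namespace: metallb
spec:
  ipAddressPools:
    - apps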

and for adminer:

service:
  main:
    type: LoadBalancer

kubectl describe svc adminer:

Name:                     adminer
Namespace:                default
Labels:                   app=adminer-10.1.4
                          app.kubernetes.io/instance=adminer
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=adminer
                          app.kubernetes.io/version=latest
                          helm-revision=1
                          helm.sh/chart=adminer-10.1.4
                          release=adminer
                          service.name=main
Annotations:              meta.helm.sh/release-name: adminer
                          meta.helm.sh/release-namespace: default
                          metallb.universe.tf/allow-shared-ip: adminer
                          metallb.universe.tf/ip-allocated-from-pool: apps
Selector:                 app.kubernetes.io/instance=adminer,app.kubernetes.io/name=adminer,pod.name=main
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.233.191
IPs:                      10.100.233.191
LoadBalancer Ingress:     192.168.0.16
Port:                     main  18080/TCP
TargetPort:               8080/TCP
Endpoints:
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason       Age   From                Message
  ----    ------       ----  ----                -------
  Normal  IPAllocated  9s    metallb-controller  Assigned IP ["192.168.0.16"]

nmap -Pn -p 18080 192.168.0.16:

Starting Nmap 7.95 ( https://nmap.org ) at 2024-08-22 11:04 CEST
Nmap scan report for 192.168.0.16
Host is up.

PORT      STATE    SERVICE
18080/tcp filtered unknown

Nmap done: 1 IP address (1 host up) scanned in 2.03 seconds

What piece of the puzzle am I missing? I have tried different services with the same result: I can't connect to the ports the services expose. I am on the same subnet as the cluster and as the LB IP pool (which is outside the DHCP range).

For the sake of completeness, kubectl get all -A:

NAMESPACE     NAME                                        READY   STATUS      RESTARTS      AGE
default       pod/adminer-86d856f947-c94x8                1/1     Running     0             6m52s
kube-system   pod/coredns-64b67fc8fd-j4nm8                1/1     Running     0             42h
kube-system   pod/coredns-64b67fc8fd-rllsg                1/1     Running     0             42h
kube-system   pod/coredns-64b67fc8fd-rp74n                0/1     Completed   0             42h
kube-system   pod/coredns-64b67fc8fd-xfg48                0/1     Completed   0             42h
kube-system   pod/kube-apiserver-talos-pwj-68i            1/1     Running     0             42h
kube-system   pod/kube-controller-manager-talos-pwj-68i   1/1     Running     1 (42h ago)   42h
kube-system   pod/kube-flannel-bq8pj                      1/1     Running     0             42h
kube-system   pod/kube-flannel-zvw66                      1/1     Running     1 (42h ago)   42h
kube-system   pod/kube-proxy-5znrp                        1/1     Running     1 (42h ago)   42h
kube-system   pod/kube-proxy-bsqr6                        1/1     Running     0             42h
kube-system   pod/kube-scheduler-talos-pwj-68i            1/1     Running     1 (42h ago)   42h
metallb       pod/metallb-controller-66fddf5ff-9jd79      1/1     Running     0             7m23s

NAMESPACE     NAME                              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                  AGE
default       service/adminer                   LoadBalancer   10.100.233.191   192.168.0.16   18080/TCP                6m52s
default       service/kubernetes                ClusterIP      10.96.0.1        <none>         443/TCP                  42h
kube-system   service/kube-dns                  ClusterIP      10.96.0.10       <none>         53/UDP,53/TCP,9153/TCP   42h
metallb       service/metallb-webhook-service   ClusterIP      10.98.180.212    <none>         443/TCP                  7m23s

NAMESPACE     NAME                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-flannel       2         2         2       2            2           <none>                   42h
kube-system   daemonset.apps/kube-proxy         2         2         2       2            2           <none>                   42h
metallb       daemonset.apps/metallb-speaker    2         0         0       0            0           kubernetes.io/os=linux   7m23s

NAMESPACE     NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/adminer              1/1     1            1           6m52s
kube-system   deployment.apps/coredns              2/2     2            2           42h
metallb       deployment.apps/metallb-controller   1/1     1            1           7m23s

NAMESPACE     NAME                                           DESIRED   CURRENT   READY   AGE
default       replicaset.apps/adminer-86d856f947             1         1         1       6m52s
kube-system   replicaset.apps/coredns-64b67fc8fd             2         2         2       42h
metallb       replicaset.apps/metallb-controller-66fddf5ff   1         1         1       7m23s

Johan

1 Answer


I had the same problem when installing Kubernetes v1.31.2: the MetalLB speaker DaemonSet couldn't scale up (DESIRED 2, CURRENT 0, exactly as in your kubectl get all output). Since you are using L2 advertisements, no speaker pods means no node answers ARP for the LoadBalancer IP, so connections simply time out. Describing the DaemonSet is helpful for diagnosing this. In my case, the problem was the Pod Security Admission controller enforcing policies that prevented the DaemonSet from creating pods with the privileges the speaker needs; Talos, as used here, enables Pod Security Admission by default. Setting the following label on the metallb namespace resolved the issue by allowing privileged pods:

kubectl label namespace metallb pod-security.kubernetes.io/enforce=privileged --overwrite
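
To confirm the diagnosis, describe the DaemonSet; when Pod Security Admission is what's blocking it, the events show FailedCreate errors stating that the speaker pod violates the namespace's PodSecurity level (the DaemonSet name below is taken from your output):

kubectl -n metallb describe daemonset metallb-speaker

Once the label is in place, the speaker pods should be created on every node and the LoadBalancer IP should start answering:

kubectl -n metallb get daemonset metallb-speaker
kubectl -n metallb get pods -o wide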