I have set up a small MicroK8s cluster, for now consisting of only 2 machines (1 master, 1 worker).
I can successfully ping both machines on their internal IPs, but any pod running on the other node cannot be reached via its pod IP. Consider the following pod in state Running:
Name:             whoami-b577bd888-lw465
Namespace:        default
Priority:         0
Service Account:  default
Node:             rpi001/10.0.0.142
Start Time:       Fri, 06 Sep 2024 22:11:31 +0200
Labels:           app=whoami
                  pod-template-hash=b577bd888
Annotations:      cni.projectcalico.org/containerID: 9e113afb4361315cbc6a3d31293c2267f4c633f4fee95314e0d98d3fda2d24fc
                  cni.projectcalico.org/podIP: 10.1.191.66/32
                  cni.projectcalico.org/podIPs: 10.1.191.66/32
Status:           Running
IP:               10.1.191.66
IPs:
  IP:  10.1.191.66
Controlled By:  ReplicaSet/whoami-b577bd888
Containers:
  whoami:
    Container ID:   containerd://3e1965112785da0989b7cf39f6c94cbde2228c508bb27cb6763af6763848e07b
    Image:          containous/whoami
    Image ID:       docker.io/containous/whoami@sha256:7d6a3c8f91470a23ef380320609ee6e69ac68d20bc804f3a1c6065fb56cfa34e
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 06 Sep 2024 22:11:56 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jskpc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-jskpc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
On the master:
main@localhost:~$ ping rpi001
PING rpi001.home (10.0.0.142) 56(84) bytes of data.
64 bytes from rpi001.home (10.0.0.142): icmp_seq=1 ttl=64 time=0.632 ms
64 bytes from rpi001.home (10.0.0.142): icmp_seq=2 ttl=64 time=0.537 ms
ping 10.1.191.66 does not get any replies. Is this intentional? I added the worker node using the microk8s join command with the --worker flag.
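For context on why plain pings would need extra routing: 10.1.191.66 is not on my 10.0.0.0/24 LAN but, as far as I understand, inside MicroK8s' default pod CIDR 10.1.0.0/16, which is only reachable through the Calico overlay. A small sketch checking the CIDR membership (the 10.1.0.0/16 default is my assumption):

```shell
# Check whether an IPv4 address falls inside a given CIDR, to confirm
# that the pod IP belongs to the pod network rather than the LAN.
in_cidr() {
  # convert dotted-quad addresses to 32-bit integers
  ip_int=$(echo "$1" | awk -F. '{ print $1*16777216 + $2*65536 + $3*256 + $4 }')
  net_int=$(echo "$2" | awk -F. '{ print $1*16777216 + $2*65536 + $3*256 + $4 }')
  # build the netmask from the prefix length
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip_int & mask )) -eq $(( net_int & mask )) ]
}

in_cidr 10.1.191.66 10.1.0.0 16 && echo "in pod CIDR"     # → in pod CIDR
in_cidr 10.1.191.66 10.0.0.0 24 || echo "not on the LAN"  # → not on the LAN
```

So the master can only reach that address if something (the Calico VXLAN overlay, or an explicit route) carries 10.1.0.0/16 traffic between the nodes.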
main@localhost:~$ kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
localhost   Ready    <none>   42h   v1.30.4
rpi001      Ready    <none>   36h   v1.30.4
rpi001@rpi001:~ $ microk8s status
This MicroK8s deployment is acting as a node in a cluster.
Please use the control plane node
The pod can be reached from the worker node itself, though:
rpi001@rpi001:~ $ ping 10.1.191.66
PING 10.1.191.66 (10.1.191.66) 56(84) bytes of data.
64 bytes from 10.1.191.66: icmp_seq=1 ttl=64 time=0.277 ms
I followed this tutorial, which does not mention any further steps. So the question is: is this separation of nodes by design? Would it be good practice to add some IP routes to enable this? If yes, how should this be done?
The reason for my question: if the cluster nodes cannot reach each other's pods, how is an ingress controller supposed to handle incoming traffic from outside the cluster?
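For reference, the way I understand it should eventually work is a Service in front of the pod, with the ingress controller exposed via a NodePort or the host network. A minimal NodePort sketch (the selector matches my deployment's app=whoami label; the nodePort value is an arbitrary assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  type: NodePort
  selector:
    app: whoami            # the pod's label from the describe output above
  ports:
    - port: 80             # cluster-internal service port
      targetPort: 80       # container port 80/TCP from the pod spec
      nodePort: 30080      # assumed value; must fall within 30000-32767
```

With something like this, each node's LAN IP should answer on port 30080 via kube-proxy, regardless of whether pod IPs are routable from the host. But I would still like to understand whether direct pod-IP reachability between nodes is expected to work.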