
I'm trying to set up an RKE2 cluster for testing, with one server and one worker. I have two VMs spawned with VirtualBox, both with a host-only adapter. Their IPs on this network are:

192.168.56.6 server
192.168.56.8 worker

They can ping each other without issue.

Then I installed RKE2 on the server with the following /etc/rancher/rke2/config.yaml file:

advertise-address: 192.168.56.6
node-ip: 192.168.56.6

and for the worker:

server: https://192.168.56.6:9345
token: <the token>
node-ip: 192.168.56.8
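
For completeness, these configs are picked up when the services start, and the token is the one from the server (assuming the default systemd units set up by the install script):

# on the server
sudo systemctl enable --now rke2-server.service
sudo cat /var/lib/rancher/rke2/server/node-token   # <the token> used on the worker

# on the worker
sudo systemctl enable --now rke2-agent.service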

Then I deployed two pods to check that everything was working, with kubectl create deployment website --replicas=2 --image=httpd.

Here is what I got with kubectl get pods -o wide:

NAME                       READY   STATUS    RESTARTS      AGE   IP           NODE         NOMINATED NODE   READINESS GATES
website-64c99996dd-7hqbx   1/1     Running   0             65m   10.42.1.6    k8s-node1    <none>           <none>
website-64c99996dd-d5jhh   1/1     Running   0             65m   10.42.0.14   k8s-server   <none>           <none>
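
As a side test, I could also expose the deployment as a ClusterIP service and curl the service IP, which exercises kube-proxy on top of the pod network; something like:

kubectl expose deployment website --port=80
kubectl get svc website   # note the CLUSTER-IP
curl -m 5 http://<cluster-ip>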

From the server I can curl 10.42.0.14 but not 10.42.1.6; the latter hangs without ever answering.
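
Concretely, from the server node (with a timeout so the hang is visible):

curl -m 5 http://10.42.0.14   # pod on k8s-server: answers
curl -m 5 http://10.42.1.6    # pod on k8s-node1: times out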

So I tried traceroute to 10.42.1.6 with the -I and -T options, but I only get * * *.
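
That is, something along the lines of:

traceroute -I 10.42.1.6        # ICMP echo probes
traceroute -T -p 80 10.42.1.6  # TCP SYN probes to the httpd port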

I guess I have a network issue somewhere, but I can't figure out where. Let's focus on the server node. It has many interfaces:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:4f:d8:f1 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 79108sec preferred_lft 79108sec
    inet6 fd00::a00:27ff:fe4f:d8f1/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 86328sec preferred_lft 14328sec
    inet6 fe80::a00:27ff:fe4f:d8f1/64 scope link 
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:bf:92:a8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.6/24 metric 100 brd 192.168.56.255 scope global dynamic enp0s8
       valid_lft 510sec preferred_lft 510sec
    inet6 fe80::a00:27ff:febf:92a8/64 scope link 
       valid_lft forever preferred_lft forever
4: cali33641a78019@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-fa91ec46-db7e-54e7-9266-298391d2b780
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
6: cali8c6403bf9e0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-4a046156-52f9-ccdf-5d21-392520cef693
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
13: calid84d9ea07f9@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-7d49ebbf-f7ca-3944-dd29-b4fc18a694ec
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
14: cali63f63f0f177@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-d5abc0f8-8d28-332c-38bf-33bb14239204
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
16: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether fa:60:07:06:6a:41 brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::f860:7ff:fe06:6a41/64 scope link 
       valid_lft forever preferred_lft forever
17: calie5ddd4d7c4f@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-2a09b275-1e3d-4081-8a3b-6927149f24e6
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
19: caliabb41827da1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-85dba725-8372-2bf0-472b-85f04172234d
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
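
Since flannel.1 is a VXLAN device (hence the 1450 MTU), one check I can run while curling the remote pod is to watch for the encapsulated traffic (canal/flannel defaults to UDP port 8472 on Linux) and see which physical interface it actually leaves on:

sudo tcpdump -ni enp0s8 udp port 8472   # host-only network, the expected path
sudo tcpdump -ni enp0s3 udp port 8472   # NAT network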

The routes are:

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    100    0        0 enp0s3
10.0.2.0        0.0.0.0         255.255.255.0   U     100    0        0 enp0s3
_gateway        0.0.0.0         255.255.255.255 UH    100    0        0 enp0s3
10.0.2.3        0.0.0.0         255.255.255.255 UH    100    0        0 enp0s3
10.42.0.3       0.0.0.0         255.255.255.255 UH    0      0        0 cali33641a78019
10.42.0.4       0.0.0.0         255.255.255.255 UH    0      0        0 cali8c6403bf9e0
10.42.0.9       0.0.0.0         255.255.255.255 UH    0      0        0 calid84d9ea07f9
10.42.0.10      0.0.0.0         255.255.255.255 UH    0      0        0 cali63f63f0f177
10.42.0.12      0.0.0.0         255.255.255.255 UH    0      0        0 calie5ddd4d7c4f
10.42.0.14      0.0.0.0         255.255.255.255 UH    0      0        0 caliabb41827da1
10.42.1.0       10.42.1.0       255.255.255.0   UG    0      0        0 flannel.1
192.168.56.0    0.0.0.0         255.255.255.0   U     100    0        0 enp0s8

and ip r returns:

default via 10.0.2.2 dev enp0s3 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev enp0s3 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev enp0s3 proto dhcp scope link src 10.0.2.15 metric 100 
10.42.0.3 dev cali33641a78019 scope link 
10.42.0.4 dev cali8c6403bf9e0 scope link 
10.42.0.9 dev calid84d9ea07f9 scope link 
10.42.0.10 dev cali63f63f0f177 scope link 
10.42.0.12 dev calie5ddd4d7c4f scope link 
10.42.0.14 dev caliabb41827da1 scope link 
10.42.1.0/24 via 10.42.1.0 dev flannel.1 onlink 
192.168.56.0/24 dev enp0s8 proto kernel scope link src 192.168.56.6 metric 100
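
The cross-node route goes through flannel.1, so the VXLAN forwarding entries seem worth checking too; as far as I understand, the dst there should be the worker's host-only address (192.168.56.8), not a NAT address:

ip -d link show flannel.1       # "local" shows this node's VXLAN endpoint
bridge fdb show dev flannel.1   # "dst" shows the remote node's endpoint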

I don't see any issues here. There are also no errors in the canal pods; here are the IPs of all these pods:

NAMESPACE     NAME                                                    READY   STATUS      RESTARTS      AGE    IP             NODE         NOMINATED NODE   READINESS GATES
default       website-64c99996dd-7hqbx                                1/1     Running     0             118m   10.42.1.6      k8s-node1    <none>           <none>
default       website-64c99996dd-d5jhh                                1/1     Running     0             118m   10.42.0.14     k8s-server   <none>           <none>
kube-system   cloud-controller-manager-k8s-server                     1/1     Running     0             154m   192.168.56.6   k8s-server   <none>           <none>
kube-system   etcd-k8s-server                                         1/1     Running     0             153m   192.168.56.6   k8s-server   <none>           <none>
kube-system   helm-install-rke2-canal-q67nd                           0/1     Completed   0             154m   192.168.56.6   k8s-server   <none>           <none>
kube-system   helm-install-rke2-coredns-vfcmv                         0/1     Completed   0             153m   192.168.56.6   k8s-server   <none>           <none>
kube-system   helm-install-rke2-ingress-nginx-rngb5                   0/1     Completed   0             153m   10.42.0.8      k8s-server   <none>           <none>
kube-system   helm-install-rke2-metrics-server-spt2c                  0/1     Completed   0             153m   10.42.0.7      k8s-server   <none>           <none>
kube-system   helm-install-rke2-runtimeclasses-s68mv                  0/1     Completed   0             153m   10.42.0.6      k8s-server   <none>           <none>
kube-system   helm-install-rke2-snapshot-controller-7zjcg             0/1     Completed   0             153m   10.42.0.5      k8s-server   <none>           <none>
kube-system   helm-install-rke2-snapshot-controller-crd-q4s72         0/1     Completed   0             153m   10.42.0.2      k8s-server   <none>           <none>
kube-system   kube-apiserver-k8s-server                               1/1     Running     0             154m   192.168.56.6   k8s-server   <none>           <none>
kube-system   kube-controller-manager-k8s-server                      1/1     Running     0             154m   192.168.56.6   k8s-server   <none>           <none>
kube-system   kube-proxy-k8s-node1                                    1/1     Running     0             143m   192.168.56.8   k8s-node1    <none>           <none>
kube-system   kube-proxy-k8s-server                                   1/1     Running     0             154m   192.168.56.6   k8s-server   <none>           <none>
kube-system   kube-scheduler-k8s-server                               1/1     Running     0             154m   192.168.56.6   k8s-server   <none>           <none>
kube-system   rke2-canal-6z9kx                                        2/2     Running     0             143m   192.168.56.8   k8s-node1    <none>           <none>
kube-system   rke2-canal-ms4v2                                        2/2     Running     0             153m   192.168.56.6   k8s-server   <none>           <none>
kube-system   rke2-coredns-rke2-coredns-55bdf87668-kgmrq              1/1     Running     0             143m   10.42.1.3      k8s-node1    <none>           <none>
kube-system   rke2-coredns-rke2-coredns-55bdf87668-ww75w              1/1     Running     0             153m   10.42.0.4      k8s-server   <none>           <none>
kube-system   rke2-coredns-rke2-coredns-autoscaler-65c8c6bd64-mrznq   1/1     Running     0             153m   10.42.0.3      k8s-server   <none>           <none>
kube-system   rke2-ingress-nginx-controller-27hk4                     1/1     Running     0             142m   10.42.1.2      k8s-node1    <none>           <none>
kube-system   rke2-ingress-nginx-controller-xsth5                     1/1     Running     0             152m   10.42.0.12     k8s-server   <none>           <none>
kube-system   rke2-metrics-server-58ff89f9c7-7x7vt                    1/1     Running     0             152m   10.42.0.9      k8s-server   <none>           <none>
kube-system   rke2-snapshot-controller-58dbcfd956-wcgsj               1/1     Running     0             152m   1

So my question is: what should I do from here? I'm running out of ideas.
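
Happy to add more output if it helps, for instance the addresses the nodes registered with:

kubectl get nodes -o wide   # INTERNAL-IP should be 192.168.56.x on both nodes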
