
I have a small bare-metal Kubernetes cluster (bootstrapped via kubeadm) that I'm trying to install Cilium on. All of the nodes have two physical network interfaces: a main interface, which is a bridge (br0), and a secondary management interface (eth0). When I install Cilium, I lose connectivity on the management interface, i.e. ping stops working, as does other traffic that uses that interface. I've tried setting the `devices` config option to br0 only, and also listing both interfaces in `devices` with `nodePort.directRoutingDevice` set to br0. I'm sure I'm doing something stupid; does anyone have any idea what it might be?
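For reference, here's a sketch of the Helm installs I've been trying (the key names are as I understand them from the Cilium Helm chart; the release name and chart repo here are just placeholders):

```shell
# Sketch of what I've been running; release/repo names are placeholders.
# Attempt 1: restrict Cilium to the bridge interface only.
helm upgrade --install cilium cilium/cilium -n kube-system \
  --set kubeProxyReplacement=true \
  --set devices='{br0}'

# Attempt 2: both interfaces in devices, direct routing pinned to br0.
helm upgrade --install cilium cilium/cilium -n kube-system \
  --set kubeProxyReplacement=true \
  --set devices='{br0,eth0}' \
  --set nodePort.directRoutingDevice=br0
```

Both variants give the same result: eth0 goes dead as soon as the agent comes up.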

Additional info: I'm using Cilium in kube-proxy replacement mode. The loss of connectivity happens both in encapsulation mode and with native routing. If I reboot a node, connectivity returns briefly until the Cilium pod starts up; because of this I don't believe it's an issue with routing or iptables. Cilium itself works fine: connectivity between pods, services, nodes, etc. all works perfectly, but it seems to nerf the secondary interface on the nodes. The nodes are running Ubuntu 24.04.
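In case it's useful, this is roughly how I've been checking which devices the running agent actually picked up (assuming the agent DaemonSet is named `cilium` in `kube-system`, and that the agent's status output lists its devices; I may be misreading that output):

```shell
# Sketch: ask the agent which network devices it attached to.
kubectl -n kube-system exec ds/cilium -- cilium status --verbose | grep -i device

# And confirm from the node side that eth0 is still up with its address.
ip addr show dev eth0
```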
