
Running Rocky 9.3, attempting to set up a bridge on a public range of IPs for use with virtual machines.

The same hardware previously ran CentOS 7.8 with all of this working; I am now trying to set it up the same way on Rocky 9.3.

# /etc/sysconfig/network-scripts/ifcfg-brpub0 (pointopoint, Hetzner Standard)
DEVICE=brpub0
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=none
IPADDR=xxx.yyy.89.1
NETMASK=255.255.255.240
STP=off
DELAY=0
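On Rocky 9, ifcfg files are deprecated in favor of NetworkManager keyfiles, so it may be worth creating the bridge through nmcli instead. This is a sketch only; the connection name and the placeholder address mirror the ifcfg file above:

```shell
# Hedged nmcli equivalent of the ifcfg-brpub0 file above.
# Replace xxx.yyy.89.1/28 with the real address/prefix.
nmcli connection add type bridge ifname brpub0 con-name brpub0 \
    ipv4.method manual ipv4.addresses xxx.yyy.89.1/28 \
    bridge.stp no bridge.forward-delay 0

# Activate the new connection.
nmcli connection up brpub0
```

`nmcli connection show brpub0` can then be used to confirm the settings took effect.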

gives

3: br0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 6a:1f:e1:44:ec:b8 brd ff:ff:ff:ff:ff:ff
    inet xxx.yyy.89.1/28 brd xxx.yyy.89.15 scope global noprefixroute br0
       valid_lft forever preferred_lft forever

I installed Vagrant and VirtualBox the same as before, and although I can ping the bridge on xxx.yyy.89.1, I can't ping the public IP address allocated to the VM.

For reference, on another machine that is still on CentOS 7, the equivalent is:

3: brpub0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 5e:ce:35:cf:d2:90 brd ff:ff:ff:ff:ff:ff
    inet aa.bb.207.145/28 brd aa.bb.207.159 scope global brpub0
       valid_lft forever preferred_lft forever
    inet6 fe80::5**e:35**:f**f:d290/64 scope link
       valid_lft forever preferred_lft forever

Why is my bridge down? Whatever I do with nmcli and ip link/device commands, it stays down. How do I get it up?

--- Edit ---

As requested by @larsks, ip link show master brpub0 on the working machine has no output:

[root@prod-vmhost-114 ~]# ip link show master brpub0
[root@prod-vmhost-114 ~]#

Very detailed description here

2 Answers


What happened between CentOS 7 and Rocky Linux 9? A newer systemd version.

With older systemd, bridges were left alone when created. Newer systemd monitors virtual interface creation and, as soon as it detects a new bridge, assigns it a "stable random" MAC address instead of leaving the fully random MAC address set by the kernel. This has the side effect of immediately setting the bridge's operational state (operstate) to DOWN, because assigning a MAC address changes the bridge's default mode of operation, and the bridge has no bridge port (yet).
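The mechanism behind this, assuming a stock systemd-udevd configuration (worth verifying on your own system), is the `MACAddressPolicy=` setting in systemd's default link file. It can be inspected, and overridden per interface, roughly like this:

```shell
# The "stable random" MAC comes from systemd-udevd's default link file
# (path and contents may differ between distributions):
grep -H MACAddressPolicy /usr/lib/systemd/network/99-default.link

# To opt a specific bridge out, a .link drop-in can be created, e.g.
# (file name and interface name here are illustrative):
# /etc/systemd/network/10-brpub0.link
#   [Match]
#   OriginalName=brpub0
#   [Link]
#   MACAddressPolicy=none
```

With `MACAddressPolicy=none`, the kernel's own random MAC is left in place and the CentOS 7 behavior described below should reappear.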

So if you create the same bridge twice on CentOS 7, you can expect its MAC address to change each time, similar to this:

# ip link add name testbr0 up type bridge
# ip link show dev testbr0
3: testbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether de:6b:22:95:cf:76 brd ff:ff:ff:ff:ff:ff
# ip link del dev testbr0
# ip link add name testbr0 up type bridge
# ip link show testbr0
4: testbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 02:75:85:6b:a8:d4 brd ff:ff:ff:ff:ff:ff

In this example: de:6b:22:95:cf:76 the first time, 02:75:85:6b:a8:d4 the second.

Note how the examples above show the bridge operstate is UNKNOWN.

Now if a MAC address is set on the bridge:

# ip link set dev testbr0 address b6:00:00:00:00:01
# ip link show dev testbr0
3: testbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether b6:00:00:00:00:01 brd ff:ff:ff:ff:ff:ff

This changes the mode of the bridge from "no MAC address assigned" to "MAC address assigned".

Notice how the bridge operstate went from UNKNOWN to DOWN.

Now, on any Linux system running a newer systemd, including Rocky Linux 9, this assignment is done systematically (this can be verified by leaving an ip monitor link running):

# ip link add name testbr0 up type bridge
# ip link show dev testbr0
21: testbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 92:45:24:9e:58:8e brd ff:ff:ff:ff:ff:ff
# ip link del dev testbr0
# ip link add name testbr0 up type bridge
# ip link show testbr0
22: testbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 92:45:24:9e:58:8e brd ff:ff:ff:ff:ff:ff

Notice how the supposed-to-be-random MAC address is the same both times: systemd, whatever network manager is in use, assigned the same MAC address as soon as it detected the bridge was created without an explicitly assigned address.

As above, this makes the operstate DOWN instead of UNKNOWN. While the operstate was UNKNOWN, the bridge was temporarily considered UP, until the first bridge port was assigned to it; from then on the behavior is the same in both cases.

As the bridge's operstate is DOWN, it won't set up a link-local IPv6 address, as seen in this Q/A: linux ipv6 bridge address does not work when mac address is forced, and as seen in OP's examples.

For IPv6 there will be issues; for IPv4 I'm not sure there are visible effects, unless an interface flag such as ignore_routes_with_linkdown is enabled on this interface.
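Both of these conditions can be inspected directly; a quick sketch, with the interface name from OP's setup as an illustrative placeholder:

```shell
# Operstate as the kernel reports it (matches the UNKNOWN/DOWN/UP
# states shown in `ip link` output above):
cat /sys/class/net/brpub0/operstate

# Whether IPv4 routes through a link-down interface are ignored
# (0 = routes still used, the kernel default):
sysctl net.ipv4.conf.brpub0.ignore_routes_with_linkdown
```

If `ignore_routes_with_linkdown` is 0, IPv4 traffic should still flow even while the bridge shows DOWN.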

If you want the bridge to always be in operstate UP, for consistent behavior, add a bridge port that is UP. If you don't want to use a real one, add a dummy one. Continuing the example above:

# ip link add name testbr0p0 up master testbr0 type dummy
# ip link show testbr0
22: testbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 92:45:24:9e:58:8e brd ff:ff:ff:ff:ff:ff

The bridge interface's operstate is now UP.
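To make the dummy-port workaround survive reboots on a NetworkManager-managed system such as Rocky 9, the dummy interface can also be created as a connection; a sketch, where the connection and interface names are illustrative:

```shell
# Create a persistent dummy interface enslaved to the bridge.
nmcli connection add type dummy ifname brpub0-p0 con-name brpub0-p0 \
    master brpub0 slave-type bridge
```

NetworkManager then recreates the dummy port, and keeps the bridge operstate UP, on every boot.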


More information about systemd's change of behavior, and about how a bridge behaves when explicitly assigned (or not assigned) a MAC address, can be found in these Unix & Linux SE, SF and SU Q/As where I answered:

A.B

I realized after leaving my comment that you were working with VirtualBox, whereas I am using libvirt. These two tools handle network bridging in fundamentally different ways.

Libvirt and Linux bridge devices

Libvirt operates using Linux bridge devices. When you create a "network", libvirt creates a Linux bridge device. Out of the box, you would typically have a network named "default" associated with a bridge device named virbr0. With no interfaces attached to the bridge, it looks like this:

$ ip link show virbr0
6: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether e6:ff:5f:32:ed:d7 brd ff:ff:ff:ff:ff:ff

After we start a virtual machine, we see a new interface attached to the bridge:

$ ip link show master virbr0
7: veth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 2e:b6:db:05:a0:e1 brd ff:ff:ff:ff:ff:ff

And the bridge interface is now fully up:

$ ip link show virbr0
6: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e6:ff:5f:32:ed:d7 brd ff:ff:ff:ff:ff:ff

VirtualBox and bridged adapters

All of the above is completely academic, because a VirtualBox "bridged adapter" has nothing to do with Linux bridge devices. When you connect a VirtualBox interface to a "bridged adapter", VirtualBox creates its own analog of a bridge device to attach the virtual interface to the selected host interface.

In other words, if you have an interface eth2 on a public network and you want to connect a VirtualBox machine to that network, you do not create a Linux bridge device, add eth2 to the bridge, and then connect your virtual machine to the bridge.

Instead, you select eth2 as the target of your bridged adapter in VirtualBox.
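Outside of Vagrant, the same selection can be made directly with VBoxManage; a sketch, where the VM name is an illustrative placeholder:

```shell
# Attach the VM's first NIC to the host's eth2 as a bridged adapter
# ("myvm" is a placeholder for the actual VM name).
VBoxManage modifyvm "myvm" --nic1 bridged --bridgeadapter1 eth2
```

This is the command-line equivalent of picking eth2 in the VM's network settings in the VirtualBox GUI.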

The equivalent Vagrant configuration would look something like this:

Vagrant.configure("2") do |config|
  config.vm.box = "generic/rocky9"
  config.vm.network "public_network", bridge: "eth2"
end

If we watch the kernel logs while running vagrant up, we'll see something like:

Feb 07 09:36:28 fizzgig kernel: vboxdrv: 00000000a064d924 VBoxDDR0.r0
Feb 07 09:36:28 fizzgig kernel: VBoxNetFlt: attached to 'eth2' / 60:7d:09:44:3e:a9
Feb 07 09:36:28 fizzgig kernel: r8152 2-1:1.0 eth2: entered promiscuous mode
Feb 07 09:36:28 fizzgig kernel: r8152 2-1:1.0 eth2: Promiscuous mode enabled

And at this point, the virtual machine should have access to the public network.

larsks