I have a fresh Ubuntu 22.04 guest running in KVM with six virtio interfaces emulating e1000. These are connected to an Open vSwitch, and LACP is configured on both the switch and Ubuntu.

To summarize the problem: when I run tcpdump on any of the interfaces in bond0, I can see the LACP PDUs sent by OVS, but no response from Ubuntu is seen. Ubuntu does not seem to send LACP PDUs of its own either.
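
For reference, the capture can be narrowed to just the LACP traffic with the standard slow-protocols ethertype filter (0x8809, the same ethertype that shows up in the dump further down), something along the lines of:

tcpdump -evni bond0 ether proto 0x8809
tcpdump -evni enp7s0 ether proto 0x8809   # per slave, to see which link the PDUs arrive on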

My configuration details are as follows:

/etc/netplan/00-installer-config.yaml

  bonds:
    bond0:
      addresses: [192.168.201.141/24]
      interfaces:
        - enp2s0
        - enp3s0
        - enp4s0
        - enp5s0
        - enp6s0
        - enp7s0
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 100
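
For completeness, the configuration is applied and checked with the usual netplan/networkd tooling, e.g.:

sudo netplan apply        # or "sudo netplan try" for an automatic rollback
networkctl status bond0   # assuming systemd-networkd is the renderer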

ip link

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:bd:d6:54 brd ff:ff:ff:ff:ff:ff
3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 32:4c:6b:d8:b0:dc brd ff:ff:ff:ff:ff:ff permaddr 52:54:00:aa:fb:bb
4: enp3s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 32:4c:6b:d8:b0:dc brd ff:ff:ff:ff:ff:ff permaddr 52:54:00:0d:61:fc
5: enp4s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 32:4c:6b:d8:b0:dc brd ff:ff:ff:ff:ff:ff permaddr 52:54:00:d4:cc:88
6: enp5s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 32:4c:6b:d8:b0:dc brd ff:ff:ff:ff:ff:ff permaddr 52:54:00:79:8e:a0
7: enp6s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 32:4c:6b:d8:b0:dc brd ff:ff:ff:ff:ff:ff permaddr 52:54:00:24:36:a7
8: enp7s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 32:4c:6b:d8:b0:dc brd ff:ff:ff:ff:ff:ff permaddr 52:54:00:c7:90:3c
9: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 32:4c:6b:d8:b0:dc brd ff:ff:ff:ff:ff:ff

I can see that all links are UP.
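
The bond parameters the kernel actually applied can also be double-checked with the detailed link output, for example:

ip -d link show bond0     # the "bond" line should report mode 802.3ad, miimon 100, lacp_rate, ad_select, ...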

/proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v5.15.0-86-generic

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP active: on
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 32:4c:6b:d8:b0:dc
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 1
        Actor Key: 0
        Partner Key: 1
        Partner Mac Address: 00:00:00:00:00:00

Slave Interface: enp7s0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:c7:90:3c
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: churned
Actor Churned Count: 0
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: 32:4c:6b:d8:b0:dc
    port key: 0
    port priority: 255
    port number: 1
    port state: 77
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 1
<snip>

The above output shows actor and partner PDU details, but tcpdump does not show any PDU being sent by Ubuntu.

Also, the speed and duplex are unknown, most likely because the underlying interface is virtio, but could that be a reason for Ubuntu not sending LACP PDUs?
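
For reference, the per-slave speed/duplex that the bonding driver sees can be checked with ethtool, e.g.:

ethtool enp7s0 | grep -E 'Speed|Duplex'

As far as I can tell, the kernel bonding documentation lists ethtool speed/duplex reporting as a prerequisite for 802.3ad, so "Unknown" here is at least suspicious.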

This is how an interface is defined in KVM:

   <interface type='bridge'>
      <mac address='52:54:00:d4:cc:88'/>
      <source bridge='ovsbr-lacp0'/>
      <virtualport type='openvswitch'>
        <parameters interfaceid='0e117751-fd45-4840-9c30-41ea8f76bdce'/>
      </virtualport>
      <model type='e1000'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </interface>
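
The interface definitions can be inspected or tweaked with the usual libvirt tooling (with <domain> standing in for the VM name), e.g.:

virsh dumpxml <domain> | grep -B2 -A6 "virtualport type"
virsh edit <domain>       # e.g. to change <model type='e1000'/>; the guest needs a power cycle afterwards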

This is the tcpdump output seen both on Open vSwitch and in Ubuntu. fe:54:00:c7:90:3c is the MAC of one of the enslaved interfaces in OVS and a6:a0:01:43:3a:41 is the MAC of the bond interface in OVS.

tcpdump -evni enp7s0

tcpdump: listening on enp7s0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
19:41:47.129221 fe:54:00:c7:90:3c > 01:80:c2:00:00:02, ethertype Slow Protocols (0x8809), length 124: LACPv1, length 110
        Actor Information TLV (0x01), length 20
          System a6:a0:01:43:3a:41, System Priority 65534, Key 15, Port 17, Port Priority 65535
          State Flags [Activity, Aggregation, Collecting, Distributing, Default]
        Partner Information TLV (0x02), length 20
          System 00:00:00:00:00:00, System Priority 0, Key 0, Port 0, Port Priority 0
          State Flags [none]
        Collector Information TLV (0x03), length 16
          Max Delay 0
        Terminator TLV (0x00), length 0

Ubuntu does not seem to read this packet or respond to it, nor does it generate its own LACP PDUs.
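
To rule out the frames being dropped before they reach the bonding driver, the per-slave RX counters can be watched while the PDUs are arriving, e.g.:

ip -s link show enp7s0                  # RX packets / dropped
watch -n1 "ip -s link show enp7s0"      # watch the counters move while OVS transmits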

In dmesg I can see these logs:

[ 4722.457866] bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
[ 4752.513642] bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond

which is strange, because I can see the packets sent by OVS in tcpdump.
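
While debugging, the PDU interval can be shortened from 30 s to 1 s by switching the LACP rate to fast on both ends; in netplan that would be something like:

      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100

and on the OVS side I believe the equivalent is (with <bond> standing in for the OVS bond port name):

sudo ovs-vsctl set port <bond> other_config:lacp-time=fast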

There is one related question here which looks like the situation I am in, but the solution does not apply to my case.

I have checked everything on the OVS end, and this thread shows the results.
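
For reference, these are the kinds of OVS-side checks I mean (again with <bond> as a placeholder for the OVS bond port name):

sudo ovs-appctl lacp/show <bond>
sudo ovs-appctl bond/show <bond>
sudo ovs-vsctl list port <bond>     # lacp should be set to active; other_config holds lacp-time etc.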

I have exhausted my debug-foo, and any clues to get this working will be much appreciated.

X.

PS: I have posted this on Ask Ubuntu but got no response, so I am cross-posting here for a wider audience.

1 Answer


I took another stab at this on another set of VMs and it's now working!

The difference I can see is that I also declared the member interfaces under ethernets in the netplan YAML, i.e.:

network:
  ethernets:
    ens1:
      addresses:
      - 192.168.200.148/24
      nameservers:
        addresses:
        - 8.8.8.8
        - 8.8.4.4
      routes:
      - to: default
        via: 192.168.200.1
    enp2s11:
      dhcp4: no
    enp2s12:
      dhcp4: no
    enp2s13:
      dhcp4: no
    enp2s14:
      dhcp4: no
    enp2s15:
      dhcp4: no
    enp2s16:
      dhcp4: no
    enp2s17:
      dhcp4: no
    enp2s18:
      dhcp4: no
  bonds:
    bond0:
      addresses: [192.168.201.141/24]
      interfaces:
         - enp2s11
         - enp2s12
         - enp2s13
         - enp2s14
         - enp2s15
         - enp2s16
         - enp2s17
         - enp2s18
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 100
  renderer: networkd
  version: 2

If you notice, the device enp2s11 (and each of the others) is listed twice: first under ethernets and then under bonds: interfaces. This is the only change I can think of from the previous configuration.
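
After applying this (e.g. with sudo netplan apply), the negotiation can be confirmed from the Ubuntu side: the partner MAC in /proc/net/bonding/bond0 should no longer be all zeros, e.g.:

grep -i 'partner mac' /proc/net/bonding/bond0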

On the OVS side:

sudo ovs-appctl bond/show bond0
---- bond0 ----
bond_mode: active-backup
bond may use recirculation: no, Recirc-ID : -1
bond-hash-basis: 0
lb_output action: disabled, bond-id: -1
updelay: 0 ms
downdelay: 0 ms
lacp_status: negotiated
lacp_fallback_ab: false
active-backup primary: <none>
active member mac: fe:54:00:7c:71:79(vnet365)

member vnet365: enabled
    active member
    may_enable: true

member vnet366: enabled
    may_enable: true

member vnet367: enabled
    may_enable: true

member vnet368: enabled
    may_enable: true

member vnet369: enabled
    may_enable: true

member vnet370: enabled
    may_enable: true

member vnet371: enabled
    may_enable: true

member vnet372: enabled
    may_enable: true

I am able to ping another host on the same network/switch, so I guess it's working fine.
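
One side note on the output above: bond_mode is still active-backup, so even with lacp_status: negotiated only one member carries traffic at a time. If load balancing across the members is wanted, my understanding is that the OVS bond mode has to be changed, e.g.:

sudo ovs-vsctl set port bond0 bond_mode=balance-tcp   # balance-tcp requires LACP, which is already negotiated here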