
I have a "client" machine with 8 ethernet interfaces. (conf as dhcp) (These interfaces are plugged into a special switch which has a vlan conf/port such as plugging into a specific port always gets you the same ip.)

I have a "server" machine which wants to instigate tcp traffic on the client machine via all its interface to maximize bandwidth. (The server is plugged into the same switch with a fiber cable to sustain the load of the 8 1GbE)

My problem is that the client machine routes all the traffic through only one of its interfaces, so my transfer speed caps at about 120 MB/s.

Output of the route command on the client machine:
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt  Iface
0.0.0.0         10.11.13.1      0.0.0.0         UG        0 0          0 eth4
10.11.9.0       0.0.0.0         255.255.255.0   U         0 0          0 eth10
10.11.9.2       10.11.9.1       255.255.255.255 UGH       0 0          0 eth10 # I ADDED THIS ONE
10.11.10.0      0.0.0.0         255.255.255.0   U         0 0          0 eth11
10.11.10.2      10.11.10.1      255.255.255.255 UGH       0 0          0 eth11 # I ADDED THIS ONE
10.11.11.0      0.0.0.0         255.255.255.0   U         0 0          0 eth9
10.11.12.0      0.0.0.0         255.255.255.0   U         0 0          0 eth8
10.11.13.0      0.0.0.0         255.255.255.0   U         0 0          0 eth4
10.11.14.0      0.0.0.0         255.255.255.0   U         0 0          0 eth7
10.11.15.0      0.0.0.0         255.255.255.0   U         0 0          0 eth6
10.11.16.0      0.0.0.0         255.255.255.0   U         0 0          0 eth5
169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 usb0
169.254.95.0    0.0.0.0         255.255.255.0   U         0 0          0 usb0

I understand why this is happening based on this output. You can see that I tried to modify the routing table (the two routes marked above), but it seems I do not understand the problem well enough.
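For reference, the two routes marked "# I ADDED THIS ONE" were added with commands along these lines:

    route add -host 10.11.9.2 gw 10.11.9.1 dev eth10
    route add -host 10.11.10.2 gw 10.11.10.1 dev eth11

They only affect traffic to those two destination addresses; everything else still matches the default route and leaves via eth4.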

I hope you can help!

2 Answers


It would help to know your distribution. In any case, what you want to do is use bonding. Here is how it can be done on CentOS; your question suggests you want mode=4 (Dynamic Link Aggregation, 802.3ad), if your switch can support it, or mode=6 (Adaptive Load Balancing), which is likely what you were trying to achieve if you want to load-balance both incoming and outgoing traffic: https://wiki.centos.org/TipsAndTricks/BondingInterfaces
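For example, a minimal mode=4 setup on CentOS could look like the following. This is a sketch based on the wiki page above; the IP address is a placeholder, and your switch ports would need to be configured as a single LACP group rather than as separate VLAN ports:

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=10.11.13.2        # placeholder address
    NETMASK=255.255.255.0
    BONDING_OPTS="mode=4 miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth4 (repeat for each slave interface)
    DEVICE=eth4
    ONBOOT=yes
    BOOTPROTO=none
    MASTER=bond0
    SLAVE=yes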

This is how somebody did it for CentOS 7 (the latest CentOS): http://www.unixmen.com/linux-basics-create-network-bonding-on-centos-76-5/

galaxy

Bonding is definitely what you want to do, but it may not give you the results you are hoping for. As Jeroen pointed out in a comment above, bonding 8x 1Gb links will not give you one 8Gb connection. You'll end up with a 1Gb connection that is 8x harder to saturate. But if you are primarily talking to the same server, you probably won't approach bond saturation.

See this question for a nice explanation of the bonding modes: What are the differences between channel bonding modes in Linux? Pay close attention to this final paragraph in answer #2:

Note: whatever you do, one network connection always goes through one and only one physical link. So when aggregating GigE interfaces, a file transfer from machine A to machine B can't top 1 gigabit/s, even if each machine has 4 aggregated GigE interfaces (whatever the bonding mode in use).
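In practice this means the extra capacity only shows up with multiple concurrent flows. Assuming a bond configured with xmit_hash_policy=layer3+4 (so that flows are hashed on TCP/UDP ports rather than just MAC addresses), a quick sanity check is to run several parallel TCP streams with iperf3; the server address below is a placeholder:

    # on the server
    iperf3 -s

    # on the client: eight parallel TCP streams
    iperf3 -c 10.11.13.1 -P 8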

You might want to consider a 10Gb interface for your client machine if you truly need that much bandwidth.

Edit: In light of the clarified requirements from the OP: make the connection between your server and the switch a trunk, assign all the VLANs to it, and then add an IP address for each VLAN on the server.
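A sketch of what that could look like on the server with iproute2, assuming the fiber uplink is eth0 (the VLAN IDs and addresses below are placeholders; use whatever your switch assigns to each port):

    # one VLAN sub-interface per client subnet
    ip link add link eth0 name eth0.109 type vlan id 109
    ip addr add 10.11.9.2/24 dev eth0.109
    ip link set eth0.109 up
    # repeat for 10.11.10.0/24, 10.11.11.0/24, and so on

With one server-side address per VLAN, the server can open a separate TCP connection to each of the client's eight interfaces, which sidesteps the single-link limit of bonding entirely.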