
I'm trying to maximize the NFS/TCP bandwidth between a Linux client and a single NFS server over a 10Gbps link. Although each NFS mount has its own device (major/minor), everything ultimately goes over a single TCP connection, and that connection is the bottleneck.

Using iperf with multiple TCP connections, I can reach ~8Gbps, but NFS is stuck on a single TCP stream and limited to ~2Gbps (no TCP tuning yet).

On Solaris, there's the system setting rpcmod:clnt_max_conns, which lets you set how many concurrent TCP connections the system may use to a single NFS server.

Is there a Linux equivalent, or another way to do this?

As a workaround, I know I could configure multiple IP addresses on the NFS server and load-balance the mount points across them, but that wouldn't be optimal.
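For the record, that workaround would look roughly like this (the addresses and paths are made up):

    # hypothetical: the server answers on two addresses,
    # and each mount gets its own TCP connection
    mount -t nfs 10.0.0.1:/export/vol1 /mnt/vol1
    mount -t nfs 10.0.0.2:/export/vol2 /mnt/vol2

Traffic is then spread over as many TCP streams as there are server addresses, but I'd have to balance the mount points by hand.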

3 Answers


Starting with Linux kernel 5.3, a new mount option, "nconnect=X", is available, where X specifies the number of TCP connections to open to the server.
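For example, a sketch with a placeholder server name and export path:

    # open 4 TCP connections to the same server (requires kernel >= 5.3);
    # as far as I know the client caps the value at 16
    mount -t nfs -o nconnect=4 server:/export /mnt/export

The same option can go in /etc/fstab. Note that the connections are shared per server, so the nconnect value of the first mount of a given server applies to later mounts of that server too.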


You could switch back to UDP and use large datagrams (4k, for instance). Having your whole network support jumbo frames might help a lot, so that you don't incur IP packet reassembly on each side.
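A sketch of what that could look like (interface name, server name and paths are placeholders; every NIC and switch on the path must accept the MTU):

    # enable jumbo frames on the client NIC; do the same on the server side
    ip link set dev eth0 mtu 9000

    # NFS over UDP implies NFSv3; 4k read/write sizes fit in a single jumbo frame
    mount -t nfs -o vers=3,proto=udp,rsize=4096,wsize=4096 server:/export /mnt/export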


If you're using NFS v4.1 or greater, you may be able to take advantage of pNFS (parallel NFS). As per the following Red Hat press release, it's available in RHEL 6.4, but I haven't seen any updates indicating its availability in other Linux distros.

https://www.redhat.com/about/news/press-archive/2013/2/red-hat-announces-general-availability-of-next-minor-release-of-red-hat-enterprise-linux-6

pNFS is defined in RFC 5661, published in 2010. You may want to investigate whether your storage appliance fully supports pNFS.
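If you want to try it, pNFS is negotiated automatically when you mount with NFS version 4.1 or later (server name and path are placeholders):

    # request NFSv4.1; pNFS layouts are used if the server offers them
    mount -t nfs -o vers=4.1 server:/export /mnt/export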