
Okay, this is creeping me out - I see about 1500-2500 of these:

root@wherever:# netstat

Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 localhost:60930         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60934         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60941         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60947         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60962         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60969         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60998         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60802         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60823         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60876         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60886         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60898         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60897         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60905         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60918         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60921         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60673         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60680         localhost:sunrpc        TIME_WAIT  
[etc...]

root@wherever:# netstat | grep 'TIME_WAIT' | wc -l
1942

That number is changing rapidly.

I do have a pretty tight iptables config, so I have no idea what could cause this. Any ideas?

Thanks,

Tamas

Edit: Output of 'netstat -anp':

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:60968         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60972         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60976         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60981         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60980         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60983         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60999         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60809         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60834         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60872         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60896         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60919         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60710         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60745         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60765         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60772         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60558         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60564         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60600         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60624         127.0.0.1:111           TIME_WAIT   -               
KTamas

7 Answers


TIME_WAIT is normal. It's the state a socket enters after it has closed, used by the kernel to keep track of packets that may have got lost and turned up late to the party. A high number of TIME_WAIT connections is a symptom of lots of short-lived connections, and nothing to worry about.
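
If you want to see where those short-lived connections are going, one rough way to summarize them (assuming a Linux netstat that prints the state in the sixth column) is:

netstat -ant | awk '$6 == "TIME_WAIT" {print $5}' | sort | uniq -c | sort -rn | head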

David Pashley

EDIT: tcp_fin_timeout DOES NOT control the TIME_WAIT duration; that is hardcoded at 60 seconds.

As mentioned by others, having some connections in TIME_WAIT is a normal part of a TCP connection's lifecycle. You can see the interval by examining /proc/sys/net/ipv4/tcp_fin_timeout:

[root@host ~]# cat /proc/sys/net/ipv4/tcp_fin_timeout
60

And change it by modifying that value:

[root@dev admin]# echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout

Or change it permanently by adding the setting to /etc/sysctl.conf:

net.ipv4.tcp_fin_timeout=30
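
To apply the /etc/sysctl.conf change without a reboot, on most Linux systems you can re-read the file with:

sysctl -p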

Also, if you don't use the RPC service or NFS, you can just turn it off:

/etc/init.d/nfsd stop

And keep it from starting at boot:

chkconfig nfsd off
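
Since the connections in the question go to port 111 (sunrpc), the portmapper itself may also be the thing to stop if nothing on the box needs RPC. The service name varies by distribution (portmap on older systems, rpcbind on newer ones), so roughly:

/etc/init.d/portmap stop
chkconfig portmap off
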
Greg Bray
Brandon

tcp_fin_timeout does NOT control the TIME_WAIT delay. You can see this by using ss or netstat with -o to watch the countdown timers:

cat /proc/sys/net/ipv4/tcp_fin_timeout
3

# See countdown timer for all TIME_WAIT sockets in 192.168.0.0-255
ss --numeric -o state time-wait dst 192.168.0.0/24

Netid Recv-Q  Send-Q    Local Address:Port    Peer Address:Port                             
tcp  0       0         192.168.100.1:57516   192.168.0.10:80    timer:(timewait,55sec,0)   
tcp  0       0         192.168.100.1:57356   192.168.0.10:80    timer:(timewait,25sec,0)   
tcp  0       0         192.168.100.1:57334   192.168.0.10:80    timer:(timewait,22sec,0)   
tcp  0       0         192.168.100.1:57282   192.168.0.10:80    timer:(timewait,12sec,0)   
tcp  0       0         192.168.100.1:57418   192.168.0.10:80    timer:(timewait,38sec,0)   
tcp  0       0         192.168.100.1:57458   192.168.0.10:80    timer:(timewait,46sec,0)   
tcp  0       0         192.168.100.1:57252   192.168.0.10:80    timer:(timewait,7.436ms,0) 
tcp  0       0         192.168.100.1:57244   192.168.0.10:80    timer:(timewait,6.536ms,0)

Even with tcp_fin_timeout set to 3, the countdown for TIME_WAIT still starts at 60. However, if you have net.ipv4.tcp_tw_reuse set to 1 (echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse), the kernel can reuse sockets in TIME_WAIT if it determines there won't be any possible conflicts in TCP sequence numbering.
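
If you do enable tcp_tw_reuse, the same sysctl.conf approach shown in the answer above makes it persistent; a minimal sketch:

echo 'net.ipv4.tcp_tw_reuse=1' >> /etc/sysctl.conf
sysctl -p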

Greg Bray

It isn't important. All that signifies is that you're opening and closing a lot of Sun RPC TCP connections (1500-2500 of them every 2-4 minutes). The TIME_WAIT state is what a socket goes into when it closes, to prevent messages from arriving for the wrong applications, as they might if the socket were reused too quickly, and for a couple of other useful purposes. Don't worry about it.

(Unless, of course, you aren't actually running anything that should be producing that many RPC operations. Then, worry.)
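
One rough way to check whether the churn matches what you'd expect is to sample the count over time (the 10-second interval here is arbitrary):

watch -n 10 'netstat -ant | grep -c TIME_WAIT'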

chaos

Something on your system is doing a lot of RPCs (Remote Procedure Calls) within your system (notice that both source and destination are localhost). That's often seen for lockd on NFS mounts, but you might also see it for other RPC services like rpc.statd or rpc.spray.

You could try using "lsof -i" to see which process has those sockets open and what it's doing. It's probably harmless.
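
Since the destination is the portmapper (port 111), you can also ask it directly which RPC services are registered, assuming rpcinfo is installed:

rpcinfo -p localhost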

Paul Tomblin

I had the same problem. It cost me several hours to find out what was going on. In my case, the reason was that netstat tries to look up the hostname corresponding to each IP (I assume it's using the gethostbyaddr API). I was using an embedded Linux installation which had no /etc/nsswitch.conf. To my surprise, the problem only occurs when you actually run netstat -a (I found this out by running portmap in verbose and debug mode).

Now what happened was the following: by default, the lookup functions also try to contact the ypbind daemon (Sun Yellow Pages, also known as NIS) to query for a hostname. To query this service, the portmapper portmap has to be contacted to get the port of that service. In my case the portmapper got contacted via TCP. The portmapper then tells the libc function that no such service exists, and the TCP connection gets closed. As we know, closed TCP connections enter a TIME_WAIT state for some time. So netstat catches this connection when listing, and each new line with a new IP to resolve issues a new request that generates a new connection in TIME_WAIT state, and so on...
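
If you want to verify this behaviour on your own system, tracing netstat's connect() calls should show the repeated connections to the portmapper; a sketch, assuming strace is available (strace writes to stderr, so the redirection discards netstat's own output and keeps only the trace):

strace -e trace=connect netstat -a 2>&1 >/dev/null | grep 'htons(111)'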

To solve this issue, create an /etc/nsswitch.conf that does not use the RPC-based NIS services, i.e. with the following contents:

passwd:         files
group:          files
hosts:          files dns
networks:       files dns
services:       files
protocols:      files
netmasks:       files
leecher

Note that ss will not show all TIME_WAIT sockets unless -a is specified.

In case it helps someone else:

We were trying to figure out why the output of netstat -antp was different from ss -ntp, and it turned out we had missed the -a that we were passing to netstat.
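
In other words, to get output comparable to netstat -antp, include -a:

ss -antp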

KJ7LNW