I have a long-running process that is eventually going to hit the maximum open file limit. I know how to change that limit after the process fails, but is there a way to change it for the running process, from the command line?
5 Answers
The prlimit command, introduced in util-linux 2.21, allows you to read and change the resource limits of running processes.
It is a follow-up to the writable /proc/<pid>/limits, which was never integrated into the mainline kernel. This solution should work.
If you don't have prlimit(1) yet, you can find the source for a minimalistic version in the prlimit(2) man page.
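For example, assuming the long-running process has PID 7671 (the PID and the limit values here are illustrative), raising its open-file limits from the command line might look like this:

prlimit --pid 7671 --nofile              # show the current soft and hard open-file limits
prlimit --pid 7671 --nofile=4096:8192    # set the soft limit to 4096 and the hard limit to 8192

Raising the hard limit of another process requires root (or the CAP_SYS_RESOURCE capability).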
On newer kernels (2.6.32+) on CentOS/RHEL you can change this at runtime by writing to /proc/<pid>/limits:
[root@host ~]# cd /proc/7671/
[root@host 7671]# cat limits | grep nice
Max nice priority         0              0
[root@host 7671]# echo -n "Max nice priority=5:6" > limits
[root@host 7671]# cat limits | grep nice
Max nice priority         5              6
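Since the question is about open files rather than nice priority, the same write should work for that limit as well; a sketch, assuming the same patched kernel and PID, with illustrative soft:hard values:

[root@host 7671]# echo -n "Max open files=4096:8192" > limits
[root@host 7671]# cat limits | grep "open files"
Max open files            4096           8192           files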
On newer versions of util-linux-ng you can use the prlimit command; for more information see https://superuser.com/questions/404239/setting-ulimit-on-a-running-process
You can try ulimit (see man ulimit) with the -n option; however, the man page notes that most OSes do not allow this to be set. Also note that ulimit only affects the shell it is run in and the processes it subsequently starts, not a process that is already running.
You can set a system-wide file descriptor limit using sysctl -w fs.file-max=N and make the change persist across reboots in /etc/sysctl.conf.
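For example (the value is illustrative):

sysctl -w fs.file-max=100000                        # raise the system-wide cap immediately
echo 'fs.file-max = 100000' >> /etc/sysctl.conf     # persist the setting across reboots
sysctl -p                                           # reload /etc/sysctl.conf to apply it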
However, I would also suggest looking at the process to see whether it really needs to have so many files open at a given time, and whether you can in fact close some files and make the process more efficient.
The process could change its own soft limits if it is programmed to do so (or if you manage to hack it), but it cannot raise its hard limits unless it has the CAP_SYS_RESOURCE capability. You can inspect the limits at runtime in /proc/$pid/limits.
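To see the first point in action: a shell is itself a process, and from its own command line it can raise its soft open-file limit up to (but not beyond) its hard limit without any special capability:

ulimit -Sn                  # current soft open-file limit
ulimit -Hn                  # current hard open-file limit
ulimit -Sn "$(ulimit -Hn)"  # raise the soft limit to the hard limit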