User Process Limits Preventing Fork Bombs on Linux
At the best of times, working with resource limits (ulimits) on Linux can be confusing and complicated. It can be genuinely challenging to determine whether a limit is being imposed by the system, by a user, by a script, or by inheritance from a calling process. One of the trickier problems to identify is a “fork bomb protection” limit being exceeded. Once you know to look for it, it is easy to spot, but if you don’t know about this overriding limit it can be quite confusing to work out why processes are failing.
Typically you will encounter this limit when a user receives repeated “Resource temporarily unavailable” errors like those below:
$ ps aux
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable
Or attempting to run a process might produce “cannot fork [Resource temporarily unavailable]”. Our first instinct is likely to inspect the /etc/security/limits.conf file and make sure that the user in question is given enough “open files” and possibly “max user processes.” In many cases, simply raising one or the other of these limits solves the problem. If you collect the output of ulimit from the user session in question, you can determine exactly which limits are affecting the process calls.
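For example, from a shell running as the affected user (the values shown here are only illustrative; they will vary from system to system):

$ ulimit -u
1024
$ ulimit -n
4096

Here “-u” is the max user processes limit and “-n” is the open files limit, the two numbers most relevant to this problem.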
What you may find is that no matter what you do, the “max user processes” or “-u” value in ulimit never goes above 1,024. That seems strange, since you raised this limit in the limits.conf file, right? It is indeed. Before we decide that that is the issue, let’s see how many processes the user in question is actually running. Remember that Linux implements threads as lightweight processes (LWPs), which behave much like threads in other operating systems, and the “max user processes” value counts all processes and LWPs combined. So to see how many the user is consuming we can check:
# ps -eLF | grep ^baduser | wc -l
1024
If that number (where “baduser” is the username of the offending user account) is at or near 1,024, we can be pretty certain that something has gone wrong and the user has been forking processes rapidly enough to hit the system protection limit. That is an extremely high number of processes, even for a very busy server. It’s not an impossible number, just an abnormally high one.
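If you want to see which of the user’s processes account for all of those LWPs, a quick tally by PID can help. This is only a sketch: it assumes the default ps -eLF column order, where the second column is the PID:

# ps -eLF | grep ^baduser | awk '{print $2}' | sort | uniq -c | sort -rn | head

Each line of the result is an LWP count followed by the owning PID, so one runaway process with hundreds of threads looks very different from hundreds of separately forked processes.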
Red Hat Enterprise Linux, and by extension its derivatives like CentOS, have an extra layer of limits applied specifically to the number of processes that an individual user can create, to protect against malicious or accidental fork bombs. This additional limit file is located at /etc/security/limits.d/90-nproc.conf. The purpose of this file is to override any attempt in the normal limits.conf file to raise the “max user processes” limit above 1,024, protecting users from themselves.
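On a stock RHEL 6 / CentOS 6 install, the file contains something close to the following (quoted from memory; check your own system for the exact contents):

# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     1024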
Should you need to raise system-wide user limits, you will need to raise the cap in the 90-nproc.conf file and then additionally raise (or lower) them for individual users in the normal limits.conf file. The fact that Red Hat goes out of its way to make this hard to raise accidentally should trigger red flags: you probably do not really want to raise this limit. But if you do need to, that is where the additional limit is kept.
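As an illustration only (the username and numbers here are made up; pick values appropriate to your workload), raising the cap might look like this:

# In /etc/security/limits.d/90-nproc.conf, raise the overriding soft cap:
*          soft    nproc     4096

# Then in /etc/security/limits.conf, set per-user values beneath it:
baduser    soft    nproc     2048
baduser    hard    nproc     4096

Note that PAM applies these limits at session start, so the user has to log out and back in before the new values take effect.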
Originally posted on my 2012 Linux blog here: http://web.archive.org/web/20140822224153/http://www.scottalanmiller.com/linux/2013/03/09/user-process-limits-preventing-fork-bombs/