There are understandably many related questions on stack allocation:
What and where are the stack and heap?
Why is there a limit on the stack size?
However, on various *nix machines I can issue the bash command
ulimit -s unlimited
or the csh command
set stacksize unlimited
How does this change how programs are executed? Are there any impacts on program or system performance (e.g., why wouldn't this be the default)?
In case more system details are relevant, I'm mostly concerned with programs compiled with GCC on Linux running on x86_64 hardware.
When you call a function, a new stack frame is allocated on the stack. That's how functions can have local variables. As functions call functions, which in turn call functions, we keep allocating more and more stack space to maintain this deepening chain of frames.
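To make that concrete, here is a minimal sketch (the names are mine, not any standard API) in which each call gets its own frame holding its own copies of the locals:

#include <stdio.h>

void descend(int depth) {
    char buffer[1024];               // 1 KiB of locals, one copy per frame
    buffer[0] = (char)depth;         // touch it so it isn't optimized away
    printf("depth %d, frame near %p\n", depth, (void *)buffer);
    if (depth < 3)
        descend(depth + 1);          // each deeper call pushes another frame
}

int main(void) {
    descend(0);    // on x86_64 Linux, the printed addresses march downward
    return 0;
}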
To keep programs from consuming massive amounts of stack space, a limit is usually imposed via ulimit -s. If we remove that limit with ulimit -s unlimited, our programs can keep gobbling up RAM for their ever-growing stack until eventually the system runs out of memory entirely.
int eat_stack_space(void) { return eat_stack_space(); } // one new frame per call
int main(void) { return eat_stack_space(); }
// Compiled without optimization (which would turn the tail call into a loop),
// this segfaults under the default limit, or chews through RAM when the limit is removed.
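For completeness: ulimit -s is just the shell's front end for the kernel's RLIMIT_STACK resource limit, which a program can also inspect for itself. A minimal sketch:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)    // what "unlimited" maps to
        printf("stack soft limit: unlimited\n");
    else
        printf("stack soft limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
    return 0;
}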
Usually, using a ton of stack space is accidental, or a symptom of very deep recursion that probably shouldn't be relying so heavily on the stack. Hence the stack limit.
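One legitimate reason people reach for ulimit -s unlimited is a large automatic (stack-allocated) array; moving it to the heap sidesteps the limit entirely. A sketch, with an arbitrary 16 MiB size:

#include <stdlib.h>

int main(void) {
    // char big[16 * 1024 * 1024];           // would overflow the usual 8 MiB default
    char *big = malloc(16 * 1024 * 1024);    // heap allocation, no stack limit involved
    if (big == NULL)
        return 1;
    big[0] = 42;                             // ... use the buffer ...
    free(big);
    return 0;
}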
The impact on performance is minor but does exist. Using the time command, I found that eliminating the stack limit shaved a few fractions of a second off execution time (at least on 64-bit Ubuntu).
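If you want to reproduce that kind of comparison, something like the following works (the binary name is a placeholder, and raising the soft limit back to unlimited assumes the hard limit is already unlimited, as it is on stock Linux):

time ./some_program        # under the default 8 MiB soft limit
ulimit -s unlimited
time ./some_program        # with the limit removed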