What does "ulimit -s unlimited" do?

There are understandably many related questions about stack allocation:

What and where are the stack and heap?

Why is there a limit on the stack size?

Size of stack and heap memory

However, on various *nix machines I can issue the bash command

ulimit -s unlimited

or the csh command

set stacksize unlimited

How does this change how programs are executed? Are there any impacts on program or system performance (e.g., why wouldn't this be the default)?

In case more system details are relevant, I'm mostly concerned with programs compiled with GCC on Linux running on x86_64 hardware.


This will be a little bit pedantic, and some of it you probably know already, so bear with me. When you declare local (automatic) variables in a program, space for their data is allocated on that process's stack. The kernel enforces a per-process limit (RLIMIT_STACK) on how much the stack may grow (there is a similar limit for the heap, for that matter), so that a runaway process cannot consume unbounded memory. If there were no limit, bugs that would normally crash a single program could instead exhaust memory and degrade the entire system. Exceeding the allocated stack space is called a "stack overflow"; when it happens, the kernel kills the program, typically with a segmentation fault.

One of the most common stack-related bugs is excessive or infinite recursion. Since each new call to a function pushes all of its local variables (plus a return address) onto the stack, recursive programs that are not tail-call optimized can quickly exhaust the stack space the kernel allows the process. For example, this infinitely recursive function will crash once it exceeds the allocated stack space:

int smash_the_stack(int number) {
    /* Each call pushes a new stack frame. Compile with -O0: at higher
       optimization levels GCC may turn this into a loop that never
       overflows the stack. */
    smash_the_stack(number + 1);

    return 0;
}

Running out of stack space is often confused with a related but distinct problem: "stack smashing", or stack buffer overflow. That occurs when a program writes past the end of a fixed-size buffer on the stack, and a malicious user can exploit it to overwrite the saved return address and execute arbitrary instructions of their own, instead of the instructions in your own code.

As far as performance goes, raising the limit itself has little direct impact: the limit only caps how far the stack may grow, and physical pages are allocated only as the stack is actually touched. The reason "unlimited" is not the default is safety: with no cap, runaway recursion consumes memory until the system runs out, instead of failing fast with a crash. If you are hitting your stack limit via recursion, raising the stack size is probably not the best solution; otherwise it isn't something you should have to worry about. If a program absolutely must store massive amounts of data, it can use the heap instead.


Mea culpa, stack size can indeed be unlimited. _STK_LIM is the default soft limit; _STK_LIM_MAX is the default hard limit, which differs per architecture, as can be seen from include/asm-generic/resource.h:

/*
 * RLIMIT_STACK default maximum - some architectures override it:
 */
#ifndef _STK_LIM_MAX
# define _STK_LIM_MAX           RLIM_INFINITY
#endif

As this example shows, the generic value is infinite, where RLIM_INFINITY is, again, in the generic case defined as:

/*
 * SuS says limits have to be unsigned.
 * Which makes a ton more sense anyway.
 *
 * Some architectures override this (for compatibility reasons):
 */
#ifndef RLIM_INFINITY
# define RLIM_INFINITY          (~0UL)
#endif

So I guess the real answer is: stack size CAN be capped by a particular architecture, in which case an "unlimited" stack size will mean whatever _STK_LIM_MAX is defined to be, and in case that is infinity, it really is infinite. For details on what it means to set the limit to infinite and what implications it might have, refer to the other answer; it's way better than mine.


"ulimit -s unlimited" lets the stack grow unlimited. This may prevent your program from crashing if you write programs by recursion, especially if your programs are not tail recursive (compilers can "optimize" those), and the depth of recursion is large.

The answer by @Maxwell Hansen almost contains the right answer to the question. However, it is buried deep in a multitude of false claims -- see the comments. Thus, I felt obligated to write this answer.
