Segmentation fault: stack allocation in a C program on Ubuntu when buffer > 4M

Here's a small program for a college assignment:

#include <unistd.h>

#ifndef BUFFERSIZE
#define BUFFERSIZE 1
#endif

int main(void)
{
    char buffer[BUFFERSIZE];
    int i;
    int j = BUFFERSIZE;

    i = read(0, buffer, BUFFERSIZE);

    while (i>0)
    {
        write(1, buffer, i);
        i = read(0, buffer, BUFFERSIZE);
    }

    return 0;
}

There is an alternative version using the stdio.h fread and fwrite functions instead.

Well, I compiled both versions of the program with 25 different values of BUFFERSIZE: 1, 2, 4, ..., 2^i with i=0..30

This is an example of how I compile it: gcc -DBUFFERSIZE=8388608 prog_sys.c -o bin/psys.8M

The question: on my machine (Ubuntu Precise 64-bit, more details at the end) all versions of the program work fine: ./psys.1M < data

(data is a small file with 3 lines of ASCII text.)

The problem: when the buffer size is 8 MB or greater, both versions (using the system calls or the C library functions) crash with a segmentation fault.

I tested many things. The first version of the code was like this:

(...)
int main(void)
{
    char buffer[BUFFERSIZE];
    int i;

    i = read(0, buffer, BUFFERSIZE);
(...)

This crashes when I call the read function. But with these versions:

int main(void)
{
    char buffer[BUFFERSIZE]; // SEGMENTATION FAULT HERE
    int i;
    int j = BUFFERSIZE;

    i = read(0, buffer, BUFFERSIZE);


int main(void)
{
    int j = BUFFERSIZE; // SEGMENTATION FAULT HERE
    char buffer[BUFFERSIZE];
    int i;

    i = read(0, buffer, BUFFERSIZE);

Both of them crash (SIGSEGV) on the first line of main. However, if I move the buffer out of main to file scope (so it lives in static storage rather than on the stack), it works fine:

char buffer[BUFFERSIZE]; // NOW GLOBAL AND WORKING FINE
int main(void)
{
    int j = BUFFERSIZE;
    int i;

    i = read(0, buffer, BUFFERSIZE);

I use Ubuntu Precise 12.04 64-bit on a first-generation Intel i5 M 480.

#uname -a
Linux hostname 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

I don't know the OS limitations on the stack. Is there some way to allocate big data on the stack, even if this is not a good practice?


The stack size is quite often limited on Linux. The command ulimit -s will give the current value, in kilobytes. You can change the default in (usually) the file /etc/security/limits.conf. You can also, depending on privileges, change it on a per-process basis with code like this:

#include <stdio.h>          /* perror */
#include <sys/resource.h>
// ...
struct rlimit x;
if (getrlimit(RLIMIT_STACK, &x) < 0)
    perror("getrlimit");
x.rlim_cur = RLIM_INFINITY; /* fails if this exceeds the hard limit, x.rlim_max */
if (setrlimit(RLIMIT_STACK, &x) < 0)
    perror("setrlimit");
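From the shell, the same limit can be inspected (and, up to the hard limit, raised) before running the program; the 16384 value below is just an example:

```shell
# Soft stack limit in KiB; 8192 (8 MiB) is a common Ubuntu default,
# matching the size at which the program starts to segfault.
ulimit -s

# Raise it for this shell and its children, then rerun the failing build:
#   ulimit -s 16384
#   ./psys.8M < data
```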

I think it is pretty simple: if you try to make the buffer too big, it overflows the stack.

If you use malloc() to allocate the buffer, I bet you will have no problem.

Note that there is a function called alloca() that explicitly allocates stack storage. Using alloca() is pretty much the same thing as declaring a stack variable, except that it happens while your program is running. Reading the man page for alloca() or discussions of it may help you to understand what is going wrong with your program. Here's a good discussion:

Why is the use of alloca() not considered good practice?

EDIT: In a comment, @jim mcnamara told us about ulimit, a command-line tool that can check user limits. On this computer (running Linux Mint 14, so it should have the same limits as Ubuntu 12.10) the ulimit -s command shows a stack size limit of 8192 kbytes, which tracks nicely with the problem as you describe it.

EDIT: In case it wasn't completely clear, I recommend that you solve the problem by calling malloc(). Another acceptable solution would be to statically allocate the memory, which will work fine as long as your code is single-threaded.

You should only use stack allocation for small buffers, precisely because you don't want to blow up your stack. If your buffers are big, or your code recursively calls a function so that even little buffers are allocated many, many times, you will clobber your stack and your code will crash. The worst part is that you won't get any warning: it will either work fine, or your program will crash.

The only disadvantage of malloc() is that it is relatively slow, so you don't want to call malloc() inside time-critical code. But it's fine for initial setup; once the malloc is done, the address of your allocated buffer is just an address in memory like any other memory address within your program's address space.

I specifically recommend against editing the system defaults to make the stack size bigger, because that makes your program much less portable. If you call standard C library functions like malloc() you could port your code to Windows, Mac, Android, etc. with little trouble; if you start calling system functions to change the default stack size, you will have many more problems porting. (And you may have no plans to port this now, but plans change!)
