Does malloc lazily create the backing pages for an allocation on Linux (and other platforms)?

On Linux, if I were to malloc(1024 * 1024 * 1024), what does malloc actually do?

I'm sure it assigns a virtual address to the allocation (by walking the free list and creating a new mapping if necessary), but does it actually create 1 GiB worth of swap pages? Or does it mprotect the address range and create the pages when you actually touch them like mmap does?

(I'm specifying Linux because the standard is silent on these kinds of details, but I'd be interested to know what other platforms do as well.)


Linux does deferred page allocation, also known as 'optimistic memory allocation'. The memory you get back from malloc is not backed by anything, and when you first touch it you may hit an out-of-memory condition (if there is no physical page or swap space left to back it), in which case the kernel unceremoniously terminates a process.

See for example http://www.linuxdevcenter.com/pub/a/linux/2006/11/30/linux-out-of-memory.html
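To see the distinction yourself, here is a minimal sketch (my own illustration, not from the linked article) that prints VmSize and VmRSS from Linux's /proc/self/status around a 1 GiB malloc. The virtual size jumps immediately; the resident size only grows once the pages are actually touched:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define GIB (1024UL * 1024 * 1024)

    /* Print the virtual (VmSize) and resident (VmRSS) sizes of this process. */
    static void show(const char *label) {
        char line[256];
        FILE *f = fopen("/proc/self/status", "r");
        if (f == NULL)
            return;
        printf("%s:\n", label);
        while (fgets(line, sizeof line, f))
            if (strncmp(line, "VmSize:", 7) == 0 || strncmp(line, "VmRSS:", 6) == 0)
                printf("  %s", line);
        fclose(f);
    }

    int main(void) {
        show("before malloc");
        char *p = malloc(GIB);
        if (p == NULL)
            return 1;
        show("after malloc, untouched");  /* VmSize up ~1 GiB, VmRSS barely moved */
        memset(p, 0, GIB);                /* fault every page in */
        show("after memset");             /* now VmRSS is up ~1 GiB too */
        return 0;
    }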


9. Memory (part of The Linux Kernel: Some Remarks on the Linux Kernel by Andries Brouwer) is a good document on this.

It contains the following programs, which demonstrate how Linux handles virtual memory versus actual physical memory, and it explains some of the kernel's internals.

Typically, the first demo program obtains a very large amount of memory before malloc() returns NULL. The second obtains much less, since the memory it allocates is actually used as it goes. The third obtains the same large amount as the first, and is then killed when it tries to use its memory.

Demo program 1: allocate memory without using it.

#include <stdio.h>
#include <stdlib.h>

int main (void) {
    int n = 0;

    while (1) {
        if (malloc(1<<20) == NULL) {
            printf("malloc failure after %d MiB\n", n);
            return 0;
        }
        printf("got %d MiB\n", ++n);
    }
}

Demo program 2: allocate memory and actually touch it all.

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main (void) {
    int n = 0;
    char *p;

    while (1) {
        if ((p = malloc(1<<20)) == NULL) {
            printf("malloc failure after %d MiB\n", n);
            return 0;
        }
        memset(p, 0, 1<<20);  /* touch every byte, forcing real pages to be allocated */
        printf("got %d MiB\n", ++n);
    }
}

Demo program 3: first allocate, and use later.

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#define N       10000

int main (void) {
    int i, n = 0;
    char *pp[N];

    for (n = 0; n < N; n++) {
        pp[n] = malloc(1<<20);
        if (pp[n] == NULL)
            break;
    }
    printf("malloc failure after %d MiBn", n);

    for (i = 0; i < n; i++) {
        memset (pp[i], 0, (1<<20));
        printf("%dn", i+1);
    }

    return 0;
}

(On a well-functioning system, such as Solaris, the three demo programs obtain the same amount of memory and do not crash; they simply see malloc() return NULL.)
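On Linux, which of these behaviors you get is governed by the vm.overcommit_memory sysctl: 0 is the default heuristic overcommit, 1 always overcommits, and 2 enables strict accounting, which approximates the Solaris behavior above by making malloc() fail up front instead of killing a process later. The sysctl itself is real and documented; the little reader below is my own illustrative sketch:

    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
        int mode;
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        if (fscanf(f, "%d", &mode) == 1)
            /* 0 = heuristic overcommit, 1 = always overcommit, 2 = strict accounting */
            printf("vm.overcommit_memory = %d\n", mode);
        fclose(f);
        return 0;
    }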


I gave this answer to a similar post on the same subject:

Are some allocators lazy?

This starts a little off subject (and then I'll tie it in to your question), but what happens is similar to what happens when you fork a process in Linux. When forking, a mechanism called copy-on-write only copies a page of the new process's memory space when that memory is written to. That way, if the forked process execs a new program right away, you've saved the overhead of copying the original program's memory.
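Here is a rough sketch of that effect (my own illustration; the timings are approximate): fork() on a process with a large dirty heap returns quickly, because only the page tables are copied, while the child's first full write pass is far slower, since every page now takes a copy-on-write fault:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define SIZE (256UL << 20)  /* 256 MiB */

    static double now_sec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        char *buf = malloc(SIZE);
        if (buf == NULL)
            return 1;
        memset(buf, 1, SIZE);      /* fault all pages in before forking */

        double t0 = now_sec();
        if (fork() == 0) {
            /* cheap: only page tables were duplicated, not the 256 MiB */
            printf("fork took      %6.1f ms\n", (now_sec() - t0) * 1e3);
            double t1 = now_sec();
            memset(buf, 2, SIZE);  /* each write to a page triggers a COW copy */
            printf("rewriting took %6.1f ms\n", (now_sec() - t1) * 1e3);
            _exit(0);
        }
        wait(NULL);
        return 0;
    }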

Getting back to your question: the idea is similar. As others have pointed out, requesting the memory gets you the virtual address space immediately, but the actual pages are only allocated when you write to them.

What's the purpose of this? It makes mallocing memory a more or less constant-time, O(1) operation instead of an O(n) one (similar to the way the Linux scheduler spreads its work out instead of doing it in one big chunk).

To demonstrate what I mean I did the following experiment:

rbarnes@rbarnes-desktop:~/test_code$ time ./bigmalloc

real    0m0.005s
user    0m0.000s
sys 0m0.004s
rbarnes@rbarnes-desktop:~/test_code$ time ./deadbeef

real    0m0.558s
user    0m0.000s
sys 0m0.492s
rbarnes@rbarnes-desktop:~/test_code$ time ./justwrites

real    0m0.006s
user    0m0.000s
sys 0m0.008s

The bigmalloc program allocates 20 million ints but doesn't do anything with them. deadbeef writes one int to each page, resulting in 19531 writes, and justwrites allocates 19531 ints and zeros them out. As you can see, deadbeef takes about 100 times longer to execute than either bigmalloc or justwrites.

bigmalloc.c:

#include <stdlib.h>

int main(int argc, char **argv) {

    int *big = malloc(sizeof(int)*20000000); // Allocate 80 million bytes

    return 0;
}

deadbeef.c:

#include <stdlib.h>

int main(int argc, char **argv) {

    int *big = malloc(sizeof(int)*20000000); // Allocate 80 million bytes

    // Immediately write to each page to simulate an all-at-once allocation
    // assuming 4k page size on a 32-bit machine.

    for (int* end = big + 20000000; big < end; big += 1024)
        *big = 0xDEADBEEF;

    return 0;
}

justwrites.c:

#include <stdlib.h>

int main(int argc, char **argv) {

    int *big = calloc(19531, sizeof(int)); // Zero the same number of ints as deadbeef writes

    return 0;
}
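If you'd rather count the lazy allocations than time them, a variation on deadbeef (again my own sketch, not from the original post) can use getrusage() to report how many minor page faults the one-write-per-page loop triggers; the count comes out close to the number of pages touched:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void) {
        struct rusage before, after;
        int *big = malloc(sizeof(int)*20000000); // Allocate 80 million bytes
        if (big == NULL)
            return 1;

        getrusage(RUSAGE_SELF, &before);
        for (int *p = big, *end = big + 20000000; p < end; p += 1024)
            *p = 0xDEADBEEF;       // one write per 4 KiB page
        getrusage(RUSAGE_SELF, &after);

        // Each first touch of a page shows up as a minor fault, which the
        // kernel services by wiring in a fresh physical page.
        printf("minor faults during writes: %ld\n",
               after.ru_minflt - before.ru_minflt);
        return 0;
    }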