malloc() inside an infinite loop

I got an interview question: what happens when we allocate a large chunk of memory using malloc() inside an infinite loop and never free() it?

I thought checking the return value against NULL would work when there is not enough memory on the heap, breaking the loop. But that didn't happen; the program terminated abnormally, printing Killed.

Why is this happening, and why doesn't the if branch execute when there is no memory left to allocate (I mean, when malloc() fails)? What behavior is this?

My code is :

#include <stdio.h>
#include <stdlib.h>

int main(void) {
  int *px;

  while (1)
    {
     px = malloc(sizeof(int) * 1024 * 1024);   /* 4 MiB per iteration, never freed */
     if (px == NULL)
       {
        printf("Heap Full .. Cannot allocate memory\n");
        break;
       }
     else
       printf("Allocated\t");
    }
  return 0;
}

EDIT: gcc 4.5.2 (Linux, Ubuntu 11.04)


If you're running on Linux, keep an eye on the first terminal (the system console). It will show something like:

OOM error - killing proc 1100

OOM means out of memory.

I think it's also visible in dmesg and/or /var/log/messages and/or /var/log/syslog, depending on the Linux distro. You can grep for it with:

grep -i oom /var/log/*

You could make your program grab memory slowly, and keep an eye on:

watch free -m

You'll see the available swap go down and down. When it gets close to nothing, Linux will kill your program and the amount of free memory will go up again.

This is a great link for interpreting the output of free -m: http://www.linuxatemyram.com/
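For example, here's a minimal sketch of such a slow-grabbing program (assuming Linux; the 1 MiB chunk size and one-second pause are arbitrary choices). The memset matters: it touches the pages so the memory is really committed, not just promised:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
  while (1) {
    char *p = malloc(1024 * 1024);   /* grab 1 MiB per second */
    if (p == NULL) {
      printf("malloc failed\n");
      break;
    }
    memset(p, 0, 1024 * 1024);       /* touch the pages so they are really committed */
    sleep(1);                        /* slow enough to follow in `watch free -m` */
  }
  return 0;
}

Run it in one terminal and watch free -m in another; the free and swap columns shrink until the OOM killer steps in.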


This behaviour can be a problem with apps that are started by init or some other supervision mechanism like 'god': you can get into a loop where Linux kills the app and the supervisor starts it up again. If the amount of memory needed is much bigger than the available RAM, the cycle can also cause slowness through swapping memory pages to disk.

In some cases Linux doesn't kill the program that's causing the trouble but some other process instead. If it were to take down init, for example, the whole machine would go down.

In the worst cases a program or group of processes will request a lot of memory (more than is available in RAM) and attempt to access it repeatedly. Linux has nowhere fast to put that memory, so it has to swap some page of RAM out to disk (the swap partition) and load the page being accessed back in from disk so the program can read or write it.

This happens over and over again, every millisecond. As disk is thousands of times slower than RAM, this can grind the machine down to a practical halt.


The behaviour depends on the ulimits - see http://www.linuxhowtos.org/Tips%20and%20Tricks/ulimit.htm

If you have a limit on memory use, you'll see the expected NULL return behaviour; if, on the other hand, you are unlimited, you get the OOM killer behaviour that you saw.
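As a sketch of the limited case (assuming Linux; RLIMIT_AS caps the process's virtual address space, and 64 MiB is an arbitrary figure), you can impose the limit from inside the program itself and get the NULL return the question expected:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void) {
  /* Cap this process's virtual address space at 64 MiB (arbitrary demo figure). */
  struct rlimit lim = { 64 * 1024 * 1024, 64 * 1024 * 1024 };
  if (setrlimit(RLIMIT_AS, &lim) != 0) {
    perror("setrlimit");
    return 1;
  }

  while (1) {
    void *p = malloc(1024 * 1024);
    if (p == NULL) {                 /* under the limit, malloc really does return NULL */
      printf("Heap Full .. Cannot allocate memory\n");
      break;
    }
  }
  return 0;
}

The same effect is available from the shell: bash's ulimit -v takes KiB, so running ulimit -v 65536 before starting the original program gives the same 64 MiB cap.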


But that didn't happen; the program terminated abnormally, printing Killed.

Keep in mind, your process is not alone on the system. In this case, it was killed by the Out Of Memory (OOM) killer: it saw your process hogging the system's memory and took steps to stop that.

Why is this happening, and why doesn't the if branch execute when there is no memory left to allocate (I mean, when malloc() fails)? What behavior is this?

Well, there's no reason to believe that the if check wasn't run. Check out the man page for malloc():

By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. In case it turns out that the system is out of memory, one or more processes will be killed by the OOM killer.

So you think you "protected" yourself from an out-of-memory condition with a NULL check; in reality it only means that if you got back NULL, you wouldn't have dereferenced it. It says nothing about whether you actually got the memory you requested.
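A small sketch makes the distinction visible (assuming Linux with its default overcommit setting; the chunk size is arbitrary). With the memset commented out, the loop can keep "succeeding" far past physical RAM, because the pages are only promised; with the memset in, every page must really be supplied, and that is where the OOM killer strikes instead of malloc failing:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK (1024 * 1024)

int main(void) {
  unsigned long mib = 0;
  while (1) {
    char *p = malloc(CHUNK);
    if (p == NULL) {
      printf("malloc returned NULL after %lu MiB\n", mib);
      break;
    }
    /* Without this memset, malloc can keep "succeeding" long past physical
       RAM: the kernel has only promised the pages, not supplied them.
       Touching every byte forces real backing, and that is the point at
       which the OOM killer may kill the process rather than let malloc
       return NULL. */
    memset(p, 1, CHUNK);
    mib++;
  }
  return 0;
}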
