views:

447

answers:

7

I would like to write a program that consumes all the available memory, to understand the outcome. I've heard that Linux starts killing processes once it is unable to allocate memory.

Can anyone help me with such a program?

I have written the following, but the memory doesn't seem to get exhausted:

#include <stdlib.h>

int main()
{
        while(1)
        {
                malloc(1024*1024);
        }
        return 0;
}
+4  A: 

Linux "over-commits" memory. This means that physical memory is only given to a process when the process first tries to access it, not when the malloc is first executed. To disable this behavior, do the following (as root):

echo 2 > /proc/sys/vm/overcommit_memory

Then try running your program.
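For reference, you can also inspect the current policy before changing it; a short sketch of both forms (the `sysctl` invocation is the conventional equivalent of the `echo`, and neither survives a reboot):

```shell
# Show the current policy:
# 0 = heuristic overcommit (the default), 1 = always overcommit, 2 = never overcommit
cat /proc/sys/vm/overcommit_memory

# The equivalent sysctl invocation (as root)
sysctl -w vm.overcommit_memory=2
```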

Zach Hirsch
Good one. Thanks for the info.
Hemant Kumar
Thanks for the info. I did that, but the memory usage is still only 1-2%.
Mark
You need to write to a page, otherwise it won't be mapped to physical memory.
drhirsch
+1  A: 

Linux uses, by default, what I like to call "opportunistic allocation". This is based on the observation that a number of real programs allocate more memory than they actually use. Linux uses this to fit a bit more stuff into memory: it only allocates a memory page when it is used, not when it's allocated with malloc (or mmap or sbrk).

You may have more success if you do something like this inside your loop:

memset(malloc(1024*1024L), 'w', 1024*1024L);
Lars Wirzenius
Thanks. I got a segfault. I understand that malloc fails and memset then tries to access memory that wasn't allocated, hence the segfault.
Mark
A: 

I was bored once and did this. It ate up all the memory, and I needed to force a reboot to get the machine working again.

#include <stdlib.h>
#include <unistd.h>

int main(int argc, char** argv)
{
    while(1)
    {
        malloc(1024 * 4);
        fork();
    }
}
phantombrain
Why is it that if I run the program mentioned above the machine hangs, but if I use the memory allocated by malloc, the program segfaults?
Mark
It's likely that this is not exhausting memory, since it's not doing anything with the memory it's allocating (see above comments on optimistic memory allocation/use), but rather acting as a normal fork-bomb and exhausting the system's ability to context-switch.
alesplin
+5  A: 

You should write to the allocated blocks. If you just ask for memory, Linux might just hand out a reservation for it, but nothing will actually be allocated until the memory is accessed.

#include <stdlib.h>
#include <string.h>

int main()
{
        while(1)
        {
                void *m = malloc(1024*1024);
                memset(m,0,1024*1024);
        }
        return 0;
}

You really only need to write 1 byte per page (normally 4096 bytes on x86), though.

nos
Thanks. The free memory on the system decreases to a certain extent; after that, the program crashes (segfault).
Mark
@Mark: That is very probably because malloc() returns NULL as it fails to allocate more memory, and you can't write to NULL.
unwind
In other words, given http://www.youtube.com/watch?v=A7uvttu8ct0, your program is Jerry, and Linux is the woman at the car rental service.
jhs
+1  A: 

Have a look at this program. When there is no longer enough memory, malloc starts returning NULL:

#include <stdlib.h>
#include <stdio.h>

int main()
{
  while(1)
  {
    printf("malloc %p\n", malloc(1024*1024));
  }
  return 0;
}
gnibbler
A: 

On a 32-bit Linux system, the maximum a single process can allocate in its address space is approximately 3 GB.

This means that it is unlikely that you'll exhaust the memory with a single process.

On the other hand, on a 64-bit machine you can allocate as much as you like.

As others have noted, it is also necessary to initialise the memory otherwise it does not actually consume pages.

malloc will start returning NULL if EITHER the OS has no virtual memory left OR the process is out of address space (or has too little left to satisfy the requested allocation).

Linux's VM overcommit also affects exactly when this is and what happens, as others have noted.

MarkR
A: 

A little-known fact (though it is well documented): as root, you can prevent the OOM killer from claiming your process (or any other process) as one of its victims. Here is a snippet taken directly out of my editor, where, based on configuration data, I lock all allocated memory to avoid being paged out and (optionally) tell the OOM killer not to bother me:

static int set_priority(nex_payload_t *p)
{
    struct sched_param sched;
    int maxpri, minpri;
    FILE *fp;
    int no_oom = -17;

    if (p->cfg.lock_memory)
        mlockall(MCL_CURRENT | MCL_FUTURE);

    if (p->cfg.prevent_oom) {
        fp = fopen("/proc/self/oom_adj", "w");
        if (fp) {
            /* Don't OOM me, Bro! */
            fprintf(fp, "%d", no_oom);
            fclose(fp);
        }
    }

I'm not showing what I'm doing with scheduler parameters, as it's not relevant to the question.

This will prevent the OOM killer from getting your process before it has a chance to produce the (in this case) desired effect. You will also, in effect, force most other processes to disk.

So, in short, to see fireworks really quickly...

  1. Tell the OOM killer not to bother you
  2. Lock your memory
  3. Allocate and initialize (zero out) blocks in a never ending loop, or until malloc() fails

Be sure to look at ulimit as well, and run your tests as root.

The code I showed is part of a daemon that simply cannot fail; it runs at a very high weight (selectively using the RR or FIFO scheduler) and cannot (ever) be paged out.

Tim Post