Hi,

I am writing a program that leaks memory (main memory) to test how the system behaves when main memory and swap run low. We are using the following loop, which runs periodically and leaks memory:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int arg_mem  = atoi(argv[1]);  /* megabytes to allocate per iteration */
    int arg_time = atoi(argv[2]);  /* sleep interval (assumed to come from argv[2]) */
    unsigned int *u_int_ptr;

    while (1)
    {
        u_int_ptr = (unsigned int *) malloc(arg_mem * 1024 * 1024);

        if (u_int_ptr == NULL)
            printf("\n leakyapp Daemon FAILED due to insufficient available memory....");

        sleep(arg_time);
    }
}

The loop above runs for some time and then prints the message "leakyapp Daemon FAILED due to insufficient available memory....". But when I run the command "free", I can see that running this program has no effect on either main memory or swap.

Am I doing something wrong?

A: 

What does ulimit -m -v print?

Explanation: On any server OS, you can limit the amount of resources a process can allocate to make sure that a single runaway process can't bring down the whole machine.

Aaron Digulla
It shows: max memory size (kbytes, -m) unlimited; virtual memory (kbytes, -v) unlimited
siri
In that case, my answer doesn't help :-)
Aaron Digulla
+3  A: 

There might be some sort of copy-on-write optimization. I would suggest actually writing something to the memory you are allocating.
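For example (a minimal sketch, reusing arg_mem and u_int_ptr from the question), a memset() over the whole block writes to every byte, which should force physical pages to be committed:

#include <string.h>

if (u_int_ptr != NULL)
    memset(u_int_ptr, 0xA5, arg_mem * 1024 * 1024);  /* write touches every page */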

Karl Bielefeldt
A: 

I'm guessing (based on the command line argument) that you're using a desktop/server OS and not an embedded system.

Allocating memory like this is probably not consuming much RAM. Your memory allocation might not have even succeeded - on some OSs (e.g. Linux), malloc() can return non-NULL even when you ask for more memory than is available.

Without knowing what your OS is and exactly what you're trying to test, it's difficult to suggest anything specific, but you might want to look at lower-level ways of allocating memory than malloc(), or at ways of controlling the virtual memory system. On Linux you might want to look at mlock().
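For instance, something like this rough sketch (error handling omitted; note that mlock() is subject to the RLIMIT_MEMLOCK limit, so it may require root or a raised ulimit -l):

#include <stdlib.h>
#include <sys/mman.h>

size_t len = arg_mem * 1024 * 1024;
void *buf = malloc(len);

if (buf != NULL && mlock(buf, len) == 0) {
    /* the pages are now faulted in and pinned in physical RAM */
}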

Matt Curtis
I am using SUSE Linux Enterprise edition.
siri
+7  A: 

Physical memory is not committed to your allocations until you actually write into it.

If you have a kernel version after 2.6.23, use mmap() with the MAP_POPULATE flag instead of malloc():

#include <sys/mman.h>

u_int_ptr = mmap(NULL, arg_mem * 1024 * 1024,
                 PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE,
                 -1, 0);

if (u_int_ptr == MAP_FAILED)
    /* ... */

If you have an older kernel, you'll have to touch each page in the allocation.
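In that case, a loop like this sketch (assuming the common 4096-byte page size) is enough, since one write per page faults the whole page in:

size_t len = arg_mem * 1024 * 1024;
char *p = (char *) u_int_ptr;
size_t i;

for (i = 0; i < len; i += 4096)
    p[i] = 1;  /* one write per page commits that page */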

caf
I think the "don't" in the last line should be removed:-)
Job
I changed my code accordingly. I declared an array to store all the pointers, char *a[1000];, and in the while loop: u_int_ptr = mmap(NULL, arg_mem * 1024 * 1024, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0); a[i] = (char *) u_int_ptr; *a[i] = 'A'; /* touch page */ i++; But I still face the same problem.
siri
It worked after doing memset() on the memory.
siri
A: 

I think caf already explained it. Linux is usually configured to allow overcommitting memory. You allocate huge chunks of memory, but internally nothing happens beyond a note that your process wants this huge chunk. Only when you try to write to the chunk does the kernel look for free physical memory to satisfy the access. This is a bit like flight booking: airlines usually overbook their flights, because there is always a percentage of passengers who do not show up.

You can force the memory to be committed by writing to the chunk with memset() after allocation. calloc() may work too, although an implementation that obtains pre-zeroed pages directly from the kernel will not actually touch them.

Luther Blissett
+1  A: 

What is happening is that malloc() requests arg_mem * 256 pages from the heap (assuming a 4 KB page size). The heap in turn requests the memory from the operating system. However, all that does is create entries in the page table for the newly allocated block. No actual physical RAM is allocated to the process, except what the heap needs to track the malloc request.

As soon as the process tries to access one of those pages by reading or writing, a page fault is generated because the entry in the page table is effectively a dangling pointer. The operating system will then allocate a physical page to the process. It's only then that you'll see the available physical memory go down.

Since all new pages start completely zeroed out, Linux might employ a "copy on write" strategy to optimise page allocation. i.e. it might keep a single page totally zeroed and always allocate that one when a process tries to read from a previously unused page. Only when the process tries to write to that new page would it actually allocate a completely fresh page from physical RAM. I don't know if Linux actually does this, but if it does, merely reading from a new page is not going to be enough to increase physical memory usage.

So, your best strategy is to allocate your large block of RAM and then write something at 4096-byte intervals throughout it.
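Putting that together, the fixed loop might look something like this (an untested sketch; it queries the real page size with sysconf(_SC_PAGESIZE) instead of hard-coding 4096, and assumes the sleep interval comes from argv[2]):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    size_t len  = (size_t) atoi(argv[1]) * 1024 * 1024;  /* MB per iteration */
    int    secs = atoi(argv[2]);                         /* pause between iterations */
    long   page = sysconf(_SC_PAGESIZE);

    while (1) {
        char *p = malloc(len);

        if (p == NULL) {
            printf("\n leakyapp Daemon FAILED due to insufficient available memory....");
        } else {
            size_t off;
            for (off = 0; off < len; off += page)
                p[off] = 1;  /* fault in each page so RAM is really consumed */
        }

        sleep(secs);
    }
}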

JeremyP