I have a C/C++ program that might be hanging when it runs out of memory. We discovered this by running many copies at the same time. I want to debug the program without completely destroying performance on the development machine. Is there a way to limit the memory available so that a `new` or `malloc` call will return a NULL pointer after, say, 500K of memory has been requested?

+8  A: 

One way is to write a wrapper around malloc().

#include <stdlib.h>

#define LIMIT (500 * 1024)   /* from the question: 500K in total */

static unsigned int requested = 0;

void* my_malloc(size_t amount)
{
    if (requested + amount < LIMIT) {
        requested += amount;
        return malloc(amount);
    }

    return NULL;
}

You could use a #define to overload your malloc, as sketched below.
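
For example, something along these lines (a rough sketch; include it after <stdlib.h> in every file whose allocations you want to intercept, but not in the file that defines my_malloc, or the wrapper would call itself):

/* my_malloc.h -- route malloc() calls through the wrapper above */
#include <stdlib.h>

void* my_malloc(size_t amount);

#define malloc(amount) my_malloc(amount)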

As GMan states, you could overload new / delete operators as well (for the C++ case).

Not sure if that's the best way, or whether it's what you are looking for.

Tom
Better would be to overload the global operator new/delete, because all allocations will have to go through those, without changing any other code.
GMan
Yes, overloading new / delete will help. Consider this a malloc overload. Editing my answer.
Tom
And you might consider making LIMIT settable at run-time, eg via an environment variable.
William Pursell
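One way to do that, as a rough sketch (MALLOC_LIMIT is just an invented name):

#include <stdlib.h>

/* Read the limit from the environment once; fall back to the 500K from the question. */
static size_t get_limit(void)
{
    const char* s = getenv("MALLOC_LIMIT");
    return s ? (size_t)strtoul(s, NULL, 10) : 500 * 1024;
}

Then run the program with something like MALLOC_LIMIT=524288 ./program.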
+6  A: 
  • Which OS? For Unix, see ulimit -d/limit datasize depending on your shell (sh/csh).

  • You can write a wrapper for malloc which returns an error in the circumstances you want. Depending on your OS, you may be able to substitute it for the implementation's version; see the sketch below.
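
On Linux with glibc, for instance, one common way to do that substitution is an LD_PRELOAD shim (a sketch only; it fails every allocation once 500K in total has been handed out, and it ignores free and thread safety):

/* limit_malloc.c -- build with: gcc -shared -fPIC -o limit_malloc.so limit_malloc.c -ldl */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>

static size_t used = 0;

void* malloc(size_t size)
{
    /* Look up the real malloc the first time through. */
    static void* (*real_malloc)(size_t) = NULL;
    if (real_malloc == NULL)
        real_malloc = (void* (*)(size_t))dlsym(RTLD_NEXT, "malloc");

    if (used + size > 500 * 1024)
        return NULL;                /* pretend we have run out of memory */
    used += size;
    return real_malloc(size);
}

Run the target as LD_PRELOAD=./limit_malloc.so ./program.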

AProgrammer
+26  A: 

Try turning the question on its head and asking how to limit the amount of memory an OS will allow your process to use.

Try looking into http://ss64.com/bash/ulimit.html

Try say: ulimit -v

Here is another link that's a little old but gives a little more background: http://www.network-theory.co.uk/docs/gccintro/gccintro_77.html
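
To see the limit take effect, a quick throwaway test like this keeps allocating until malloc gives up:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t total = 0;
    /* Grab 1 MB at a time until malloc reports failure. */
    while (malloc(1024 * 1024) != NULL)
        total += 1024 * 1024;
    printf("malloc returned NULL after about %zu MB\n", total / (1024 * 1024));
    return 0;
}

Set ulimit -v in the shell first, then run it and watch where it stops.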

chollida
This worked for me. Thanks! Specifically, I ran the program, used `ps` to get the process ID, then `cat /proc/PID/status` to get VmPeak and VmSize in kB (817756 in my case). I then ran `ulimit -v 800000` and tried again, and quickly got into an out-of-memory situation (0 returned from a malloc). I could also run it under gdb (`gdb --args ./program --arg1 --arg2`) and trace the code.
jwhitlock
Thanks for showing how you ended up using it.
chollida
+3  A: 

That depends on your platform. For example, this can be achieved programmatically on Unix-like platforms using setrlimit(RLIMIT_DATA, ...).

EDIT:

The RLIMIT_AS resource may also be useful in this case.
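
A minimal sketch of that approach (Unix-like systems, 500K as in the question; in practice you may need a somewhat larger value, since the libraries already mapped into the process count against RLIMIT_AS):

#include <stdio.h>
#include <sys/resource.h>

/* Cap the address space of the current process. */
static int limit_memory(void)
{
    struct rlimit rl;
    rl.rlim_cur = 500 * 1024;    /* soft limit */
    rl.rlim_max = 500 * 1024;    /* hard limit */

    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return -1;
    }
    return 0;
}

Call it early in main(); after that, allocations beyond the limit should fail.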

Void
Against GNU libc, RLIMIT_DATA is powerless.
Norman Ramsey
+3  A: 

Override new and new[].

#include <cstdlib>   // for malloc

void* operator new(size_t s)
{
    // your failure logic goes here; otherwise just hand the request on to malloc
    return malloc(s);
}
void* operator new[](size_t s)
{
    return malloc(s);
}

Put your own logic where the comment is to selectively die after X number of calls to new; normally you would call malloc to allocate the memory and return it, as in the fuller sketch below.
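
For instance, a sketch that makes allocations start failing after a fixed number of calls (the 1000 is arbitrary; throwing std::bad_alloc is what a failing operator new is expected to do):

#include <cstdlib>
#include <new>

static unsigned long new_calls = 0;

void* operator new(size_t s)
{
    if (++new_calls > 1000)         // arbitrary cut-off for testing
        throw std::bad_alloc();
    void* p = malloc(s);
    if (p == NULL)
        throw std::bad_alloc();
    return p;
}

void* operator new[](size_t s)
{
    return operator new(s);
}

void operator delete(void* p) { free(p); }
void operator delete[](void* p) { free(p); }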

Robert
+1  A: 

I once had a student in CS 1 (in C, yeah, yeah, not my fault) try this and run out of memory:

int array[42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42]..... (42 dimensions);

and then he wanted to know why it gave errors...

Brian Postow
No wonder... He was trying to create an array of size `1.5013093754529657235677197216425e+68`
RCIX
+1  A: 

If you want to spend money, there's a tool called Holodeck by Security Innovation, which lets you inject faults into your program (including low memory). The nice thing is you can turn stuff on and off at will. I haven't really used it much, so I don't know if it's possible to program in faults at certain points with the tool. I also don't know what platforms are supported...

atk
+1  A: 

As far as I know, on Linux, malloc will never return a null pointer. Instead, the OOM Killer will get called. This is, of course, unless you've disabled the OOM Killer. Some googling should come up with a result.

I know this isn't your actual question, but it does have to do with where you're coming from.
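
A small experiment that shows what actually happens (just a sketch; the behaviour depends on the vm.overcommit_memory setting): the mallocs below usually all succeed, and the process only dies once the memory is really written to.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    for (;;) {
        char* p = malloc(100 * 1024 * 1024);    /* 100 MB at a time */
        if (p == NULL) {                        /* rarely reached with overcommit enabled */
            puts("malloc returned NULL");
            return 1;
        }
        memset(p, 1, 100 * 1024 * 1024);        /* touching the pages is what invites the OOM killer */
    }
}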

sharth
Sometimes the OOM gets called, sometimes you get a NULL pointer: http://linuxdevcenter.com/pub/a/linux/2006/11/30/linux-out-of-memory.html?page=1
jwhitlock
+3  A: 

Another way of doing it is to use failmalloc, which is a shared library that overrides malloc etc. and then fails :-). It gives you control over when to fail, and it can be made to fail randomly, every nth time, etc.

I haven't used it myself, but I have heard good things.

Mr Shark