views:

357

answers:

6
int *p;
while(true)
{
 p = new int;
}

Since it keeps eating memory, shouldn't this code crash at some point? I have tried printing out the value of p, that is, the address of the memory allocated for p, and it keeps increasing, yet there is no crash.

Why is this so?

+3  A: 

You did not wait long enough.

[If you want to see how it's progressing, add some output on every 1000000 loops, or something like that. It will fail eventually.]

Steve Townsend
I actually tried outputting the value of p, that is, the address of the memory p points to, and I found that it keeps increasing.
Gunner
@Gunner - yes, I had in mind a coarser grain metric so you can see roughly how long it's taking to grow. Once you hit the pagefile, it will slow right on down. @Steve Jessop's suggestion in comments to use a process monitor is a good bet for this though.
Steve Townsend
+6  A: 

Several allocations of small chunks of memory are slower than one big allocation. For instance, it will take more time to allocate 4 bytes a million times than to allocate a million bytes 4 times.

Try to allocate bigger chunks of memory at once:

int *p;
while(true)
{
 p = new int[1024*1024];
}
Samuel_xL
That's a little misleading. Memory allocation of small objects is no slower than large objects. It just takes more iterations to run out of memory using small allocations.
Ferruccio
@Ferruccio Indeed. I'll try to rephrase
Samuel_xL
+8  A: 

Your approach is like trying to crash a car into a telephone pole down the street while driving at 1 MPH. It will happen eventually, but if you want fast results you need to up the speed a bit.

int *p;
while (true) { 
  p = new int[1024*1024*1024];
}

My answer, though, is predicated on your code using the standard throwing form of new, which throws std::bad_alloc on a failed memory allocation. There are also allocators which simply return NULL on a failed allocation. That type of allocator will never crash this code, since a failed allocation is not treated as fatal; you'd have to check the returned pointer for NULL in order to detect the error.

One other caveat: if you're running on a 64-bit system, this could take considerably longer to crash due to the increased size of the address space. Instead of the telephone pole being down the street, it's across the country.

JaredPar
I get the picture, but even as I type this comment, there is no crash.
Gunner
@Gunner, that's very odd. Are you using the standard STL throwing allocator?
JaredPar
@JaredPar Good question. I mentioned that in the comments above: if you are using the nothrow version of new, you need to check whether new returned null, and throw your own exception at that point. The nothrow version of new returns a null pointer rather than throwing an exception.
pstrjds
You may speed it up on 64-bit systems with virtual memory overcommit by writing to each byte you allocate, to make sure it isn't backed by a copy-on-write page of zeros, i.e. so you're limited by physical RAM rather than by address space.
Greg Rogers
Thanks for your replies. The program actually caused a hang, so I got more than I bargained for. Allocating a larger amount at a time did the trick. I wouldn't advise anyone to try it out on Windows XP, though: I had to reboot :).
Gunner
+8  A: 

Looks like you won't close this question until you see a crash :)

On Unix-like operating systems you can restrict the amount of virtual memory available to a process using the ulimit command. By setting the VM limit to 1 MB I was able to see the desired result in about 5 seconds:

$ ulimit -v $((1024*1024)) # set max VM available to process to 1 MB

$ ulimit -v                # check it.
1048576

$ time ./a.out             # time your executable.
terminate called after throwing an instance of 'St9bad_alloc'
  what():  std::bad_alloc
Aborted

real    0m5.502s
user    0m4.240s
sys  0m1.108s
codaddict