views:

66

answers:

5

Is there a noticeable performance penalty for allocating LARGE chunks of heap memory in every iteration of a loop? Of course, I free it at the end of each iteration.

An alternative would be to allocate once before entering the loop, reuse the same buffer in every iteration, and free it after exiting the loop. See the code below.

// allocation inside loop
for (int i = 0; i < iter_count; i++) {
    float *array = new float[size]();
    do_something(array);
    delete[] array;
}

// allocation outside loop
float *array = new float[size]();
for (int i = 0; i < iter_count; i++) {
    do_something(array);
}
delete[] array;
+2  A: 

You never actually know how big a hit it is unless you test it, but if there's no reason to allocate inside the loop, don't. Allocating lots of memory can be slow, and doing it often enough will slow down your code.
The same can be said for anything inside a loop: if it doesn't need to be there, the loop will run faster with it taken out (how much faster depends entirely on what it is, and allocating memory is more expensive than most operations), but if it makes the code better or easier to read, it can be worth leaving in.

cohensh
Yeah, Aamir should just run a test. He pretty much already wrote the test code above. No need to speculate: just try it. Science!
Erik Hermansen
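As the comment says, this is easy to measure. Below is a minimal timing sketch (my own, not from the thread) using std::chrono; do_something here is just a stand-in that touches every element so the allocation isn't optimized away:

```cpp
#include <chrono>
#include <cstddef>
#include <utility>

// Stand-in for real work: touches every element so the compiler
// cannot eliminate the allocation entirely.
inline void do_something(float *array, std::size_t size) {
    for (std::size_t i = 0; i < size; ++i)
        array[i] += 1.0f;
}

// Returns {ms with allocation inside the loop, ms with it hoisted out}.
std::pair<double, double> benchmark(std::size_t size, int iter_count) {
    using clock = std::chrono::steady_clock;
    using ms = std::chrono::duration<double, std::milli>;

    auto t0 = clock::now();
    for (int i = 0; i < iter_count; ++i) {
        float *array = new float[size]();   // allocate + zero every iteration
        do_something(array, size);
        delete[] array;
    }
    auto t1 = clock::now();

    float *array = new float[size]();       // allocate + zero once
    for (int i = 0; i < iter_count; ++i)
        do_something(array, size);
    delete[] array;
    auto t2 = clock::now();

    return { ms(t1 - t0).count(), ms(t2 - t1).count() };
}
```

Call benchmark() with a realistic size and compare the two numbers; compile with optimizations on (e.g. -O2), or the timings will be dominated by the unoptimized loop body rather than the allocator.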
A: 

The overhead depends on the "weight" of do_something(). Since it acts on a whole array, I assume it does more than a few scalar operations, so in that case you may not notice much speedup from moving the allocation/deletion outside the loop. Still, in the case shown above there is little reason not to do so.

FFox
+3  A: 

I would never do it inside the loop. Allocating memory is not a free operation, and doing it once is definitely preferable to doing it over and over again. Also, you can allocate the array without the parentheses; that skips the zero-initialization, which is fine as long as do_something() writes the array before reading it:

float *array = new float[size];
C Johnson
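To make the difference concrete (a sketch; the helper names are mine): new float[size]() value-initializes every element to zero, while new float[size] leaves the contents indeterminate, so the latter is only safe when the caller fills the array before reading it.

```cpp
#include <cstddef>

// new float[n]() value-initializes: every element starts at 0.0f.
float *make_zeroed(std::size_t n) { return new float[n](); }

// new float[n] default-initializes: the contents are indeterminate,
// so the caller must write each element before reading it.
float *make_raw(std::size_t n) { return new float[n]; }
```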
A: 

Moving operations out of loops improves performance. Allocating outside will be faster, particularly if iter_count is large.

The new operator potentially (but not always!) makes an operating system call to get more memory, which is relatively expensive. Likewise, delete potentially (but not always!) returns memory to the operating system.

In all cases, make sure that do_something() doesn't make any assumptions about the contents of the memory: allocated without the parentheses it is not initialized and can contain random data, and when reused across iterations it contains whatever the previous iteration left behind.

John
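A consequence of that last point: if the allocation is hoisted out of the loop and do_something() expects a zeroed buffer, the reset must become explicit at the top of each iteration. A sketch (process_all is my own name; do_something stands in for the real work):

```cpp
#include <algorithm>
#include <cstddef>

// When the buffer is hoisted out of the loop, any "starts at zero"
// assumption has to be restored explicitly on every pass.
void process_all(float *array, std::size_t size, int iter_count) {
    for (int i = 0; i < iter_count; ++i) {
        std::fill(array, array + size, 0.0f); // re-establish the zeroed state
        // do_something(array);               // hypothetical work on the clean buffer
    }
}
```

std::fill over an already-resident buffer is typically much cheaper than a fresh zeroed allocation, since it touches memory that is likely still in cache.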
+1  A: 
  • Even if allocation were constant time, you would pay it N times: T×N instead of T. In addition, if you initialize the chunk at all (even just zeroing it), you repeatedly thrash your cache.
  • The major performance hit of heap allocations is fragmentation, not allocation time, and it is a cumulative problem. Accumulate less.

  • There are some pathological cases. If there's a lot of short-lived allocation activity that "spans" deallocation and allocation of the chunk (e.g. running the same routine in another thread), you might frequently push the heap manager to require new memory for the big chunk (because it's currently occupied). That will really fragment your cache and increase your working set.

So there's the direct hit, which can be measured directly: how much does new/delete cost compared to do_something()? If do_something() is expensive, you might not measure much.

And there's the "heap pressure" which accumulates in a large application. The contribution to that is hard to measure, and you might hit a performance brick wall built by a dozen independent contributors, which are hard to identify after the fact.

peterchen
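One common way to sidestep both the per-iteration allocator traffic and the manual delete[] bookkeeping, not mentioned in the answers above, is a std::vector hoisted out of the loop (a sketch; run and the commented-out do_something call are mine):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Buffer lives outside the loop: one allocation, automatic cleanup.
float run(std::size_t size, int iter_count) {
    std::vector<float> array(size);                  // zero-initialized once
    for (int i = 0; i < iter_count; ++i) {
        std::fill(array.begin(), array.end(), 0.0f); // re-zero if each pass needs a clean buffer
        // do_something(array.data());               // hypothetical per-iteration work
    }
    return array.front();                            // still 0.0f, since no real work ran
}                                                    // vector frees its storage on return
```

This keeps the single-allocation performance profile while making the cleanup exception-safe: the storage is released even if the loop body throws.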