In my code, I perform a large number of tasks, each requiring a large array of memory to temporarily store data. I have about 500 tasks. At the beginning of each task, I allocate memory for an array:

double[] tempDoubleArray = new double[M];

M is a large number that depends on the precise task, typically around 2,000,000. Now I do some complex calculations to fill the array, and in the end I use the array to determine the result of the task. After that, tempDoubleArray goes out of scope.

Profiling reveals that the calls to construct the arrays are time-consuming. So I decide to try reusing the array by making it static. This requires some additional juggling to figure out the minimum size the array needs, which takes an extra pass through all the tasks, but it works. Now the program is much faster (from 80 sec down to 22 sec for all tasks).

double[] tempDoubleArray = staticDoubleArray;
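For concreteness, here is a minimal sketch of the reuse pattern (WorkItem, RequiredSize, and Run are simplified placeholders, not my actual task code):

using System;
using System.Collections.Generic;

// Simplified stand-in for one of the ~500 tasks.
class WorkItem
{
    public int RequiredSize;        // number of elements this task needs
    public Action<double[]> Run;    // fills and then consumes the buffer
}

class TaskRunner
{
    static double[] staticDoubleArray;

    public static void RunAll(List<WorkItem> items)
    {
        // Extra pass: find the largest buffer any task will need.
        int maxSize = 0;
        foreach (WorkItem item in items)
            maxSize = Math.Max(maxSize, item.RequiredSize);

        // Allocate once up front; every task reuses the same array.
        staticDoubleArray = new double[maxSize];

        foreach (WorkItem item in items)
        {
            double[] tempDoubleArray = staticDoubleArray;  // no per-task allocation
            item.Run(tempDoubleArray);
        }
    }
}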

However, I'm a bit in the dark about why precisely this works so well. I'd say that in the original code, when tempDoubleArray goes out of scope, it can be collected, so allocating a new array should not be that hard, right?

I ask this because understanding why it works might help me figure out other ways to achieve the same effect, and because I would like to know in what cases allocation causes performance issues.

+7  A: 

Just because something can be collected doesn't mean that it will be. In fact, if the garbage collector were that aggressive about collecting, your performance would be significantly worse.

Bear in mind that creating an array is not just creating one variable; it's creating N variables (N being the number of elements in the array), all of which must be zeroed out. Reusing arrays is a good bang-for-your-buck way of increasing performance, though you have to do so carefully.
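To make the cost concrete, here is a rough micro-benchmark in the spirit of the question (an illustrative sketch, not the asker's code; absolute timings will vary by machine and runtime):

using System;
using System.Diagnostics;

class AllocVsReuse
{
    const int Tasks = 500;
    const int M = 2000000;  // ~16 MB per double array

    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int t = 0; t < Tasks; t++)
        {
            double[] temp = new double[M];  // fresh allocation (and zeroing) per task
            temp[0] = t;                    // token work so the array is actually used
        }
        Console.WriteLine("allocate per task: " + sw.Elapsed);

        sw.Reset();
        sw.Start();
        double[] shared = new double[M];    // allocate once, reuse for every task
        for (int t = 0; t < Tasks; t++)
        {
            shared[0] = t;
        }
        Console.WriteLine("reuse one array:   " + sw.Elapsed);
    }
}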

Adam Robinson
+1  A: 

One answer could be the Large Object Heap - objects of roughly 85 KB (85,000 bytes) or larger are allocated on a separate heap, the LOH, which is collected less frequently and is not compacted.

See the section on performance implications:

  • there is an allocation cost (primarily clearing out the allocated memory);
  • and a collection cost - the LOH is collected only together with Gen2, and it is swept rather than compacted, so heavy LOH allocation triggers more of these expensive full collections (see the sketch below).
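A quick way to see the LOH in action (a small sketch; the exact threshold and the fact that GC.GetGeneration reports LOH objects as generation 2 are CLR implementation details):

using System;

class LohDemo
{
    static void Main()
    {
        double[] small = new double[1000];     // ~8 KB  - regular (small object) heap
        double[] large = new double[2000000];  // ~16 MB - large object heap

        // LOH objects are logically part of Gen2, so they report generation 2.
        Console.WriteLine(GC.GetGeneration(small));  // typically prints 0
        Console.WriteLine(GC.GetGeneration(large));  // typically prints 2
    }
}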
Gishu
A: 

It's not always easy to allocate large blocks of memory in the presence of fragmentation. I can't say for sure, but my guess is that the runtime is having to do some rearranging to get enough contiguous memory for such a big block. As for why allocating subsequent arrays isn't faster, my guess is that either the big block gets fragmented between GC time and the next allocation, or the original block was never GC'd to start with.

Hank Gay
Actually, due to the use of virtual memory, allocating contiguous blocks should not be a problem at all.
apoorv020
I'm not at all clear how virtual memory plays in. Doesn't the CLR maintain its own heap? If so, then OS-level memory paging will probably be nothing more than a source of unpredictable slowdowns.
Hank Gay
They would need to be contiguous in virtual address space, so fragmentation issues can still apply.
Cylon Cat