views: 1622
answers: 10

I have a problem where a couple of 3-dimensional arrays allocate a huge amount of memory, and when the program needs to replace them with bigger/smaller ones it sometimes throws an OutOfMemoryException.

Example: there are five allocated 96 MB arrays (200x200x200, 12 bytes per entry) and the program needs to replace them with 210x210x210 (111 MB) arrays. It does it in a manner similar to this:

array1 = new Vector3[210,210,210];

Where array1-array5 are the same fields used previously. This should make the old arrays candidates for garbage collection, but seemingly the GC does not act quickly enough and leaves the old arrays allocated while allocating the new ones - which causes the OOM - whereas if they were freed before the new allocations there should be enough space.

What I'm looking for is a way to do something like this:

GC.Collect(array1) // this would set the reference to null and free the memory
array1 = new Vector3[210,210,210];

I'm not sure if a full garbage collection would be a good idea, since that code may (in some situations) need to be executed fairly often.

Is there a proper way of doing this?

+6  A: 

Forcing a garbage collection is not always a good idea (it can actually promote the lifetimes of objects in some situations). If you have to, you would use:

array1 = null;
GC.Collect();
array1 = new Vector3[210,210,210];
Mitch Wheat
No, you should not call GC.Collect at all. The best solution is to just remove the reference before the new array is allocated and let the garbage collector do its work without interfering.
Guffa
If the runtime cannot find enough memory for the allocation it will trigger a collection, so there is no need for the explicit call to GC.Collect.
Brian Rasmussen
John Doe did ask about *forcing* garbage collection after all.
icelava
@icelava: That's just because he thought that he needed to force the garbage collection, but if you just remove the reference it works just fine without forcing it.
Guffa
Putting in the GC.Collect() does seem to help, although it still does not solve the problem completely - for some reason the program still crashes when about 1.3 GB are allocated (I'm using System.GC.GetTotalMemory(false) to find the real amount allocated). Just nullifying the references does not help at all.
John Doe
+1  A: 

They might not be getting collected because they're being referenced somewhere you're not expecting.

As a test, try changing your references to WeakReferences instead and see if that resolves your OOM problem. If it doesn't then you're referencing them somewhere else.
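A minimal sketch of that test, assuming the non-generic WeakReference class (the variable names here are illustrative, not from the original code):

// Hold the big array only through a WeakReference, so the GC is free to reclaim it.
WeakReference arrayRef = new WeakReference(new Vector3[200, 200, 200]);

// At each point of use, take a strong reference first and check it:
Vector3[,,] data = (Vector3[,,])arrayRef.Target;
if (data != null)
{
    // use 'data'; the strong local reference keeps it alive for this scope
}
else
{
    // the array was already collected - it would have to be regenerated
}

If the OOM disappears under this scheme, something else was keeping a strong reference to the old arrays.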

Joseph
They are only used in the code that generates them and in the code that uses the data (an XNA VertexBuffer, used for rendering a volume), so I don't think anything else references them. I've never used WeakReferences before, but from that article I gather there is a chance of them getting deallocated during the rest of the execution - and that would create a serious problem for the rest of the code.
John Doe
+2  A: 

If I had to speculate, your problem is not really that you are going from a Vector3[200,200,200] to a Vector3[210,210,210], but that most likely you have gone through similar steps before this one:

 i.e.
    // first you have
    array1 = new Vector3[10,10,10];
    // then
    array1 = new Vector3[20,20,20];
    // then maybe
    array1 = new Vector3[30,30,30];
    //  .. and so on ..
    //  ...
    // then
    array1 = new Vector3[200,200,200];
    // and eventually you try
    array1 = new Vector3[210,210,210]; // and you get an OutOfMemoryException

If that is true, I would suggest a better allocation strategy. Try over-allocating - maybe doubling the size every time, as opposed to always allocating just the space that you need - especially if these arrays are ever used by objects that need to pin the buffers (i.e. if they have ties to native code).

So, instead of the above, have something like this:

 // first start with an arbitrary size
 array1 = new Vector3[64,64,64];
 // then double that
 array1 = new Vector3[128,128,128];
 // and then.. so in three steps you go to where otherwise
 // it would have taken you 20..
 array1 = new Vector3[256,256,256];
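In code, the same idea as a hedged sketch (EnsureCapacity and grid are illustrative names, not from the original post):

static Vector3[,,] grid = new Vector3[64, 64, 64];

// Grow only when the requested size exceeds the current capacity,
// and then jump straight to the next power of two.
static void EnsureCapacity(int size)
{
    int capacity = grid.GetLength(0);
    if (size <= capacity)
        return;

    while (capacity < size)
        capacity *= 2; // double instead of growing in small steps

    grid = null; // drop the old array before the large allocation
    grid = new Vector3[capacity, capacity, capacity];
}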
Miky Dinescu
Note that some of the built in collection classes will do this for you.
Brian
I agree, constantly reallocating arrays like that is just begging for OOM errors.
Christian Hayter
Actually the "resizing" of the arrays is triggered by user input and is not predictable (although I'm manually increasing it in steps to test). Overallocating is not an option; that would in fact make it crash earlier, because the total size increases in a cubic manner.
John Doe
A: 

An allocation that cannot be satisfied automatically triggers a GC cycle, and the allocation is attempted again before the OutOfMemoryException is actually thrown to your code. So the only way you can be getting OutOfMemory exceptions is if you're holding references to too much memory. Clear the references as soon as you can by assigning them null.
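For the five arrays from the question, that means dropping all of the old references before the first reallocation, rather than replacing them one at a time - a sketch:

// Clear every old reference first, so the collection triggered by a
// failed allocation can reclaim all five old arrays at once.
array1 = null; array2 = null; array3 = null; array4 = null; array5 = null;

array1 = new Vector3[210, 210, 210];
array2 = new Vector3[210, 210, 210];
// ... and likewise for array3 through array5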

+7  A: 

This is not an exact answer to the original question, "how to force GC", but I think it will help you reexamine your issue.

After seeing your comment,

  • Putting in the GC.Collect() does seem to help, although it still does not solve the problem completely - for some reason the program still crashes when about 1.3 GB are allocated (I'm using System.GC.GetTotalMemory(false) to find the real amount allocated).

I suspect you may have memory fragmentation. If an object is large (85,000 bytes under the .NET 2.0 CLR if I remember correctly; I do not know whether that has changed), it is allocated in a special heap, the Large Object Heap (LOH). The GC does reclaim the memory used by unreachable objects in the LOH, yet it does not compact the LOH the way it does the other heaps (gen0, gen1, and gen2), for performance reasons.

If you frequently allocate and deallocate large objects, the LOH will become fragmented, and even though you have more free memory in total than you need, you may no longer have a contiguous block of it - hence the OutOfMemory exception.

I can think of two workarounds at the moment.

  1. Move to a 64-bit machine/OS and take advantage of it :) (easiest, but possibly hardest as well, depending on your resource constraints).
  2. If you cannot do #1, then try allocating one huge chunk of memory up front and reusing it (this may require writing a helper class that presents a smaller logical array, which in fact resides in a larger one - see the sketch after this list). This may help a little, yet it may not completely solve the issue, and you may have to deal with the complexity.
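A minimal sketch of such a helper class, under the assumption that one backing array is allocated once at the maximum size and reused (Grid3D is a hypothetical name):

// Wraps a single long-lived Vector3[] and exposes a logical 3D view of
// its first size*size*size elements, so changing the logical size never
// allocates on the LOH again.
public sealed class Grid3D
{
    private readonly Vector3[] buffer; // allocated once, reused forever
    private int size;

    public Grid3D(int maxSize)
    {
        buffer = new Vector3[maxSize * maxSize * maxSize];
        size = maxSize;
    }

    public void Resize(int newSize)
    {
        if ((long)newSize * newSize * newSize > buffer.Length)
            throw new ArgumentOutOfRangeException("newSize");
        size = newSize; // reinterpret a prefix of the same buffer - no allocation
    }

    public Vector3 this[int x, int y, int z]
    {
        get { return buffer[(x * size + y) * size + z]; }
        set { buffer[(x * size + y) * size + z] = value; }
    }
}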
Chansik Im
Or switch to using a jagged array - Vector3[210][210][210], which would split up the allocations and not use one great big block of memory
thecoop
Seems like the problem is indeed caused by this fragmentation issue. Unfortunately I can't port the program to 64-bit because XNA only supports 32-bit. Allocating a big array on startup would probably not be worth the extra memory in the small cases (under 100x100x100) and the extra work to get it working well. I did the math, and even if I could use the full 2 GB the improvement in maximum volume size would not be big enough (from about 200^3 to 300^3) to go through the trouble of finding/implementing a hack to make it work, at least not for now. Thanks to all who tried to help!
John Doe
+3  A: 

Isn't this just large object heap fragmentation? Objects > 85,000 bytes are allocated on the large object heap. The GC frees up space in this heap but never compacts the remaining objects. This can result in insufficient contiguous memory to successfully allocate a large object.

Alan.

A: 

Part of the problem may be that you're allocating a multidimensional array, which is represented as a single contiguous block of memory on the large object heap (more details here). This can block other allocations, as there isn't a free contiguous block to use even if there is still some free space in total - hence the OOM.

Try allocating it as a jagged array - Vector3[210][210][210] - which spreads the arrays around memory rather than in a single block, and see if that improves matters.
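A sketch of that allocation - note that each innermost Vector3[210] is only 210 * 12 = 2,520 bytes, so none of these pieces lands on the LOH:

Vector3[][][] array1 = new Vector3[210][][];
for (int x = 0; x < 210; x++)
{
    array1[x] = new Vector3[210][];
    for (int y = 0; y < 210; y++)
    {
        array1[x][y] = new Vector3[210]; // small object, normal GC heap
    }
}

Element access then becomes array1[x][y][z] instead of array1[x, y, z].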

thecoop
A: 

I understand what you're trying to do and pushing for immediate garbage collection is probably not the right approach (since the GC is subtle in its ways and quick to anger).

That said, if you want that functionality, why not create it?

// Generic rather than 'ref object', because a ref argument's type must
// match the parameter exactly - this version can be called with a typed
// field such as Vector3[,,].
public static void Collect<T>(ref T o) where T : class
{
    o = null;     // clears the caller's variable through the ref parameter
    GC.Collect(); // then ask for a collection
}
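For the question's scenario the call would then look something like this (using the asker's array1 field):

Collect(ref array1);                 // clears the field and runs a collection
array1 = new Vector3[210, 210, 210]; // reallocate once the old array is unreachable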
plinth
That doesn't do a thing. You are making a copy of the reference to object o point to null, which does nothing to the original object o.
devoured elysium
You understand that if I side-effect a ref variable, it changes the caller's copy too, right?
plinth
Have a +1 on me, this code will indeed set the original variable to point to null since it is modifying the original variable's pointer by reference.
Christian Hayter
+3  A: 

Seems you've run into the LOH (Large Object Heap) fragmentation issue.

Large Object Heap

CLR Inside Out Large Object Heap Uncovered

You can check whether you're having LOH fragmentation issues using SOS.

Check this question for an example of how to use SOS to inspect the LOH.
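For example, in WinDbg with SOS loaded, a session along these lines (commands as of the .NET 2.0 workstation CLR) will show the state of the LOH:

.loadby sos mscorwks
!eeheap -gc
!dumpheap -stat
!dumpheap -min 85000

!eeheap -gc lists the GC segments including the large object heap, and !dumpheap -min 85000 shows only LOH-sized objects; many Free blocks interleaved between live arrays is the signature of fragmentation.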

Pop Catalin
A: 

John, creating objects > 85,000 bytes will make the object end up on the large object heap. The large object heap is never compacted; instead, the free space is reused. This means that if you are allocating larger arrays every time, you can end up in situations where the LOH is fragmented, hence the OOM.

You can verify this is the case by breaking in the debugger at the point of the OOM and getting a dump; submitting this dump to MS through a Connect bug (http://connect.microsoft.com) would be a great start.

What I can assure you of is that the GC will do the right thing trying to satisfy your allocation request; this includes kicking off a GC to clean up old garbage in order to satisfy new allocation requests.

I don't know what the policy on sharing memory dumps on Stack Overflow is, but I would be happy to take a look to understand your problem better.

mfawzymkh