+2  A: 

You could try pooling and managing the large objects yourself. For example, if you often need ~500 KB arrays and the number of arrays alive at once is well understood, you could avoid ever deallocating them; that way, if you only need, say, 10 of them at a time, you incur a fixed ~5 MB memory overhead instead of troublesome long-term fragmentation.
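A rough sketch of what such a managed pool could look like (the type and member names here are made up for illustration, not taken from any library):

```csharp
using System.Collections.Generic;

// Minimal sketch of a managed buffer pool: rent fixed-size byte arrays
// and return them for reuse instead of letting them become LOH garbage.
// All names here are illustrative, not part of any framework API.
public sealed class BufferPool
{
    private readonly Stack<byte[]> _free = new Stack<byte[]>();
    private readonly int _bufferSize;

    public BufferPool(int bufferSize)
    {
        _bufferSize = bufferSize;
    }

    public byte[] Rent()
    {
        lock (_free)
        {
            // Reuse a pooled array if one is available; otherwise allocate.
            return _free.Count > 0 ? _free.Pop() : new byte[_bufferSize];
        }
    }

    public void Return(byte[] buffer)
    {
        // Only take back arrays of the expected size.
        if (buffer == null || buffer.Length != _bufferSize) return;
        lock (_free) { _free.Push(buffer); }
    }
}
```

Callers rent a buffer, use it, and return it; the same handful of 500 KB arrays then circulate for the lifetime of the app, so the LOH sees a fixed set of allocations instead of continuous churn.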

As for your three questions:

  1. This is simply not possible. Only the garbage collector decides when to finalize managed objects and release their memory. That's part of what makes them managed objects.

  2. This is possible if you manage your own heap in unsafe code and bypass the large object heap entirely. You will end up doing a lot of work and suffering a lot of inconvenience if you go down this road. I doubt that it's worth it for you.

  3. It's the size of the object, not the number of elements in the array.
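The unmanaged route from point 2 can be sketched like this (a toy illustration of bypassing the GC heaps with `Marshal.AllocHGlobal`, not a recommendation):

```csharp
using System;
using System.Runtime.InteropServices;

// Sketch of point 2: allocating a 500 KB buffer from unmanaged memory
// with Marshal.AllocHGlobal, which bypasses the GC heaps (including the
// LOH) entirely. You become responsible for freeing it yourself.
class UnmanagedBufferDemo
{
    public static void Main()
    {
        int size = 500 * 1024;
        IntPtr buffer = Marshal.AllocHGlobal(size);
        try
        {
            Marshal.WriteByte(buffer, 0, 42);         // write the first byte
            byte first = Marshal.ReadByte(buffer, 0); // read it back
            Console.WriteLine(first);                 // prints 42
        }
        finally
        {
            // Deterministic release, no GC involved; forget this and you leak.
            Marshal.FreeHGlobal(buffer);
        }
    }
}
```

This is exactly the "lot of work and inconvenience" mentioned above: every allocation needs a matching free, and the data is no longer a `byte[]` you can hand to managed APIs without copying or pinning.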

Remember, fragmentation only happens when objects are freed, not when they're allocated. If fragmentation is indeed your problem, reusing the large objects will help. Focus on creating less garbage (especially large garbage) over the lifetime of the app instead of trying to deal with the nuts and bolts of the gc implementation directly.

blucz
Good idea, but understand that this is an enterprise-level app. Please understand the magnitude and size of the application; I cannot make such huge design changes. Any other ideas about optimization? :)
Nayan
Besides, without using unmanaged code, how can you use a memory pool in managed code? Any good reads?
Nayan
"Magnitude and size" is of course the root of the problem. The only magic available here is a 64-bit operating system. Two hundred bucks to solve your problem.
Hans Passant
lol.. good advice Hans! :) But I doubt my company will agree to it. This is a normal problem with such widely popular, poorly designed enterprise products =D
Nayan
When I suggested that you pool and manage your objects, I was suggesting that you reuse managed objects... you don't need to involve unmanaged code to hang onto a list of arrays and reuse them.
blucz
Ok blucz, but do you have any good examples of implementing such things, like a managed memory pool? Any good documents? Thanks!
Nayan
+2  A: 

Nayan, here are the answers to your questions, and some additional advice.

  1. You cannot free them; you can only make them easier for the GC to collect. It seems you already know the way: the key is reducing the number of references to the object.
  2. Fragmentation is one more thing you cannot control directly. But there are several factors which can influence it:
    • LOH external fragmentation is less dangerous than Gen2 external fragmentation, because the LOH is not compacted; its free slots can simply be reused instead.
    • If the 500 KB byte arrays you are referring to are used as IO buffers (e.g. passed to some socket-based API or unmanaged code), there is a high chance they will get pinned. A pinned object cannot be moved by the GC, and pinned objects are one of the most frequent causes of heap fragmentation.
    • 85K is the limit on an object's size, not on the number of elements. But remember, a System.Array instance is an object too, so all your 500K byte[] arrays end up in the LOH.
    • All the counters in your post can give a hint about changes in memory consumption, but in your case I would select BIAH (Bytes in all Heaps) and LOH size as the primary indicators. BIAH shows the total size of all managed heaps (Gen1 + Gen2 + LOH, to be precise; no Gen0 - but who cares about Gen0, right? :) ), and the LOH is the heap where all the large byte[] arrays are placed.

Advice:

  • Something that has already been proposed: pre-allocate and pool your buffers.

  • A different approach, which can be effective if you can use any collection instead of a contiguous array of bytes (this is not the case if the buffers are used in IO): implement a custom collection which is internally composed of many smaller arrays. This is similar to std::deque from the C++ STL. Since each individual array is smaller than 85K, the whole collection won't end up in the LOH. The advantage of this approach is the following: the LOH is only collected when a full GC happens. If the byte[] arrays in your application are not long-lived and (were they smaller) would be collected in Gen0 or Gen1, this makes memory management much easier for the GC, since a Gen2 collection is much more heavyweight.

  • Advice on the testing & monitoring approach: in my experience, GC behavior, memory footprint and other memory-related metrics need to be monitored for quite a long time to get valid, stable data. So each time you change something in the code, run a long enough test while monitoring the memory performance counters to see the impact of the change.

  • I would also recommend taking a look at the % Time in GC counter, as it can be a good indicator of the effectiveness of memory management. The larger this value is, the more time your application spends on GC routines instead of processing user requests or doing other 'useful' operations. I cannot say which absolute values of this counter indicate an issue, but I can share my experience for reference: for the application I am working on, we usually treat % Time in GC higher than 20% as an issue.
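The deque-like collection suggested above could be sketched like this (the chunk size and class name are illustrative assumptions, not from any library):

```csharp
using System;
using System.Collections.Generic;

// Sketch of the std::deque-like idea: a byte collection backed by many
// small chunks, each well under the 85,000-byte LOH threshold, so none
// of the backing arrays lands on the LOH. Names are illustrative.
public sealed class ChunkedByteList
{
    private const int ChunkSize = 64 * 1024; // 64 KB, safely below the ~85 KB LOH limit
    private readonly List<byte[]> _chunks = new List<byte[]>();

    public int Count { get; private set; }

    public void Add(byte value)
    {
        int offset = Count % ChunkSize;
        if (offset == 0)
            _chunks.Add(new byte[ChunkSize]); // grow by one small, non-LOH chunk
        _chunks[_chunks.Count - 1][offset] = value;
        Count++;
    }

    public byte this[int index]
    {
        get
        {
            if (index < 0 || index >= Count)
                throw new ArgumentOutOfRangeException("index");
            // Map the logical index onto (chunk, offset-within-chunk).
            return _chunks[index / ChunkSize][index % ChunkSize];
        }
    }
}
```

The trade-off is an extra division per access and the loss of a contiguous buffer, which is why this doesn't fit IO scenarios that demand a single byte[].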

Also, it would be useful if you shared the values of some memory-related perf counters for your application: Private Bytes and Working Set of the process, BIAH, Total Committed Bytes, LOH size, Gen0, Gen1, Gen2 sizes, # of Gen0, Gen1, Gen2 collections, and % Time in GC. This would help in understanding your issue better.

Alexey Nedilko
Alexey, you have really helped me! This is one of the rare moments when someone honestly answered and tried to help, not unnecessarily preached. Thank you so much for the tips. And, I will come back with some counter statistics to share the exact nature of the problem! Regards!
Nayan
Please clarify... you said, "The free slots of LOH can be reused instead" - reused by the GC or by the programmer?
Nayan
Can you answer this (it puzzles me) - is a higher BIAH number after each test run better or worse?
Nayan
- It's the GC that can reuse free space inside the LOH to allocate a new object. Some more LOH reading: http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
Alexey Nedilko
- For BIAH size, there's no general answer to whether smaller is better (each application has its own memory usage pattern), but in your case of OOM conditions, smaller heaps (and thus a smaller BIAH) should be your goal.
Alexey Nedilko
Thanks much again!
Nayan
A: 

Hello Nayan,

I'm also facing a similar issue. In my case the application runs for approximately 5 days and then the OOM suddenly occurs. I set up some perfmon counters, and the summary is as follows.

  • Private Bytes is increasing
  • # Bytes in all Heaps is increasing
  • Size of the Gen 1 heap is increasing
  • Size of the Gen 2 heap is increasing
  • Size of the LOH is constant

I'm not able to arrive at any conclusion. The counters are captured at an interval of 5 seconds. The Mem Profiler shows a lot of live instances of the String type. Could you please guide me on how to proceed further?

thanks Nagesh

Nagesh
Please see the newly edited section of my question for my tips.
Nayan