views: 2059

answers: 6

My application allocates a large amount of memory (millions of small objects totaling several gigabytes) and holds onto it for a long time.

  1. Is .NET wasting time checking through all of this data to do GC on it?
  2. How often does the Gen 2 GC occur (the one that checks all objects)?
  3. Is there any way to reduce its frequency or temporarily suppress it from occurring?
  4. I know exactly when I am ready for a large amount of memory to be collected. Is there any way to optimize for that? I am currently calling GC.Collect(); GC.WaitForPendingFinalizers(); at that time (see the sketch just below).
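
A minimal sketch of that forced-collection sequence, assuming .NET 2.0 or later; the second Collect() call is a commonly suggested addition rather than something from the question itself:

    using System;

    static class MemoryRelease
    {
        // Force a blocking collection of all generations at a point where
        // a large amount of memory is known to be unreachable.
        public static void ForceFullCollection()
        {
            GC.Collect();                  // collect all generations
            GC.WaitForPendingFinalizers(); // let queued finalizers run
            GC.Collect();                  // reclaim objects those finalizers released
        }
    }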

Update: Perf counter "% Time in GC" is showing an average of 10.6%.

+13  A: 

Unless you can confirm that the garbage collector is actively slowing the performance of your application, you should not take steps to cripple the functionality of your runtime environment.

Judging from your question, you have not confirmed that the GC is a problem. I severely doubt that it is.

Optimize only what needs to be optimized.

Welbog
Perf counter "% Time in GC" is showing an average of 10.6%.
Fantius
If your application churns through a lot of objects, it's expected that you're going to have higher GC time. If you wrote the same code in C++, you would spend nearly as much time allocating, zeroing, and freeing chunks of memory anyway, perhaps more, depending on the efficiency of your memory manager.
jrista
So given that I am allocating millions of instances of the exact same class and then releasing a large portion of them all at once, is there a better pattern to follow than the typical new-and-discard?
Fantius
You could change the flow of your application so that objects are either destroyed very quickly (in a generation 0 collection) or live for the entire application lifetime and so are never collected at all.
No More Hacks
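
One way to read that advice is to pool and reuse the long-lived instances instead of repeatedly allocating and discarding them. The ObjectPool<T> below is a hypothetical sketch (not from the thread, and not thread-safe):

    using System.Collections.Generic;

    // Pooled instances stay alive for the whole application lifetime, so the
    // GC never has to repeatedly reclaim them; anything that never reaches
    // the pool should die young in a cheap gen 0 collection.
    public sealed class ObjectPool<T> where T : class, new()
    {
        private readonly Stack<T> _items = new Stack<T>();

        public T Get()
        {
            // Reuse a pooled instance if one is available, otherwise allocate.
            return _items.Count > 0 ? _items.Pop() : new T();
        }

        public void Return(T item)
        {
            // Hand the instance back instead of letting it become garbage.
            _items.Push(item);
        }
    }
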
+2  A: 

It will only (usually) happen when the GC needs some gen2 memory anyway (because gen1 is full). Are you asking this speculatively, or do you actually have a problem with GC taking a large proportion of your execution time? If you don't have a problem, I suggest you don't worry about it for the moment - but keep an eye on it with performance monitors.

Jon Skeet
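
If you want to keep an eye on this from inside the process as well as with the performance counters, GC.CollectionCount reports how many collections of each generation have happened since the process started; a small illustrative sketch:

    using System;

    static class GcStats
    {
        // Print the gen 0, gen 1 and gen 2 collection counts so far;
        // calling this periodically shows how often gen 2 actually runs.
        public static void Dump()
        {
            Console.WriteLine("Gen0: {0}  Gen1: {1}  Gen2: {2}",
                GC.CollectionCount(0),
                GC.CollectionCount(1),
                GC.CollectionCount(2));
        }
    }
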
Am I incorrect in thinking that the GC is checking whether the 'pointers' are in use (or how often they are used), and is completely unconcerned with the size of each individual object?
As far as I know, yes. On the other hand, if you have fewer large objects, they will take less time to mark/sweep than millions of small ones. I would try to avoid letting this influence your design though.
Jon Skeet
It shouldn't influence your design unless you have a 50/50 decision, in which case you would want to choose the option that is easier to optimize, assuming readability and other factors are equal. It's always better to have more information, in my opinion.
Also, don't forget that really large objects use the (aptly named) large object heap, which has a different collection policy than the regular generational heaps.
Eric Lippert
Ooh yes, had forgotten that part totally :) However remember that this is for really large *individual objects* - not one object which is the sole referencer of lots of objects etc. In my experience only arrays and strings end up on the large object heap - those are the ones to look out for.
Jon Skeet
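
A small illustration of that boundary: objects of 85,000 bytes or more are allocated on the large object heap, and the LOH is logically part of generation 2, so GC.GetGeneration reports such objects as gen 2 straight away. A hedged sketch:

    using System;

    class LohDemo
    {
        static void Main()
        {
            byte[] small = new byte[1000];    // ordinary gen 0 allocation
            byte[] large = new byte[100000];  // >= 85,000 bytes: large object heap

            Console.WriteLine(GC.GetGeneration(small)); // typically 0
            Console.WriteLine(GC.GetGeneration(large)); // typically 2 (LOH)
        }
    }
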
+4  A: 

Look at the System.Runtime.GCSettings.LatencyMode property.

Enabling server GC (the gcServer setting) in the app.config will also help cut down on GCs (in my case, roughly ten times fewer collections when enabled).

leppie
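
A minimal sketch of both suggestions; the try/finally scoping is an assumption about how you would apply LowLatency around a critical region (GCSettings.LatencyMode is available from .NET 3.5 onwards), and server GC is switched on in config rather than in code:

    using System;
    using System.Runtime;

    static class LowLatencyScope
    {
        // Run the given work with the GC discouraged from performing blocking
        // gen 2 collections, then restore the previous latency mode.
        public static void Run(Action work)
        {
            GCLatencyMode previous = GCSettings.LatencyMode;
            try
            {
                GCSettings.LatencyMode = GCLatencyMode.LowLatency;
                work();
            }
            finally
            {
                GCSettings.LatencyMode = previous;
            }
        }
    }

    // Server GC, roughly, in app.config:
    // <configuration>
    //   <runtime>
    //     <gcServer enabled="true"/>
    //   </runtime>
    // </configuration>
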
I set that to LowLatency but I did not observe an improvement.
Fantius
+3  A: 

You can stop the garbage collector from finalizing any of your objects using the static method:

GC.SuppressFinalize(yourObject)

More information here: link text

Ken Pespisa
That would invite resource leaks and it doesn't address the question.
Henk Holterman
Yes, that invites resource leaks, but it does in fact address the question somewhat if you know how finalize actually works.
Joshua
I answered the original question - it's changed since. Originally he was asking how to stop the GC from checking his large objects. Whether or not that is a good idea is a different matter.
Ken Pespisa
In reading about this function, it doesn't sound like it stops GC from checking to see if the object is ready to be collected. Instead it sounds like it suppresses the call to Finalize when the object is being collected. Very different.
Fantius
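
For context, the place GC.SuppressFinalize normally appears is the standard dispose pattern; as noted above, it only skips the Finalize call, and the object is still traced and collected as usual. A generic sketch:

    using System;

    public class ResourceHolder : IDisposable
    {
        private bool _disposed;

        public void Dispose()
        {
            Dispose(true);
            // Skip the finalization queue; the object is still collected normally.
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (_disposed) return;
            if (disposing)
            {
                // release managed resources here
            }
            // release unmanaged resources here
            _disposed = true;
        }

        ~ResourceHolder()
        {
            // Runs only if Dispose was never called.
            Dispose(false);
        }
    }
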
+4  A: 

You can measure this using Performance Monitor. Open perfmon and add the .NET CLR Memory performance counters. These counters are process-specific, and with them you can track the number of collections, the sizes of the various generations and, most relevantly for you, "% Time in GC". Here is the explain text for this counter:

% Time in GC is the percentage of elapsed time that was spent in performing a garbage collection (GC) since the last GC cycle. This counter is usually an indicator of the work done by the Garbage Collector on behalf of the application to collect and compact memory. This counter is updated only at the end of every GC and the counter value reflects the last observed value; its not an average.

If you watch these counters while running your program, you should have an answer about the frequency and cost of the GC resulting from your memory decisions.

Here is a good discussion of the various GC Performance Counters. It seems that 10% is borderline okay.

Jack Bolding
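
The same counter can also be sampled from inside the process; the instance name used below (the process name) is the usual convention for the .NET CLR Memory category, but treat this as a sketch rather than a definitive recipe:

    using System;
    using System.Diagnostics;

    class GcTimeSampler
    {
        static void Main()
        {
            string instance = Process.GetCurrentProcess().ProcessName;

            // The same counter perfmon shows under ".NET CLR Memory".
            using (var gcTime = new PerformanceCounter(
                ".NET CLR Memory", "% Time in GC", instance))
            {
                // The first sample can read as 0; in real use, sample periodically.
                Console.WriteLine("% Time in GC: {0:F1}", gcTime.NextValue());
            }
        }
    }
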
A: 

My ASP.NET application - a B2B system - used to start at 35-40 MB when the first user hit it. After a few minutes, it would grow to 180 MB with only 2 or 3 users hitting pages. After reading the .NET development best practices and GC performance guidelines, I found out that the problem was my application design. I did not agree at first.

I was horrified at how easily we can make mistakes. I gave up many features and started to lighten up some objects. Meaning:

  1. Avoid mixing so many pages with intelligent, communicative user controls (the ones with a lot of functionality, which in effect exist once for each page that uses the control).

  2. Stop putting universal functionality in base classes. Sometimes it is preferable to repeat yourself. Inheritance has a cost.

  3. For some complex functionality I put everything in the same function. Yes, reaching 100 lines or more. When I read this recommendation in the .NET performance guidance I did not believe it, but it works. Deep call stacks are a problem, using class properties instead of local variables is a problem, and class-level variables can be hell…

  4. Stop using complex base classes; no base class with more than 7 lines should exist. If you spread bigger base classes across the entire framework, you'll have problems.

  5. I started to use more static objects and functionality. I saw an application that another guy designed where all the data-access methods (insert, update, delete, select) were static. Even with many concurrent users, that application never went above 45 MB.

  6. To save some projects, I like the steady state pattern. I learned it in the real world, but the author Nygard also agrees with me in his book Release It! - Design and Deploy Production-Ready Software, where he calls this approach the steady state pattern. This pattern says we may need something to free up idle resources.

  7. You may want to play with the <processModel> element in the machine.config file. With the memoryLimit attribute you indicate the percentage of memory that can be reached before a process recycles.

  8. You may also want to play with the <gcServer> element in the config file. With this setting you dictate the GC behaviour on the machine (workstation GC vs. server GC). This option can dramatically change memory consumption behaviour too (a quick runtime check is sketched below).

I had a lot of success when I started to care about these items. Hope this helps.

Eduardo Xavier
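
As a quick sanity check for item 8, the runtime can report which GC flavour is actually in effect; the GCSettings.IsServerGC property is the relevant API. A minimal sketch:

    using System;
    using System.Runtime;

    class GcModeCheck
    {
        static void Main()
        {
            // True when the server GC (one heap per CPU, dedicated GC threads)
            // is in effect; false for the workstation GC.
            Console.WriteLine("Server GC: " + GCSettings.IsServerGC);
        }
    }
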
My application is not about concurrent users. And I don't think I'm using any inheritance. There are no user controls. Anytime something can be static, I make it static. My process never recycles.
Fantius