views: 152
answers: 4

OK, here's the deal. There are some people who put their lives in the hands of .NET's garbage collector, and some who simply won't trust it.

I am one of those who partially trusts it, as long as the code isn't extremely performance-critical (I know, I know: performance-critical plus .NET is not the favored combination), in which case I prefer to dispose of my objects and resources manually.

What I am asking is whether there are any facts about how efficient or inefficient, performance-wise, the garbage collector really is.

Please don't share any personal opinions or likely assumptions based on experience; I want unbiased facts. I also don't want any pro/con discussion, because that won't answer the question.

Thanks

Edit: To clarify, I'm basically asking: no matter what application we write, resource-critical or not, can we just forget about memory entirely and let the GC handle it, or can't we?

I'm trying to get an answer on what the GC actually does and doesn't do, and where it might fail where manual memory management would succeed, IF there are such scenarios. Does it have LIMITATIONS? I don't know how I could explain my question any further.

I don't have any issues with any particular application; it's a theoretical question.

+6  A: 

It is efficient enough for most applications, and you don't have to live in fear of the GC. But on really hot systems with low-latency requirements, you should program in a fashion that avoids triggering it entirely. I suggest you look at this Rapid Addition white paper:

Although GC is performed quite rapidly, it does take time to perform, and thus garbage collection in your continuous operating mode can introduce both undesirable latency and variation in latency in those applications which are highly sensitive to delay. As an illustration, if you are processing 100,000 messages per second and each message uses a small temporary 2-character string, around 8 bytes (this is a function of string encoding and the implementation of the string object) is allocated for each message. Thus you are creating almost 1 MB of garbage per second. For a system which may need to deliver constant performance over a 16-hour period, this means that you will have to clean up 16 hours × 60 minutes × 60 seconds × 1 MB of memory, approximately 56 GB. The best you can expect from the garbage collector is that it will clean this up entirely in Generation 0 or 1 collections and cause jitter; the worst is that it will cause a Generation 2 garbage collection with the associated larger latency spike.
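The white paper's arithmetic is easy to verify. Here is a quick sketch (in Java rather than C#, but the numbers are language-independent; the 8-bytes-per-message figure is the paper's assumption, not a measurement):

```java
// Checks the quote's garbage-rate arithmetic: 100,000 msgs/s at ~8 bytes each,
// sustained over a 16-hour operating window.
public class GcGarbageEstimate {
    public static void main(String[] args) {
        int msgsPerSec = 100_000;
        int bytesPerMsg = 8;                    // small temporary 2-char string
        double mbPerSec = msgsPerSec * (double) bytesPerMsg / (1024 * 1024);
        long seconds = 16L * 60 * 60;           // 16-hour operating window
        double totalGb = 1.0 * seconds / 1024;  // quote rounds up to 1 MB/s
        System.out.printf("%.2f MB/s of garbage; ~%.0f GB over 16 h%n",
                mbPerSec, totalGb);             // prints 0.76 MB/s of garbage; ~56 GB over 16 h
    }
}
```

So the exact rate is about 0.76 MB/s; the paper rounds that up to 1 MB/s to get its 56 GB figure.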

But be warned: pulling off tricks like avoiding GC impact is really hard. You need to ponder whether your performance requirements have truly reached the point where the GC's impact must be considered.
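The most common of those tricks is to preallocate everything and reuse it, so the steady-state loop creates no garbage at all. A minimal sketch (in Java; `BufferPool` is a hypothetical name, not a .NET or JDK API):

```java
import java.util.ArrayDeque;

// Sketch of a buffer pool: allocate up front, recycle in steady state,
// so the hot message-processing loop produces no garbage for the GC.
public class BufferPool {
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();
    private final int bufferSize;

    public BufferPool(int count, int bufferSize) {
        this.bufferSize = bufferSize;
        for (int i = 0; i < count; i++) {
            free.push(new byte[bufferSize]);   // all allocation happens up front
        }
    }

    public byte[] acquire() {
        byte[] b = free.poll();
        return (b != null) ? b : new byte[bufferSize]; // allocate only if exhausted
    }

    public void release(byte[] b) {
        free.push(b);                          // caller must stop using b now
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool(4, 64);
        byte[] buf = pool.acquire();
        pool.release(buf);
        System.out.println(pool.acquire() == buf); // prints true: same instance reused
    }
}
```

The hard part is not the pool itself but the discipline around it: every code path has to release what it acquires and never touch a buffer after releasing it, which is exactly the manual-memory-management burden the GC normally removes.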

Remus Rusanu
Thanks a ton for that information, really appreciate it.
Jonas B
That was some interesting reading and answered many of my questions, thanks again.
Jonas B
+1  A: 

Any GC algorithm will favor certain activity patterns (i.e., it is optimized for them). You will have to test the GC against your own usage pattern to see how efficient it is for you. Even if someone else studied particular behavior of the .NET GC and produced "facts" and "numbers", your results could be wildly different.

I think the only reasonable answer to this question is anecdotal. Most people don't have a problem with GC efficiency, even in large-scale situations. It is considered at least as efficient as, or more efficient than, the GCs of other managed languages. If you are still concerned, you probably should not be using a managed language.
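Testing against your own usage pattern can be as simple as snapshotting the runtime's GC counters around your workload. A rough sketch in Java (on .NET the analogous counter would be `GC.CollectionCount`); the loop is a stand-in for your real allocation pattern, and this is a probe, not a rigorous benchmark:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Snapshot the JVM's GC counters before and after a workload to see how
// much collection the workload's allocation pattern triggered.
public class GcProbe {
    static long[] gcSnapshot() {
        long count = 0, millis = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            count += gc.getCollectionCount();  // may be -1 if undefined for a collector
            millis += gc.getCollectionTime();
        }
        return new long[] { count, millis };
    }

    public static void main(String[] args) {
        long[] before = gcSnapshot();
        long sink = 0;
        for (int i = 0; i < 5_000_000; i++) {
            String s = "m" + i;                // short-lived temporary becomes garbage
            sink += s.length();                // keep the allocation observable
        }
        long[] after = gcSnapshot();
        System.out.println("GC cycles: " + (after[0] - before[0])
                + ", GC time (ms): " + (after[1] - before[1])
                + " (sink=" + sink + ")");
    }
}
```

Swap the loop body for something shaped like your application's real allocations; the cycle and time deltas are the numbers that matter for *your* pattern.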

David
Thanks for the reply. It's not really a concern; I just think it's something you should have some knowledge of, and I don't, which is why I'm asking.
Jonas B
+1  A: 

You do not need to worry about this.

The reason is that if you ever find an edge case where the GC is taking up a significant amount of time, you will then be able to deal with it by making spot optimisations. This won't be the end of the world - it will probably be pretty easy.

And you are unlikely to find such edge cases. It really performs amazingly well. If you've only experienced heap allocators in typical C and C++ implementations, the .NET GC is a completely different animal. I was so amazed by it I wrote this blog post to try and get the point across.

Daniel Earwicker
+2  A: 

You cannot always forget about memory allocation, regardless of whether you use a GC or not. What a good GC implementation buys you is that, most of the time, you can afford not to think about memory allocation. However, there is no ultimate memory allocator. For something critical, you have to be aware of how memory is managed, and this implies knowing how things are done internally. This is true for GC and for manual heap allocation alike.

There are some GCs which offer real-time guarantees. "Real-time" does not mean "fast"; it means that the allocator's response time can be bounded. This is the kind of guarantee that is needed for embedded systems such as those which drive the electrical flight controls in a plane. Strangely enough, it is easier to obtain real-time guarantees with garbage collectors than with manual allocators.

The GC in the current .NET implementations is not real-time; it is heuristically efficient and fast. Note that the same can be said about manual allocation with malloc() in C (or new in C++), so if you are after real-time guarantees you already need to use something special. If you do not, then I do not want you to design the embedded electronics for the cars and planes I use!

Thomas Pornin