Although I do understand the serious implications of calling this function (or at least that's what I think), I fail to see why it has become one of those things that respectable programmers would never use, even the ones who don't know what it is for.
Let's say I'm developing an application whose memory usage varies wildly depending on what the user is doing. Its life cycle can be divided into two main stages: editing and real-time processing. During the editing stage, suppose that billions or even trillions of objects are created; some of them small and some of them not, some with finalizers and some without, and suppose their lifetimes vary from a few milliseconds to long hours. Then the user switches to the real-time stage. At this point, suppose that performance plays a fundamental role and that the slightest disruption to the program's flow could have catastrophic consequences. Object creation is reduced to the minimum possible by using object pools and the like (a sketch follows below), but then the GC chimes in unexpectedly, throws it all away, and someone dies.
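To make that concrete, here is a minimal sketch of the kind of pool I mean (the class and its members are invented for illustration): everything is preallocated during the editing stage so that the real-time stage allocates nothing.

```csharp
using System.Collections.Generic;

// Hypothetical pool: preallocate instances up front so that the
// real-time stage never triggers an allocation (and thus never
// gives the GC new work to do).
public sealed class ObjectPool<T> where T : new()
{
    private readonly Stack<T> _items;

    public ObjectPool(int capacity)
    {
        _items = new Stack<T>(capacity);
        for (int i = 0; i < capacity; i++)
            _items.Push(new T());   // allocate everything up front
    }

    // Rent hands out a pooled instance; no allocation happens here.
    public T Rent() => _items.Pop();

    // Return puts the instance back for reuse instead of letting it
    // become garbage for the GC to collect later.
    public void Return(T item) => _items.Push(item);
}
```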
The question: In this case, wouldn't it be wise to call GC.Collect() before entering the second stage?
After all, these two stages never overlap, and whatever optimization heuristics and statistics the GC gathered during the first stage would be of little use in the second...
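In code, the transition I have in mind would look something like this; this is a sketch only, where StageTransition and EnterRealTimeStage are made-up names. The second collect, with GC.WaitForPendingFinalizers() in between, is there because some of the editing-stage objects have finalizers:

```csharp
using System;

static class StageTransition
{
    // Called once, at the boundary between editing and real-time.
    public static void EnterRealTimeStage()
    {
        // Force a full collection now, while a pause is still harmless,
        // rather than letting the GC decide to run mid-processing.
        GC.Collect();

        // Some editing-stage objects had finalizers; wait for them to
        // run, then collect again so whatever they kept alive is
        // reclaimed as well.
        GC.WaitForPendingFinalizers();
        GC.Collect();

        // ...switch to the real-time processing loop here...
    }
}
```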
Note: As some of you have pointed out, .NET might not be the best platform for an application like this, but that's beyond the scope of this question. The intent is to clarify whether a GC.Collect() call can ever improve an application's overall behaviour/performance. We all agree that the circumstances under which you would do such a thing are extremely rare; but then again, the GC works by guessing, and although it guesses remarkably well most of the time, it is still guessing.
Thanks.