To do this (and I've had to, with frequent >50 MB allocations), call:

myObj = null;                    // drop the last reference to the large object
GC.Collect();                    // force a full collection now
GC.WaitForPendingFinalizers();   // block until finalizers have run
I've noticed that the memory footprint of the app then drops considerably. In theory, you shouldn't need to do this. In practice, though, on a 32-bit Windows OS you might only have two contiguous free blocks of more than 300 MB at any given time, and having that space taken up by lots of little allocations, or by a series of big ones, can mean that other large allocations fail unnecessarily. The garbage collector runs in the background when it can, but if you absolutely must make a large allocation right now, those three lines help make that possible for me.
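For what it's worth, here's a hedged sketch of how I wrap this in practice: attempt the allocation, and if it fails, force a collection and retry exactly once. The class and method names, the retry policy, and the sizes are mine, invented for illustration, not from any framework:

using System;

static class BigAlloc
{
    // Try a large allocation; on failure, force a full collection and retry once.
    public static byte[] Allocate(int megabytes)
    {
        int bytes = megabytes * 1024 * 1024;
        try
        {
            return new byte[bytes];
        }
        catch (OutOfMemoryException)
        {
            // The first attempt failed, most likely because the address
            // space is fragmented. Force a collection, let finalizers
            // release what they hold, then try exactly once more.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            return new byte[bytes]; // if this throws, memory is genuinely exhausted
        }
    }
}

byte[] buffer = BigAlloc.Allocate(50); // ~50 MB, the size range I deal with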
EDIT: For the downvoters, this is what I put in the comments.
If you read Rico Mariani's entire post about garbage collection, you'll note that large, infrequent, unpredictable memory allocations fall under his Rule #2. To wit:
Rule #2

Consider calling GC.Collect() if some non-recurring event has just happened and this event is highly likely to have caused a lot of old objects to die.

A classic example of this is if you're writing a client application and you display a very large and complicated form that has a lot of data associated with it. Your user has just interacted with this form potentially creating some large objects... things like XML documents, or a large DataSet or two. When the form closes these objects are dead and so GC.Collect() will reclaim the memory associated with them.

Now why would I suggest this as a possible time to call the collector? I mean, my usual advice goes something like "the collector is self-tuning so don't mess with it." Why the change of attitude you might ask?

Well here is a situation where the collector's tendancy[sic] to try to predict the future based on the past is likely to be unsuccessful.
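To make Rule #2 concrete, here's a minimal sketch of the form-close scenario Rico describes. The form name and its DataSet field are invented for illustration; the point is that the collection happens once, tied to the non-recurring event:

using System;
using System.Data;
using System.Windows.Forms;

// Hypothetical form that owns a large, short-lived object graph.
class ReportForm : Form
{
    DataSet reportData = new DataSet(); // potentially tens of MB once populated

    protected override void OnFormClosed(FormClosedEventArgs e)
    {
        base.OnFormClosed(e);

        reportData.Dispose(); // release the big data the form owned
        reportData = null;

        // The non-recurring event (closing the form) has just caused a lot
        // of old objects to die, which is exactly Rico's Rule #2 scenario.
        GC.Collect();
        GC.WaitForPendingFinalizers();
    }
}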
If you make large allocations in your game, you need to be careful about how memory is handled. The garbage collector predicts the future based on past behavior, and large blocks of memory on a 32-bit machine can be devastating to future allocations if they fragment the address space. If you haven't done this yourself, don't automatically assume that I'm wrong; if you have, I'd welcome an explanation of how to do it properly (i.e., how to keep memory from fragmenting so that I can always allocate 50-100 MB at a given time).
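Short of full control over the address space, the closest thing to a supported pre-flight check I know of is System.Runtime.MemoryFailPoint, which asks the runtime whether a large allocation is likely to succeed before you attempt it. A minimal sketch, with the 100 MB figure chosen only as an example:

using System;
using System.Runtime;

class Program
{
    static void Main()
    {
        try
        {
            // Ask the runtime up front whether ~100 MB is likely to be
            // available; this throws InsufficientMemoryException here,
            // instead of an OutOfMemoryException halfway through the work.
            using (new MemoryFailPoint(100)) // size in megabytes
            {
                byte[] buffer = new byte[100 * 1024 * 1024];
                // ... fill and use the buffer ...
            }
        }
        catch (InsufficientMemoryException)
        {
            // Degrade gracefully instead of dying mid-allocation:
            // free caches, work in smaller chunks, or retry later.
        }
    }
}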