My first concern would be to ensure that you are measuring something relevant. "Memory" can mean a lot of different things. There is an enormous difference between running out of virtual memory space and running out of RAM. There is an enormous difference between a performance problem caused by thrashing the page file and a performance problem caused by creating too much GC pressure.
If you don't understand what the relationships are between RAM, virtual memory, working set and the page file then start by doing some reading until you understand all that stuff. The way you phrased the question leads me to suspect that you believe that virtual memory and RAM are the same thing. They certainly are not.
I suspect that the arithmetic you are doing is:
- I have eight processes that each consume 500 million bytes of virtual address space
- I have four billion bytes of RAM
- Therefore I am about to get an OutOfMemory exception
That syllogism is completely invalid. It's the same as this syllogism:
- I have eight quarts of ice cream
- I have room for nine quarts of ice cream in the freezer
- Therefore if I get two more quarts of ice cream, something is going to melt
when in fact you have an entire warehouse-sized cold storage facility next door. Remember, RAM is just a convenient fast way to store stuff near where you need it, like your fridge. If you have more stuff that needs to be stored, who cares if you run out of room locally? You can always pop next door and put the stuff you use less frequently in long term deep freeze -- the page file. That's less convenient, but nothing melts.
You get an "out of memory" exception when a process runs out of virtual address space, not when all the RAM in the system is consumed. When all the RAM in the system is consumed, you don't get an error, you get crap performance because the operating system is spending all of its time running stuff back and forth from disk.
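If you want to convince yourself of that, here's a little sketch -- run it as a 32-bit process; the 64 MB chunk size is just a number I picked -- that hits OOM when the address space runs out, no matter how much RAM is sitting idle:

```csharp
using System;
using System.Collections.Generic;

class AddressSpaceDemo
{
    static void Main()
    {
        // Keep references so the GC cannot reclaim anything; each array
        // claims another ~64 MB of the process's virtual address space.
        var hoard = new List<byte[]>();
        try
        {
            while (true)
            {
                hoard.Add(new byte[64 * 1024 * 1024]);
                Console.WriteLine($"Claimed roughly {hoard.Count * 64} MB of address space");
            }
        }
        catch (OutOfMemoryException)
        {
            // In a 32-bit process this fires when the ~2 GB of user-mode
            // address space is gone, regardless of how much RAM is free.
            Console.WriteLine("OutOfMemoryException: the address space is exhausted.");
        }
    }
}
```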
So, anyway, start by understanding what you are measuring and how memory in Windows works. What you should actually be looking for is:
Is any process in danger of using more than two billion bytes of virtual memory on a 32-bit system? On win32 a process only gets 2GB of virtual memory that is addressable by user code (not RAM, remember; virtual memory has nothing to do with RAM -- that's why it's called "virtual": it isn't hardware); you'll get an OOM if you try to use more.
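If you want to keep an eye on that from inside the process, something along these lines will do; it uses nothing but System.Diagnostics, and the 1.5 GB warning threshold is a number I picked out of the air:

```csharp
using System;
using System.Diagnostics;

class AddressSpaceCheck
{
    static void Main()
    {
        using (var me = Process.GetCurrentProcess())
        {
            long virtualBytes = me.VirtualMemorySize64;   // address space in use, not RAM
            Console.WriteLine($"64-bit process: {Environment.Is64BitProcess}");
            Console.WriteLine($"Virtual address space in use: {virtualBytes / (1024 * 1024)} MB");

            if (!Environment.Is64BitProcess && virtualBytes > 1500L * 1024 * 1024)
                Console.WriteLine("Warning: getting close to the 2 GB user-mode limit.");
        }
    }
}
```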
Is any process in danger of attempting to allocate a huge block of virtual memory such that there is no contiguous block of that size free? Are you likely to be allocating ten million bytes of data in a single array, for example? Again, OOM.
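To make that concrete -- the sizes here are invented -- the difference between needing one big contiguous region and many small ones looks like this:

```csharp
using System;
using System.Collections.Generic;

class ContiguityDemo
{
    static void Main()
    {
        // One contiguous reservation of ~400 MB of address space. This is the
        // kind of allocation that can fail with OOM in a fragmented 32-bit
        // process even though plenty of *total* free address space remains.
        byte[] oneBigBlock = new byte[400 * 1024 * 1024];

        // Roughly the same amount of data split into 4 MB pieces. Each piece
        // only needs a small contiguous region, so fragmentation hurts far less.
        var chunks = new List<byte[]>();
        for (int i = 0; i < 100; i++)
            chunks.Add(new byte[4 * 1024 * 1024]);

        GC.KeepAlive(oneBigBlock);
        GC.KeepAlive(chunks);
    }
}
```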
Is the working set -- that is, the virtual memory pages of a process that are required to be in RAM for performance reasons -- of all processes smaller than the amount of RAM available? If not, then soon you'll get thrashing, but not an OOM.
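A rough way to eyeball that, again with nothing but System.Diagnostics (some system processes refuse to be inspected, hence the try/catch):

```csharp
using System;
using System.Diagnostics;

class WorkingSetSurvey
{
    static void Main()
    {
        long totalWorkingSet = 0;
        foreach (var p in Process.GetProcesses())
        {
            try
            {
                totalWorkingSet += p.WorkingSet64;   // bytes of this process currently resident in RAM
            }
            catch (Exception)
            {
                // Some privileged or exiting processes can't be read; skip them.
            }
            finally
            {
                p.Dispose();
            }
        }
        Console.WriteLine($"Combined working sets: {totalWorkingSet / (1024 * 1024)} MB");
        // Compare that against the machine's installed RAM; if it is
        // consistently larger, expect thrashing rather than an OOM.
    }
}
```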
Is your page file big enough to handle the virtual memory pages that could be paged out to disk if RAM starts to get short?
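The BCL doesn't hand you the page file numbers directly; one way to get at them is to ask Windows itself via the GlobalMemoryStatusEx API. A sketch, Windows-only of course:

```csharp
using System;
using System.Runtime.InteropServices;

class MemoryStatus
{
    [StructLayout(LayoutKind.Sequential)]
    struct MEMORYSTATUSEX
    {
        public uint dwLength;
        public uint dwMemoryLoad;
        public ulong ullTotalPhys;
        public ulong ullAvailPhys;
        public ulong ullTotalPageFile;   // RAM plus page file: the commit limit
        public ulong ullAvailPageFile;
        public ulong ullTotalVirtual;
        public ulong ullAvailVirtual;
        public ulong ullAvailExtendedVirtual;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GlobalMemoryStatusEx(ref MEMORYSTATUSEX lpBuffer);

    static void Main()
    {
        var status = new MEMORYSTATUSEX { dwLength = (uint)Marshal.SizeOf(typeof(MEMORYSTATUSEX)) };
        if (!GlobalMemoryStatusEx(ref status))
            throw new InvalidOperationException("GlobalMemoryStatusEx failed");

        Console.WriteLine($"Physical RAM:       {status.ullTotalPhys >> 20} MB total, {status.ullAvailPhys >> 20} MB free");
        Console.WriteLine($"Commit limit:       {status.ullTotalPageFile >> 20} MB total, {status.ullAvailPageFile >> 20} MB free");
        Console.WriteLine($"Process addr space: {status.ullTotalVirtual >> 20} MB total, {status.ullAvailVirtual >> 20} MB free");
    }
}
```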
So far none of this has anything to do with .NET. Once you've actually determined that there is a real problem -- there might not be -- then start investigating based on what the real problem is. Use a memory profiler to examine what the memory allocator and garbage collector are doing. See if there are huge blocks in the large object heap, or unexpectedly big graphs of live objects that cannot be collected, or what. But use good engineering principles: understand the system, use tools to investigate the actual empirical performance, experiment with changes and carefully measure their results. Don't just start randomly slapping magic IDisposable interfaces on a few classes and hope that doing so makes the problem -- if there is one -- go away.
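In that spirit, even before you break out a real profiler, a crude before-and-after measurement is often enough to confirm or kill a hypothesis; this little helper is just a sketch of mine, not anything built in:

```csharp
using System;

static class ManagedMemoryProbe
{
    // Measure the net managed memory still live after running one code path.
    // A blunt instrument compared to a profiler, but cheap and repeatable.
    public static long MeasureNetAllocation(Action work)
    {
        long before = GC.GetTotalMemory(true);   // true = force a full collection first
        work();
        long after = GC.GetTotalMemory(true);
        return after - before;                   // bytes still reachable after collection
    }
}

class Demo
{
    static byte[] survivor;

    static void Main()
    {
        long bytes = ManagedMemoryProbe.MeasureNetAllocation(() =>
        {
            survivor = new byte[10 * 1024 * 1024];   // stays reachable, so it shows up
        });
        Console.WriteLine($"Net live managed memory added: {bytes / (1024 * 1024)} MB");
        Console.WriteLine($"Gen 2 collections so far: {GC.CollectionCount(2)}");
    }
}
```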