A lot of useful solutions have already been suggested and the MSDN article is very thorough. In conjunction with the suggestions above I would also do the following:
Correlate the time of the OOM exception with your log file to see what the application was doing when it was thrown. If you have little logging at the info or debug level, I would suggest adding some so you have an idea of the context around the error.
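As a rough sketch of the kind of correlation hook I mean (the class name and log path are placeholders, not anything from your project), a top-level handler that stamps unhandled exceptions into the same log makes lining things up easier:

```csharp
using System;
using System.IO;

static class CrashLogger
{
    // Hypothetical log path - substitute whatever logging framework you already use.
    const string LogPath = "app.log";

    public static void Install()
    {
        // Log any unhandled exception (including OutOfMemoryException) with a UTC
        // timestamp so it can be lined up against the rest of the log file. Note
        // that under a genuine OOM the handler itself may fail to allocate.
        AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
            File.AppendAllText(LogPath,
                DateTime.UtcNow.ToString("o") + " UNHANDLED: " + args.ExceptionObject + Environment.NewLine);
    }
}
```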
Does the memory usage gradually increase over a long period of time before the exception (e.g. a server process that runs indefinitely), or does it jump up in large increments quite quickly until the exception? Are lots of threads running or just one?
If the former is true and the exception doesn't occur for a long time, it would imply that resources are leaking, as stated above. If the latter is true, a number of things could contribute to the cause, e.g. a loop that allocates a lot of memory per iteration, receiving a very large set of results from a service, and so on.
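To answer the gradual-versus-sudden question without reaching for a full profiler, something as simple as a periodic memory log can work. A rough sketch, with an arbitrary interval and Console standing in for your real logging:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class MemoryMonitor
{
    // Logs managed heap size and private bytes at a fixed interval. A slow, steady
    // climb in these numbers over hours or days suggests a leak; a sudden spike
    // points at a single allocation-heavy operation.
    public static Timer Start(TimeSpan interval)
    {
        return new Timer(_ =>
        {
            long managed = GC.GetTotalMemory(forceFullCollection: false);
            long privateBytes = Process.GetCurrentProcess().PrivateMemorySize64;
            Console.WriteLine($"{DateTime.UtcNow:o} managed={managed:N0} private={privateBytes:N0}");
        }, null, TimeSpan.Zero, interval);
    }
}
```

Plotting those two numbers over time usually makes the shape of the problem obvious.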
Either way, the log file should give you enough information on where to start. From there I would make sure I could recreate the error, either by issuing a certain set of commands in the interface or by using a consistent set of inputs. After that, depending on the state of the code, I would use the log file information to create some integration tests that target the assumed source of the problem. This should let you recreate the error condition much faster and make it much easier to find, as the code you are concentrating on will be a lot smaller.
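Such a test might look roughly like the following; SuspectOperation and the 50 MB threshold are placeholders for whatever code path and budget your log file points at:

```csharp
using System;

public static class MemoryGrowthTest
{
    // Rough shape of such a test (NUnit/xUnit attributes omitted).
    public static void RepeatedCalls_DoNotLeak()
    {
        long before = GC.GetTotalMemory(forceFullCollection: true);

        for (int i = 0; i < 100; i++)
            SuspectOperation();   // the code path the log file implicates

        long after = GC.GetTotalMemory(forceFullCollection: true);

        // If the heap keeps growing after repeated calls and full collections,
        // something in the operation is holding on to memory.
        if (after - before > 50 * 1024 * 1024)
            throw new Exception($"Possible leak: heap grew by {after - before:N0} bytes");
    }

    static void SuspectOperation() { /* call into the code under suspicion */ }
}
```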
Another thing I tend to do is surround memory-sensitive code with a small profiling class. This can log memory usage to the log file and give you immediate visibility of problems. The class can be set up so it isn't compiled into release builds, or so it carries only a tiny performance overhead (if you need more info contact me). Note that this type of approach doesn't work well when lots of threads are allocating at the same time.
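A minimal sketch of the kind of class I mean (the name, the Console output and the DEBUG suggestion are mine, not from any particular library):

```csharp
using System;

// Memory-profiling scope: wrap suspect code in a using block and the delta in
// managed heap size is written out when the block exits.
sealed class MemoryScope : IDisposable
{
    readonly string _label;
    readonly long _before;

    public MemoryScope(string label)
    {
        _label = label;
        _before = GC.GetTotalMemory(forceFullCollection: false);
    }

    public void Dispose()
    {
        long delta = GC.GetTotalMemory(forceFullCollection: false) - _before;
        // Replace Console with your logging framework. To keep this out of release
        // builds, wrap the class and its call sites in #if DEBUG, or route this
        // call through a helper marked [Conditional("DEBUG")].
        Console.WriteLine($"{_label}: heap delta {delta:N0} bytes");
    }
}
```

You would then wrap the suspect block in something like `using (new MemoryScope("LoadCustomers")) { ... }` and watch the deltas in the log.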
You mentioned unmanaged resources; I assume all the code you / your team have written is managed? If not, and where possible, I would surround the unmanaged boundaries with a profiling class similar to the one mentioned above to rule out leaks from unmanaged code or interop. Pinning lots of objects for unmanaged code can also cause heap fragmentation, but if you have no unmanaged code both of these points can be ignored.
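As an illustration (the native DLL and function are made up, and MemoryScope is the class sketched above), wrapping the boundary and keeping pins short-lived looks something like this:

```csharp
using System;
using System.Runtime.InteropServices;

static class InteropExample
{
    // Hypothetical native call - substitute your actual P/Invoke signature.
    [DllImport("somenativelib.dll")]
    static extern void ProcessBuffer(IntPtr buffer, int length);

    public static void Call(byte[] data)
    {
        // Pinning keeps the buffer at a fixed address for the native code, but the
        // GC cannot move pinned objects, so many long-lived pins fragment the heap.
        GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
        try
        {
            using (new MemoryScope("ProcessBuffer"))   // the scope class from above
                ProcessBuffer(handle.AddrOfPinnedObject(), data.Length);
        }
        finally
        {
            handle.Free();   // unpin as soon as possible
        }
    }
}
```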
Explicitly calling the garbage collector was discouraged in an earlier comment. Although you should rarely do this, there are times when it is valid (search Rico Mariani's blog for examples). One example (covered in that blog) where I have explicitly called Collect is when a large amount of string data had been returned from a service, put into a DataSet and then bound to a grid. Even after the screen was closed this memory wasn't collected for some time. In general it shouldn't be called explicitly, because the garbage collector maintains metrics on which it bases (among other things) its collections, and calling Collect explicitly invalidates those metrics.
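For that screen-close scenario, the explicit collection was roughly the standard collect / wait-for-finalizers / collect sequence:

```csharp
using System;

static class GridScreen
{
    // Called after a screen bound to a very large DataSet has been closed and all
    // references to it have been dropped. Do this sparingly - normally the GC's
    // own heuristics should be left alone.
    public static void ReleaseLargeData()
    {
        GC.Collect();                   // collect the now-unreferenced data
        GC.WaitForPendingFinalizers();  // let finalizers release their resources
        GC.Collect();                   // reclaim objects freed by those finalizers
    }
}
```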
Finally, it is generally good to have an idea of the memory requirements of your application. Obtain this by logging more information, occasionally running a profiler, and using stress / unit / integration tests. Get an idea of what impact a certain operation will have at a high level, e.g. based on a given set of inputs, roughly x will be allocated. I gain an understanding of this by logging detailed information at strategic points; just keep in mind that a bloated log file can be hard to understand or interpret.