views: 1510
answers: 7

Hello,

I am working on a web app using C# and ASP.NET, and I have been getting an OutOfMemoryException. The app reads a batch of records (products) from a data source, which could be hundreds or thousands of rows, processes those records according to settings chosen in a wizard, and then updates a different data source with the processed product information. Although there are multiple DB classes, right now all the business logic is in one big class. The only reason for this is that all the information has to do with one thing: a product. Would it help with memory if I divided the app into different classes? I don't think it would, because if I split the business logic into two classes, both would stay alive the entire time passing messages to each other, so I don't see how that would help. My other option is to find out what is actually using all the memory. Is there a good tool you could recommend?

Thanks

+1  A: 

Start with Perfmon; there are a number of counters for GC-related info. More than likely you are leaking memory (otherwise the GC would be reclaiming those objects), meaning you are still holding references to data structures that are no longer needed.
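To make "still referencing data structures" concrete, here is a hypothetical C# sketch (all names invented) of the most common variety of managed leak, a static collection acting as a GC root:

    using System.Collections.Generic;

    public class Product { public int Id; public string Name; }

    public class ProductProcessor
    {
        // A static field is a GC root: everything added here stays reachable
        // for the lifetime of the AppDomain, so memory grows with every call.
        private static readonly List<Product> processed = new List<Product>();

        public void Process(Product p)
        {
            // ... transform p ...
            processed.Add(p); // the "leak": this reference is never removed
        }
    }

In a case like this, Perfmon's ".NET CLR Memory" counters (e.g. "# Bytes in all Heaps") will show the heap climbing steadily and never coming back down.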

Matt Davison
+1  A: 

You should split the code into multiple classes anyway, just for the sake of a sane design.

Are you closing your DB connections? If you are reading from or writing to files, are you closing/releasing them once you are done? The same goes for other disposable objects (see the sketch below).
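For connections and readers, the standard pattern is a using block so cleanup is deterministic. A minimal sketch, assuming ADO.NET against SQL Server (the query and connection string are placeholders):

    using System.Data.SqlClient;

    public static class ProductReader
    {
        public static void ReadProducts(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT Id, Name FROM Products", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // process one row at a time
                    }
                } // reader disposed here
            } // command and connection disposed/closed here, even on error
        }
    }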

You could also cycle (re-create) your worker objects routinely so their old state becomes collectible.

Mostlyharmless
+5  A: 

Are you using DataReaders to stream through your data (to avoid loading too much into memory at once)?

My gut is telling me this is a trivial issue to fix: don't pump DataTables full of a million records; work through tables one row at a time, or in small batches, and release/dispose objects when you are done with them. (Example: don't keep a static List<Customer> allCustomers = AllCustomers().)
Have a development rule that ensures no one reads a table into memory when more than X rows are involved; a batching sketch follows this paragraph.
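A hedged sketch of that batching rule (ProcessBatch and the batch size are invented for illustration, not a prescribed API):

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class BatchCopier
    {
        // Stream rows with a DataReader, buffer a fixed number, flush them to
        // the destination, then clear the buffer so the rows become collectible.
        public static void CopyInBatches(SqlDataReader reader, int batchSize)
        {
            var batch = new List<object[]>(batchSize);
            while (reader.Read())
            {
                var row = new object[reader.FieldCount];
                reader.GetValues(row);   // copy the current row's values
                batch.Add(row);

                if (batch.Count == batchSize)
                {
                    ProcessBatch(batch); // write this chunk to the destination
                    batch.Clear();       // drop references so the GC can reclaim them
                }
            }
            if (batch.Count > 0)
                ProcessBatch(batch);     // flush the final partial batch
        }

        private static void ProcessBatch(List<object[]> rows)
        {
            // update the destination data source here
        }
    }

This keeps peak memory proportional to batchSize rather than to the total row count.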

If you need a tool to debug this, look at .NET Memory Profiler, or WinDbg with the SOS extension; both will let you sniff through your managed heaps.

Another note: if you care about maintainability and would like to reduce your defect count, get rid of the SuperDuperDoEverything class and model the information in a way that is better aligned with your domain. The SuperDuperDoEverything class is a bomb waiting to explode.

Sam Saffron
+1  A: 

A very basic thing you might want to try: restart Visual Studio (assuming you are using it) and see if the same thing happens. And yes, releasing objects without waiting for the garbage collector is always good practice.

To sum it up:

  • Release (dispose) objects
  • Close connections

And you can always try this: http://msdn.microsoft.com/en-us/magazine/cc337887.aspx

z3r0 c001
+4  A: 

Also note that you may not actually be out of total memory. When .NET goes looking for a contiguous block of memory and doesn't find one, it throws an OOM, even if there is plenty of total memory free to cover the request.

Someone already referenced Perfmon and WinDbg. You could also set up ADPlus to capture a memory dump on crash; I believe the syntax is adplus -crash -iis. Once you have the memory dump, you can do something like:

.symfix C:\symbols      $$ use the Microsoft symbol server, caching to C:\symbols
.reload                 $$ reload symbols for the loaded modules
.loadby sos mscorwks    $$ load the SOS extension from the CLR's directory
!dumpheap -stat         $$ summarize the managed heap, grouped by type

And that will give you an idea of what your high-memory objects are.

And of course, check out Tess Ferrandez's excellent blog, for example this article on Memory Leaks with XML Serializers and how to troubleshoot them.

If you are able to repro this in your dev environment, and you have VS Team Edition for Developers, there are memory profilers built right in. Just launch a new performance session, and run your app. It will spit out a nice report of what's hanging around.

Finally, make sure your objects don't define a destructor (finalizer). This isn't C++, and there's nothing deterministic about it; the only guarantee is that your object will survive a round of garbage collection, since it has to be placed in the finalization queue and then cleaned up in a later round. A short contrast is sketched below.
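A hypothetical illustration of that point (class names invented): the finalizer forces an instance through the finalization queue, so it survives at least one extra collection; the IDisposable version can be reclaimed as soon as it becomes unreachable.

    using System;

    public class WithFinalizer
    {
        // Runs nondeterministically on the finalizer thread; the instance
        // must survive a GC just to get here.
        ~WithFinalizer() { /* cleanup */ }
    }

    public class WithDispose : IDisposable
    {
        // Deterministic cleanup: call Dispose yourself or via a using block.
        public void Dispose() { /* cleanup */ }
    }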

Cory Foy
+1  A: 

While a lot of the advice given here is good, sound advice, buying more memory may be an easier/cheaper alternative, especially if:

  1. The number of records will not grow by much in the foreseeable future.
  2. You only run out of memory in fringe cases (e.g. the one case with 1 million records fails, while every other case is under 500,000 and processes without issues).
  3. Your server has (relatively) little memory to begin with.

Giovanni Galbo
+2  A: 

I found the problem. Inside my loop I had a collection that was never cleared, so data just kept being added to it.
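For anyone who hits the same thing, here is a hypothetical reconstruction of the bug (all names invented); the fix was the one-line Clear at the top of the loop:

    using System.Collections.Generic;

    public class ProductRecord { }

    public static class WizardRunner
    {
        public static void Run(IEnumerable<int> productIds)
        {
            var results = new List<ProductRecord>();
            foreach (var id in productIds)
            {
                results.Clear();            // the missing line that caused the OOM
                results.AddRange(Load(id));
                Save(results);
            }
        }

        private static List<ProductRecord> Load(int id) { return new List<ProductRecord>(); }
        private static void Save(List<ProductRecord> rows) { /* write to destination */ }
    }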

jumbojs