My huge 32-bit web services LLBLGen-based data access application is running alone on a dedicated 64-bit machine. Its physical memory consumption steadily grows to approximately 2GB, at which point the process releases almost all of the allocated space (up to 1.5GB) and starts growing again from there. There is no observable increase in Page Input values or other page-file usage counters, so it looks like the memory is genuinely released rather than swapped out to the page file. I am wondering what kind of profile this is. There is nothing to actually prevent the process from grabbing all the memory it can; on the other hand, there are unacceptable HTTP internal errors around the time of the memory release - probably the clean-up blocks useful work. What would be a good strategy to make the cleanup less obtrusive, given that the above is acceptable behaviour in the first place?

A: 

The Garbage Collector doesn't automatically free memory when it releases objects; it holds on to that memory to help minimise the expense of future allocations.

When a low-memory condition is triggered, that memory will be returned to the OS and you will see more available memory in Task Manager. This will normally happen around the 2GB mark, or 3GB if you use the /3GB boot switch.
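
As a rough way to see this gap for yourself, you can compare the size of the managed heap with the process's working set; a minimal sketch (the class name is made up for illustration):

    using System;

    class MemorySnapshot
    {
        static void Main()
        {
            // Bytes currently allocated on the managed heap.
            long managedBytes = GC.GetTotalMemory(false);

            // Physical memory the OS has mapped to the process; typically
            // much larger, because the GC retains freed segments for reuse.
            long workingSetBytes = Environment.WorkingSet;

            Console.WriteLine("Managed heap: {0:N0} bytes", managedBytes);
            Console.WriteLine("Working set:  {0:N0} bytes", workingSetBytes);
        }
    }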

<contentious>

By setting objects to null when they are dead you can encourage the GC to reuse the memory consumed by those objects, thus limiting the growing consumption of memory.

But which objects should you set to null? Big objects, large collections, frequently created objects.

</contentious>
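
For what it's worth, the scenario where nulling a reference can plausibly matter is a long-lived scope holding a large object; a minimal sketch (the names and sizes are invented for illustration):

    class ReportBuilder
    {
        // While this long-lived field is set, the 80MB buffer stays
        // reachable and cannot be collected.
        private byte[] _largeBuffer;

        public void BuildReport()
        {
            _largeBuffer = new byte[80 * 1024 * 1024];
            // ... use the buffer ...

            // Clearing the field makes the buffer eligible for collection
            // now, rather than when this instance itself dies.
            _largeBuffer = null;
        }
    }

Nulling a local variable, by contrast, rarely helps: in release builds the JIT already treats a reference as dead after its last use.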

EDIT: There is evidence to support the value of setting objects to null. See this for detail. Of course there is no need to set objects to null; the question is whether it helps memory management in any way.

EDIT: We need a recent benchmark if such a thing exists rather than continuing to opine.

Ed Guiness
+1  A: 

Is it possible you are not disposing of various disposable objects (particularly DB-related ones)? This would leave them around, potentially tying up large amounts of unmanaged resources until the GC runs and their finalizers are called.
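
A using block is the standard way to guarantee Dispose gets called; a minimal sketch with plain ADO.NET (the connection string and query are placeholders):

    using System.Data.SqlClient;

    static void ReadRows(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("SELECT 1", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // process the row
                }
            }
        }
        // conn, cmd and reader are all disposed here, whether or not an
        // exception was thrown, so the connection returns to the pool
        // immediately instead of waiting for a finalizer.
    }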

It would be worth running perfmon against your process and looking to see whether there is steady growth in some critical resource, like handles, or, if your DB provider exposes performance counters, in connections or open result sets.

Rob Walker
+1  A: 

I agree with the first part of edg's answer, but where he says:

"By setting objects to null when they are dead you can encourage the GC to reuse the memory consumed by those objects, thus limiting the growing consumption of memory."

he is incorrect. You never need to set an object to null, since the GC will eventually collect your object after it goes out of scope.

This was discussed in this answer on SO: http://stackoverflow.com/questions/2785/setting-objects-to-nullnothing-after-use-in-dot-net

Mitch Wheat
The question is whether setting objects to null affects the application's memory footprint, which is a separate issue from whether the GC eventually collects them.
Ed Guiness
A: 

Don't use ArrayLists (the garbage collector doesn't cope well with them, because every value type they hold is boxed); use generic lists instead.
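
The underlying issue is boxing: every value type stored in an ArrayList is wrapped in its own heap object. A minimal illustration:

    using System.Collections;
    using System.Collections.Generic;

    static void Compare()
    {
        // Each int added here is boxed into a separate heap object -
        // a million short-lived allocations for the GC to track.
        ArrayList boxed = new ArrayList();
        for (int i = 0; i < 1000000; i++)
            boxed.Add(i);

        // List<int> stores the values inline in one backing array:
        // no boxing, far less GC pressure.
        List<int> unboxed = new List<int>(1000000);
        for (int i = 0; i < 1000000; i++)
            unboxed.Add(i);
    }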

Another common error is having debug="true" in web.config; this consumes a lot of memory. Change the option to "false".
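
For reference, the setting lives on the compilation element in web.config:

    <configuration>
      <system.web>
        <!-- debug="true" disables batch compilation and keeps extra
             tracking data alive; it should always be false in production. -->
        <compilation debug="false" />
      </system.web>
    </configuration>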

Another thing to do is to use CLR Profiler to trace the problem.

Good Luck, Pedro

Pedro
+2  A: 

It sounds like you have a memory leak: the process keeps leaking memory until it crashes with an out-of-memory condition and is then automatically restarted by the server.

1.5GB is about the maximum amount of memory a 32-bit process can realistically allocate before running out of address space.

Some things to look for:

  • Do you do your own caching? When are items removed from the cache?
  • Is there somewhere data is added to a collection every once in a while but never removed?
  • Do you call Dispose on every object that implements IDisposable?
  • Do you access any unmanaged code (COM objects or DllImport) or allocate unmanaged memory (using the Marshal class, for example)? Anything allocated there is never freed by the garbage collector; you have to free it yourself (see the sketch after this list).
  • Do you use 3rd-party libraries or any code from 3rd parties? They can have any of the problems in this list too.
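
To illustrate the unmanaged-memory point above: anything allocated through the Marshal class must be paired with an explicit free, typically in a finally block. A minimal sketch (the buffer size and method name are invented):

    using System;
    using System.Runtime.InteropServices;

    static void CallNative()
    {
        // Unmanaged allocation: invisible to the GC, never collected.
        IntPtr buffer = Marshal.AllocHGlobal(1024);
        try
        {
            // ... hand the buffer to native code via P/Invoke ...
        }
        finally
        {
            // Forgetting this leaks 1KB per call, and the leak won't
            // show up on the managed heap at all.
            Marshal.FreeHGlobal(buffer);
        }
    }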
Nir
Just to add a little bit of info: check out the Web Garden setting and potentially increase the number of worker processes handling this app.
Chris Lively
A: 

Ensure that you aren't putting up a debug build of your project. There's a feature* where, in a debug build, if you instantiate any object that contains the definition for an event, even if you never raise the event, it will hold on to a small piece of memory indefinitely. Over time, these small pieces of memory will eat away at your memory pool, until the web process eventually restarts and the cycle starts again.

*I call this a feature (and not a bug) because it's been around since the beginning of .NET 2.0 (it's not present in .NET 1.1), and there's been no patch to fix it. The memory leak must be due to some feature needed when debugging.

Kibbee
A: 

We had similar situations occur and altered all our database access to use a try/catch/finally approach: try executes the code, catch collects errors, and finally releases variables and closes the database connections.

internal BECollection<ReportEntity> GetSomeReport()
{
    Database db = DatabaseFactory.CreateDatabase();
    BECollection<ReportEntity> _ind = new BECollection<ReportEntity>();
    System.Data.Common.DbCommand dbc = db.GetStoredProcCommand("storedprocedure");

    try
    {
        // Disposing the reader releases the result set even if an
        // exception is thrown while reading.
        using (System.Data.IDataReader reader = db.ExecuteReader(dbc))
        {
            while (reader.Read())
            {
                // populate entity
            }
        }
    }
    catch (Exception ex)
    {
        Logging.LogMe(ex.Message, "Error on SomeLayer/SomeReport", 1, 1);
        return null;
    }
    finally
    {
        // Close the connection on success and failure alike. (Setting
        // _ind = null here, as the original did, has no effect on the
        // return value - it is captured before the finally block runs.)
        dbc.Connection.Close();
    }
    return _ind;
}
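
One design note on the sketch above: the using block around the reader matters as much as the explicit Close in the finally block, because an open reader keeps the underlying connection busy until it is disposed.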
CYBRFRK
A: 

My first guess would be a memory leak. My second guess would be that it is normal behavior - the GC won't run until there is significant memory pressure. The only way to be sure is to use a combination of a profiler and tools like PerfMon.

In addition, I would make sure you aren't running a Debug build (as already mentioned).

As far as the HTTP errors go - assuming you are running in server GC mode, the collector tries to do everything it can to avoid blocking requests. It would be interesting to find out what those HTTP errors are; that's not normal behavior from what I've seen in the past, and it might point closer to the root of your issue.
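
For context, ASP.NET under IIS uses server GC by default on multi-processor machines; a standalone host would have to opt in through its application config. A minimal example (this is the standard runtime setting, not something specific to this app):

    <configuration>
      <runtime>
        <!-- Server GC: one heap and one collecting thread per CPU.
             Collections are faster, but managed threads pause while
             a collection runs. -->
        <gcServer enabled="true" />
      </runtime>
    </configuration>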

Cory Foy