Hello all, I have the following piece of code that I'm looking to optimize, since it's consuming gobs of memory and this routine is heavily used.

A first optimization would be to move the StringBuilder construction out of the download routine and make it a field of the class, then clear it inside the routine (sketched after the code below).

Can you please suggest any other optimizations, or point me in the direction of some resources that could help me with this (web articles, books, etc.)?

I'm thinking about replacing the StringBuilder with a fixed (much larger) size buffer... or perhaps creating a larger-sized StringBuilder.

Thanks in advance.

    StreamWriter _writer;
    StreamReader _reader;

    // Assumed declaration for the progress callback used below.
    public event Action<int> OnProgress;

    public string Download(string msgId)
    {
        // Ask the server for the article body (NNTP BODY command).
        _writer.WriteLine("BODY <" + msgId + ">");
        string response = _reader.ReadLine();

        // "222" means the body follows.
        if (!response.StartsWith("222"))
            return null;

        bool done = false;
        StringBuilder body = new StringBuilder(256 * 1024);
        do
        {
            response = _reader.ReadLine();

            if (OnProgress != null)
                OnProgress(response.Length);

            // A line containing a single "." terminates the body.
            if (response == ".")
            {
                done = true;
            }
            else
            {
                // Undo dot-stuffing: a leading ".." stands for a literal ".".
                if (response.StartsWith(".."))
                    response = response.Remove(0, 1);

                body.Append(response);
                body.Append("\r\n");
            }
        } while (!done);

        return body.ToString();
    }
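Something like this is what I have in mind for reusing the builder (just a sketch; the 64 KB starting capacity is a guess at a typical body size):

    // Reused across calls; reset at the start of each Download().
    StringBuilder _body = new StringBuilder(64 * 1024);

    public string Download(string msgId)
    {
        _body.Length = 0;   // works on all framework versions (Clear() is .NET 4+)

        // ... same request/read loop as above, appending to _body instead ...

        return _body.ToString();
    }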
+1  A: 

Using a fixed-size buffer would actually make the problem worse, since you'd have to choose a size larger than you could ever possibly fill. That buffer would get recreated every time the method runs, or, if you moved it outside the method, every time the class is instantiated. If you kept it as a class field, it would live as long as the instance does, not merely for the time the method executes.

I'd stick with the StringBuilder as long as you need to return the result as a string. The only other option that I could think of would be to change it to use a filter stream instead. That is, create a StreamReader that wraps around the reader you are currently using and which transforms the underlying stream the same way your Download method does. Then the consuming class could simply use your StreamReader implementation and you would only have to handle each chunk, not keep the entire contents in memory at once. This doesn't buy you much if the consuming class requires the whole thing anyway, but I have no idea how you are using it.
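Something along these lines (a rough, untested sketch; the class name BodyReader is made up for illustration) would let the consumer pull the body a line at a time:

    // Sketch of a wrapping reader: exposes the article body line by line,
    // applying the same "." terminator and dot-unstuffing rules as Download().
    class BodyReader : TextReader
    {
        private readonly TextReader _inner;
        private bool _done;

        public BodyReader(TextReader inner) { _inner = inner; }

        public override string ReadLine()
        {
            if (_done)
                return null;

            string line = _inner.ReadLine();
            if (line == null || line == ".")   // end of stream or end-of-body marker
            {
                _done = true;
                return null;
            }

            // A leading ".." stands for a literal ".".
            return line.StartsWith("..") ? line.Remove(0, 1) : line;
        }
    }

The consumer then loops `while ((line = bodyReader.ReadLine()) != null)` and only ever holds one line in memory at a time.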

tvanfosson
Thanks tvanfosson ... yes, I do need to handle the whole thing (body.ToString()). The class this method belongs to is used permanently during the app's lifetime ... what I'm seeing is that garbage collection reclaims a LOT of memory, so I'm looking into how to minimize this ... thanks for your help
lboregard
You have to use as much memory as you need to hold the string, no getting around that. I'd use the StringBuilder and simply let it allocate as much as needed. If you know what your typical response size is, I'd use that as the default size of the builder, then let it grow as needed rather than choosing the maximum size and usually not filling it up. I believe that it increases size exponentially so it shouldn't take too much overhead to let it grow.
tvanfosson
@tvanfosson: +1 for the correctness.
ileon
A: 

It is hard to make suggestions in the context of this method alone, because the return value is a single string containing the full content of the received data... however this method gathers the data, sooner or later it needs to be turned into a single array of characters.

It may be worth asking whether the caller of this method really needs the whole response in memory at once; could the caller process the data a line at a time, so that only one line needs to be in memory at any point? (If so, maybe an IEnumerable<string> implemented with yield statements would work much better, without significantly changing the implementation of the method.)
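For instance, something along these lines (a sketch only, reusing the fields from the question; the method name DownloadLines is made up and error handling is omitted):

    public IEnumerable<string> DownloadLines(string msgId)
    {
        _writer.WriteLine("BODY <" + msgId + ">");

        // "222" means the body follows; otherwise yield nothing.
        if (!_reader.ReadLine().StartsWith("222"))
            yield break;

        string line;
        while ((line = _reader.ReadLine()) != ".")
        {
            // Undo dot-stuffing before handing the line to the caller.
            yield return line.StartsWith("..") ? line.Remove(0, 1) : line;
        }
    }

The caller can then foreach over the result and never hold more than one line at a time.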

With regard to memory allocation, keep in mind that not all allocations are created equal in .NET: allocations over about 85 KB land on the large object heap and, when allocated and released frequently, can cause memory fragmentation. (The 256*1024-character StringBuilder here starts with roughly a 512 KB character buffer, well past that threshold.) See: http://msdn.microsoft.com/en-us/magazine/cc534993.aspx

If fragmentation is your main problem and you only ever process a single document at a time, then creating a single shared buffer and keeping it allocated can help you out a bit there, but check whether this is your problem before assuming it... fixing something that isn't broken is a waste of time.

jerryjvl