On an ASP.NET site at my place of work, the following chunk of code is responsible for handling file downloads (note: Response.TransmitFile is not used here because the contents of the download are streamed out of a zip file):

private void DownloadFile(Stream stream)
{
    int bytesRead;
    int chunkSize = 1048576; // 1 MB

    byte[] readBuffer = new byte[chunkSize];
    while ((bytesRead = stream.Read(readBuffer, 0, readBuffer.Length)) != 0)
    {
        if (!Response.IsClientConnected)
            break;
        byte[] chunk = new byte[bytesRead];
        Array.Copy(readBuffer, 0, chunk, 0, bytesRead);
        Response.BinaryWrite(chunk);
        Response.Flush();
    }
    stream.Close();
}

Our users frequently download multi-hundred-megabyte files, which can chew up server memory pretty fast. My assumption is that this is due to response buffering. Does that make sense?

I've just read about the Buffer property of the Response object. If I set it to false, will that prevent the Response.BinaryWrite() calls from buffering the data in memory? In general, what is a good way to limit memory usage in this situation? Perhaps I should stream from the zip to a temporary file and then call Response.TransmitFile()?
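For reference, here is roughly what I mean by the temp-file idea. This is only a sketch; GetZipEntryStream() is a stand-in for however we actually open the entry with our zip library:

string tempPath = Path.GetTempFileName();
using (Stream zipEntry = GetZipEntryStream())   // placeholder for our zip library call
using (FileStream temp = File.Create(tempPath))
{
    byte[] buffer = new byte[8192];
    int bytesRead;
    while ((bytesRead = zipEntry.Read(buffer, 0, buffer.Length)) > 0)
        temp.Write(buffer, 0, bytesRead);
}

// TransmitFile hands the file to IIS without buffering it in managed memory.
Response.TransmitFile(tempPath);
// (The temp file still needs cleanup afterwards, e.g. by a scheduled job,
// since TransmitFile may complete after this code returns.)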

EDIT: In addition to possible solutions, I'm very interested in explanations of the memory usage issue present in the code above. Why would this consume far more than 1MB, even though Response.Flush is called on every loop iteration? Is it just the unnecessary heap allocation that occurs on every loop iteration (and doesn't get GC'd right away), or is there something else at work?

+3  A: 

Here is some code that I am working on for this. It uses an 8 KB buffer to send the file in chunks. Some informal testing on a large file showed a significant decrease in allocated memory.

int BufferSize = 8192; // 8 KB read buffer, allocated once
byte[] buffer = new byte[BufferSize];
FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read);
try {
  long fileSize = stream.Length;

  long dataLeftToRead = fileSize;
  int chunkLength;

  while (dataLeftToRead > 0) {
    if (!Response.IsClientConnected) {
      break;
    }
    chunkLength = stream.Read(buffer, 0, BufferSize);

    Response.OutputStream.Write(buffer, 0, chunkLength);
    Response.Flush();

    dataLeftToRead -= chunkLength;
  }
}
finally {
  if (stream != null) {
    stream.Close();
  }
}
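Note that this assumes response buffering is off and the usual download headers were set before the loop runs. Something along these lines (the header values are illustrative):

Response.Clear();
Response.BufferOutput = false;  // stream each write instead of buffering the whole response
Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=\"download.zip\"");
Response.AddHeader("Content-Length", stream.Length.ToString());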

edited to fix a syntax error and a missing value

Ray
pedant comment: 8k = 8192 bytes;
Rubens Farias
thanx - fixed - we geeks should be scrupulously accurate
Ray
What is the difference between using Response.Write vs. writing directly to the Response.OutputStream object?
Odrade
Response.Write writes out strings or characters. Using the stream allows you to send binary data.
Ray
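For illustration, the distinction in code (the values and path are hypothetical):

// Response.Write goes through the response's text encoding - meant for HTML/text:
Response.Write("<p>Your download is ready.</p>");

// Response.OutputStream takes raw bytes, so binary data is not re-encoded.
// (File.ReadAllBytes is just for illustration; for large files use the chunked loop above.)
byte[] data = File.ReadAllBytes(@"C:\temp\example.zip");
Response.OutputStream.Write(data, 0, data.Length);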
So, where does the memory savings come from, other than using a smaller read buffer (8000 bytes vs. 1MB)? Is it just that I'm allocating memory on the heap with the needless call to 'new' on each loop iteration, and garbage collection may not happen for a while?
Odrade
Yes - your loop allocates a new buffer each time, and there is no need to do that. Each time through the loop is another 1MB for the garbage collector to deal with. WriteFile and TransmitFile both seem to allocate a single buffer for the entire file, which, for a large file, can really suck up the memory.
Ray
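To make that concrete, here is a sketch of the original method with the per-iteration allocation removed. Response.OutputStream.Write takes an offset and count, so the intermediate chunk copy that BinaryWrite forced is not needed:

private void DownloadFile(Stream stream)
{
    const int chunkSize = 8192;              // small buffer, allocated once
    byte[] readBuffer = new byte[chunkSize];
    int bytesRead;

    while ((bytesRead = stream.Read(readBuffer, 0, readBuffer.Length)) != 0)
    {
        if (!Response.IsClientConnected)
            break;
        // Write only the bytes actually read; no per-iteration copy or allocation.
        Response.OutputStream.Write(readBuffer, 0, bytesRead);
        Response.Flush();
    }
    stream.Close();
}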
So how does this approach compare to just sending the entire file via Response.TransmitFile in terms of memory utilization on the web server?
JohnFx
I did some informal testing with a large file (3 MB or so). I watched the memory usage with TransmitFile and saw it jump 3 MB. I then did it with my code and saw no jump (8 KB was either below the threshold of the monitor, or .NET was able to find 8 KB already available).
Ray
Great, thanks for your help.
Odrade