views: 859

answers: 4

My ASP.NET application allows users to upload and download large files. Both operations involve reading and writing file streams. What should I do to ensure the application doesn't hang or crash when it handles a large file? Should the file operations be handled on a worker thread, for example?

+1  A: 

Make sure you read and write the files in buffered chunks so that they don't take up an inordinate amount of memory on the server.

e.g. an excerpt from a download application, inside the while loop that reads the file:

// Read the data in buffer.
length = iStream.Read(buffer, 0, bufferSize);

// Write the data to the current output stream.
Response.OutputStream.Write(buffer, 0, length);

Where bufferSize is something reasonable, e.g. 100,000 bytes; the trade-off is that smaller buffer sizes mean more read/write iterations and slower transfers.

http://support.microsoft.com/kb/812406
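For context, a minimal sketch of the surrounding loop, assuming a page or HTTP handler where Response is in scope and filePath (a name assumed here, not from the excerpt) points at the file to send:

    const int bufferSize = 100000;
    byte[] buffer = new byte[bufferSize];
    int length;

    using (FileStream iStream = new FileStream(filePath, FileMode.Open,
                                               FileAccess.Read, FileShare.Read))
    {
        // Send the file in fixed-size chunks rather than loading it whole.
        while ((length = iStream.Read(buffer, 0, bufferSize)) > 0)
        {
            // Stop early if the client has gone away.
            if (!Response.IsClientConnected)
                break;

            Response.OutputStream.Write(buffer, 0, length);
            Response.Flush(); // Push the chunk out so it isn't held in server memory.
        }
    }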

Edit: Also be sure that IIS and ASP.NET are configured to accept a large enough request length and timeout (in IIS7, the request-filtering limit is separate from the ASP.NET one).
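For instance, something along these lines in web.config (a sketch; the values are illustrative, and note that maxRequestLength is in KB while maxAllowedContentLength is in bytes):

    <system.web>
      <!-- Allow uploads up to ~500 MB and give long transfers time to finish. -->
      <httpRuntime maxRequestLength="512000" executionTimeout="3600" />
    </system.web>
    <system.webServer>
      <security>
        <requestFiltering>
          <!-- IIS7 request filtering enforces its own, separate limit. -->
          <requestLimits maxAllowedContentLength="524288000" />
        </requestFiltering>
      </security>
    </system.webServer>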

Turnkey
A: 

Unless this is the primary purpose of your site, consider partitioning these operations into a separate application, e.g. a sub-application or sub-domain. Besides reducing risk, this would also simplify scaling out as your user base grows.

Cristian Libardo
A: 

Thanks.

Turnkey - is there any practical difference between doing this (which is what I was doing):

    long fileSize;
    byte[] fileContent;

    using (FileStream sourceFile = new FileStream(filePath, FileMode.Open))
    {
        // Allocates a buffer for the entire file in server memory.
        fileSize = sourceFile.Length;
        fileContent = new byte[(int)fileSize];
        // Note: Read is not guaranteed to fill the buffer in a single call.
        sourceFile.Read(fileContent, 0, (int)fileSize);
        // No explicit Close() needed; the using block disposes the stream.
    }

    HttpContext.Current.Response.BinaryWrite(fileContent);

and this (which is what MS suggest in the knowledgebase article):

        fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read);
        fileStreamLength = fileStream.Length;

        while (fileStreamLength > 0)
        {
            // Only keep sending while the client is still connected.
            if (HttpContext.Current.Response.IsClientConnected)
            {
                // Read one 10,000-byte chunk and push it straight to the response.
                length = fileStream.Read(buffer, 0, 10000);
                HttpContext.Current.Response.OutputStream.Write(buffer, 0, length);
                HttpContext.Current.Response.Flush();
                buffer = new Byte[10000];
                fileStreamLength = fileStreamLength - length;
            }
            else
            {
                // Client disconnected; end the loop.
                fileStreamLength = -1;
            }
        }

(code edited for brevity).

For example, does the first method actually use more server memory?

flesh
Yes, the method you were using will consume much more memory because it allocates a buffer for the entire file before doing the binary write back to the HTTP response. If several users are downloading large files at once you could run low on memory; roughly, ten concurrent 200 MB downloads would pin about 2 GB in buffers, versus about 10 KB per request with the chunked approach. We saw exactly this problem before we switched to the streaming method.
Turnkey
super, thanks for your help.
flesh
A: 

I don't know why the buffer must be reallocated on each pass through the loop ("buffer = new Byte[10000];") in the MS suggestion.