I'm making a simple download service so a user can download all his images from our site. To do that I just zip everything to the HTTP response stream.

However, it seems everything is buffered in memory, and the data isn't sent until the zip file is complete and the output is closed. I want the service to start sending right away, and not use too much memory.

public void ProcessRequest(HttpContext context)
{
    List<string> fileNames = GetFileNames();
    context.Response.ContentType = "application/x-zip-compressed";
    context.Response.AppendHeader("content-disposition", "attachment; filename=files.zip");
    context.Response.ContentEncoding = Encoding.Default;
    context.Response.Charset = "";

    byte[] buffer = new byte[1024 * 8];

    using (ICSharpCode.SharpZipLib.Zip.ZipOutputStream zipOutput = new ICSharpCode.SharpZipLib.Zip.ZipOutputStream(context.Response.OutputStream))
    {
        foreach (string fileName in fileNames)
        {
            ICSharpCode.SharpZipLib.Zip.ZipEntry zipEntry = new ICSharpCode.SharpZipLib.Zip.ZipEntry(fileName);
            zipOutput.PutNextEntry(zipEntry);
            using (var fread = System.IO.File.OpenRead(fileName))
            {
                ICSharpCode.SharpZipLib.Core.StreamUtils.Copy(fread, zipOutput, buffer);
            }
        }
        zipOutput.Finish();
    }

    context.Response.Flush();
    context.Response.End();
}

I can see the worker process memory growing while it builds the file, and the memory is only released when it's done sending. How do I do this without using too much memory?

+6  A: 

Disable response buffering with context.Response.BufferOutput = false; and remove the Flush call from the end of your code.
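
Applied to the handler in the question, that change would look roughly like this (a sketch reusing the question's names, untested):

public void ProcessRequest(HttpContext context)
{
    List<string> fileNames = GetFileNames();
    context.Response.ContentType = "application/x-zip-compressed";
    context.Response.AppendHeader("content-disposition", "attachment; filename=files.zip");
    context.Response.BufferOutput = false;   // stream bytes to the client as they are written

    byte[] buffer = new byte[1024 * 8];

    using (var zipOutput = new ICSharpCode.SharpZipLib.Zip.ZipOutputStream(context.Response.OutputStream))
    {
        foreach (string fileName in fileNames)
        {
            zipOutput.PutNextEntry(new ICSharpCode.SharpZipLib.Zip.ZipEntry(fileName));
            using (var fread = System.IO.File.OpenRead(fileName))
            {
                ICSharpCode.SharpZipLib.Core.StreamUtils.Copy(fread, zipOutput, buffer);
            }
        }
        zipOutput.Finish();
    }
    // No Response.Flush() or Response.End() at the end once buffering is off.
}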

Jon Skeet
+1. Response.End is harsh and also unnecessary here.
AnthonyWJones
There are other ways of handling very large files, I believe, allowing separate requests to get separate chunks of the response. I'd zip the file onto disk to be able to serve it in multiple requests without having to rezip each time.
Jon Skeet
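
One way to realize that "zip it to disk once" idea is to build the archive in a temporary file and hand it to IIS with TransmitFile, which writes the file to the response without buffering it in the worker process. A sketch (the temp path and the lack of any cache invalidation are illustrative simplifications):

string zipPath = Path.Combine(Path.GetTempPath(), "files.zip"); // illustrative location

if (!File.Exists(zipPath))
{
    using (var zipOutput = new ICSharpCode.SharpZipLib.Zip.ZipOutputStream(File.Create(zipPath)))
    {
        byte[] buffer = new byte[1024 * 8];
        foreach (string fileName in GetFileNames())
        {
            zipOutput.PutNextEntry(new ICSharpCode.SharpZipLib.Zip.ZipEntry(fileName));
            using (var fread = File.OpenRead(fileName))
            {
                ICSharpCode.SharpZipLib.Core.StreamUtils.Copy(fread, zipOutput, buffer);
            }
        }
        zipOutput.Finish();
    }
}

context.Response.ContentType = "application/x-zip-compressed";
context.Response.AppendHeader("content-disposition", "attachment; filename=files.zip");
context.Response.TransmitFile(zipPath);
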
A: 

Use Response.BufferOutput = false; at the start of ProcessRequest and flush the response after each file.

Alex Reitbort
The Flush doesn't do anything useful when buffering is off.
AnthonyWJones
A: 

FYI, this is working code that recursively adds an entire tree of files, streaming to the browser:

string path = @"c:\files";

Response.Clear();
Response.ContentType = "application/zip";
Response.AddHeader("Content-Disposition", string.Format("attachment; filename=\"{0}\"", "hive.zip"));
Response.BufferOutput = false;

byte[] buffer = new byte[1024 * 1024];
using (ZipOutputStream zo = new ZipOutputStream(Response.OutputStream, 1024 * 1024)) {
    zo.SetLevel(0);
    DirectoryInfo di = new DirectoryInfo(path);
    foreach (string file in Directory.GetFiles(di.FullName, "*.*", SearchOption.AllDirectories)) {
        string folder = Path.GetDirectoryName(file);
        if (folder.Length > di.FullName.Length) {
            folder = folder.Substring(di.FullName.Length).Trim('\\') + @"\";
        } else {
            folder = string.Empty;
        }
        zo.PutNextEntry(new ZipEntry(folder + Path.GetFileName(file)));
        using (FileStream fs = File.OpenRead(file)) {
            ICSharpCode.SharpZipLib.Core.StreamUtils.Copy(fs, zo, buffer);
        }
        zo.Flush();
        Response.Flush();
    }
    zo.Finish();
}

Response.Flush();
Fredrik Johansson