I currently have a download site for my school that is based on .NET. We offer anything from antivirus, AutoCAD, SPSS, and Office to a number of large applications for students to download. It's currently set up to handle them in one of two ways: anything over 800 megs is directly accessible through a separate website, while anything under 800 megs is secured behind .NET code that uses a FileStream to feed it to the user in 10,000-byte chunks. I have all sorts of issues with feeding downloads this way... I'd like to be able to secure the large downloads, but the .NET site just can't handle it, and the smaller files will often fail. Is there a better approach to this?

edit - I just wanted to update on how I finally solved this: I ended up adding my download directory as a virtual directory in IIS and specifying a custom HTTP handler. The handler grabbed the file name from the request and checked for permissions based on that, then either redirected the user to an error/login page or let the download continue. I've had no problems with this solution, and I've been on it for probably 7 months now, serving files several gigs in size.
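For readers who land here later, a minimal sketch of what such a handler might look like; the class name, the CanUserDownload check, and the login page path are placeholders, not the actual code:

using System.IO;
using System.Security.Principal;
using System.Web;

// Sketch of a permission-checking download handler for the virtual
// directory; CanUserDownload and login.aspx are hypothetical names.
public class SecureDownloadHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // The handler is mapped to the download directory, so the
        // physical path of the request is the file being asked for.
        string path = context.Request.PhysicalPath;
        string fileName = Path.GetFileName(path);

        // Placeholder check: look up download rights for this file/user.
        if (!CanUserDownload(context.User, fileName))
        {
            context.Response.Redirect("~/login.aspx");
            return;
        }

        // Hand the file to IIS to stream; no manual chunking needed.
        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition",
            "attachment; filename=" + fileName);
        context.Response.TransmitFile(path);
    }

    private static bool CanUserDownload(IPrincipal user, string fileName)
    {
        // Stub: real logic would map fileName to license/role rules.
        return user != null && user.Identity.IsAuthenticated;
    }
}

The handler would be registered for the virtual directory in web.config, so every request for a file in it passes through the permission check before anything is served.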

+2  A: 

Look into BitTorrent. It's designed specifically for this sort of thing and is quite flexible.

Jim Blizard
+5  A: 

If you are having performance issues and you are delivering files that exist on the filesystem (as opposed to a database), use the HttpResponse.TransmitFile method.

As for the failures, you likely have a bug. If you post the code, you may get a better response.
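In outline, the manual chunk loop (posted in the asker's answer below) could collapse to something like this; localfilename and originalFilename are the variables from that code, so this is just a sketch:

// Sketch: let IIS stream the file instead of copying 10 KB chunks by hand.
context.Response.Clear();
context.Response.ContentType = "application/octet-stream";
context.Response.AddHeader("Content-Length", new System.IO.FileInfo(localfilename).Length.ToString());
context.Response.AddHeader("Content-Disposition", "attachment; filename=" + originalFilename);
context.Response.TransmitFile(localfilename);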

DSO
A: 

What's wrong with using a robust web server (like Apache) and letting it deal with the files? You already hand the larger files off to a separate web server; why not serve the smaller files the same way too?

Are there some hidden requirements preventing this?

Tuminoid
A: 

Ok, this is what it currently looks like...

Stream iStream = null;

// Buffer to read 10K bytes in chunk:
byte[] buffer = new Byte[10000];

// Length of the file:
int length;

// Total bytes to read:
long dataToRead;

if (File.Exists(localfilename))
{
    try
    {
        // Open the file.
        iStream = new System.IO.FileStream(localfilename, System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read);

        // Total bytes to read:
        dataToRead = iStream.Length;

        context.Response.Clear();
        context.Response.Buffer = false;
        context.Response.ContentType = "application/octet-stream";
        Int64 fileLength = iStream.Length;
        context.Response.AddHeader("Content-Length", fileLength.ToString());
        context.Response.AddHeader("Content-Disposition", "attachment; filename=" + originalFilename);

        // Read the bytes.
        while (dataToRead > 0)
        {
            // Verify that the client is connected.
            if (context.Response.IsClientConnected)
            {
                // Read the data in buffer.
                length = iStream.Read(buffer, 0, 10000);

                // Write the data to the current output stream.
                context.Response.OutputStream.Write(buffer, 0, length);

                // Flush the data to the HTML output.
                context.Response.Flush();

                buffer = new Byte[10000];
                dataToRead = dataToRead - length;
            }
            else
            {
                //prevent infinite loop if user disconnects
                dataToRead = -1;
            }
        }
        iStream.Close();
        iStream.Dispose();
    }
    catch (Exception ex)
    {
        if (iStream != null)
        {
            iStream.Close();
            iStream.Dispose();
        }
        if (ex.Message.Contains("The remote host closed the connection"))
        {
            context.Server.ClearError();
            context.Trace.Warn("GetFile", "The remote host closed the connection");
        }
        else
        {
            context.Trace.Warn("IHttpHandler", "DownloadFile: - Error occurred");
            context.Trace.Warn("IHttpHandler", "DownloadFile: - Exception", ex);
        }
        context.Response.Redirect("default.aspx");
    }
}
Arthurdent510
(1) The line "buffer = new Byte[10000];" inside the loop seems unnecessary; (2) have you tried increasing the buffer size?
1 - I can take that out... this is code that I inherited, so I left it in. 2 - I've played around with the buffer size, but it never seemed to help, so I left it at 10,000.
Arthurdent510
A: 

There are a lot of licensing restrictions... for example, we have an Office 2007 license agreement that says any technical staff on campus can download and install Office, but students cannot. So we don't let students download it; our solution was to hide those downloads behind .NET.

Arthurdent510
So you already filter based on IP or logins; those can be applied to a web server as well. You enforce it somehow right now (unless it's just a "whoever has the exe can download it" type of auth...), so couldn't the same be enforced with the web server, or the web server plus a login page?
Tuminoid
The problem is that our campus is spread out, and we have many online students, so we can't restrict access to the site by IP. The security of the site is based on logins.
Arthurdent510
Eh, the world is full of websites that require you to log in before they let you access the content they provide. What is wrong with this basic, tested concept that makes you unable to use it?
Tuminoid
+1  A: 

I have two recommendations:

  • Increase the buffer size so that there are fewer iterations

AND/OR

  • Do not call IsClientConnected on each iteration (a sketch of both changes follows the quote below).

The reason is that, according to Microsoft's guidelines:

Response.IsClientConnected has some costs, so only use it before an operation that takes at least, say 500 milliseconds (that's a long time if you're trying to sustain a throughput of dozens of pages per second). As a general rule of thumb, don't call it in every iteration of a tight loop
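Applied to the loop in the asker's code above, both changes might look roughly like this (a sketch reusing the iStream and context variables; the 64 KB buffer and the check-every-16-chunks interval are arbitrary choices, not tuned values):

// Sketch: one 64 KB buffer allocated up front; IsClientConnected is
// consulted only every 16 chunks (~1 MB) instead of on every pass.
byte[] buffer = new byte[65536];
long dataToRead = iStream.Length;
int chunk = 0;

while (dataToRead > 0)
{
    if (chunk++ % 16 == 0 && !context.Response.IsClientConnected)
        break; // client went away; stop streaming

    int length = iStream.Read(buffer, 0, buffer.Length);
    if (length == 0)
        break; // defensive: unexpected end of stream

    context.Response.OutputStream.Write(buffer, 0, length);
    context.Response.Flush();
    dataToRead -= length;
}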

I actually put in the TransmitFile method and it's working a lot better now... on my dev servers I've noticed quite an increase in download speed, which could be a result of not making all the IsClientConnected calls. Are there any advantages or disadvantages to using this method?
Arthurdent510
Be aware of some caveats when using TransmitFile; check out this article: http://www.improve.dk/blog/2008/03/29/response-transmitfile-close-will-kill-your-application
A: 

Amazon S3 sounds ideal for what you need, but it is a commercial service and the files are served from their servers.

You should try contacting Amazon and asking for academic pricing, even if they don't advertise one.
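For what it's worth, S3 access could still be gated by the site's logins: the site authenticates the student and then hands out a short-lived pre-signed URL for the file. A rough sketch with the AWS SDK for .NET (the bucket and key names are made up, and the exact SDK API may differ by version):

using System;
using Amazon.S3;
using Amazon.S3.Model;

class PresignedUrlSketch
{
    static void Main()
    {
        // Credentials come from the standard SDK configuration.
        var client = new AmazonS3Client();

        var request = new GetPreSignedUrlRequest
        {
            BucketName = "campus-downloads",          // made-up bucket
            Key = "office2007/setup.exe",             // made-up object key
            Expires = DateTime.UtcNow.AddMinutes(15)  // link expires quickly
        };

        // Hand this URL to an authenticated student for direct download.
        string url = client.GetPreSignedURL(request);
        Console.WriteLine(url);
    }
}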

dmajkic