I need to download a LARGE file (2GB) over HTTP in a C# console app. Problem is, after about 1.2GB, the app runs out of memory.
Here's the code I'm using:
WebClient request = new WebClient();
request.Credentials = new NetworkCredential(username, password);
// DownloadData buffers the entire response in memory before returning
byte[] fileData = request.DownloadData(baseURL + fName);
As you can see, I'm reading the entire file into memory at once. I'm pretty sure I could solve this by reading the data back from HTTP in chunks and writing each chunk to a file on disk as it arrives.
Does anyone know how I could do this?
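Here's a rough sketch of the kind of chunked approach I'm imagining, using WebClient.OpenRead to get the response as a stream instead of a byte array. I haven't tested it, and localPath and the 64 KB buffer size are just placeholders I made up:

using System.IO;
using System.Net;

WebClient request = new WebClient();
request.Credentials = new NetworkCredential(username, password);

// Stream the response instead of buffering it all with DownloadData
using (Stream response = request.OpenRead(baseURL + fName))
using (FileStream file = new FileStream(localPath, FileMode.Create, FileAccess.Write))
{
    byte[] buffer = new byte[64 * 1024]; // read in 64 KB chunks (arbitrary size)
    int bytesRead;
    while ((bytesRead = response.Read(buffer, 0, buffer.Length)) > 0)
    {
        file.Write(buffer, 0, bytesRead); // write each chunk straight to disk
    }
}

Is something along these lines the right way to go, or is there a better approach?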