Is there such a thing as an optimum chunk size for processing large files? I have an upload service (WCF) which is used to accept file uploads of up to several hundred megabytes.

I've experimented with chunk sizes from 4KB and 8KB up to 1MB. Bigger chunk sizes are good for performance (faster processing), but they come at the cost of memory.

So, is there a way to work out the optimum chunk size at the moment a file is uploaded? How would one go about doing such a calculation? Would it be a combination of the client's available memory, CPU, and network bandwidth that determines the optimum size?

Cheers

EDIT: I should probably mention that the client app will be in Silverlight.

+1  A: 

If you are concerned about running out of resources, then the optimum is probably best determined by evaluating your peak upload concurrency against your system's available memory. How many simultaneous uploads you have in progress at a time would be the key variable in any calculation you might do. All you have to do is make sure you have enough memory to handle that concurrency, and that's rather trivial to achieve. Memory is cheap, and you will likely run out of network bandwidth long before your concurrency would overrun your memory availability.
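
A rough C# sketch of that calculation follows. The 25% buffer budget and the example figures are assumptions for illustration, not numbers from this thread; measure your own peak concurrency before relying on anything like this.

    using System;

    public static class ChunkSizeCalculator
    {
        // Fraction of available memory we are willing to spend on upload
        // buffers (an assumed budget; tune to your environment).
        private const double BufferBudgetFraction = 0.25;

        public static int ForServer(long availableMemoryBytes, int peakConcurrentUploads)
        {
            long budget = (long)(availableMemoryBytes * BufferBudgetFraction);
            long perUpload = budget / Math.Max(1, peakConcurrentUploads);

            // Clamp to the 4 KB .. 1 MB range the question experimented with.
            return (int)Math.Min(Math.Max(perUpload, 4L * 1024), 1024L * 1024);
        }
    }

    // Example: 2 GB free and 500 simultaneous uploads leaves roughly 1 MB
    // per upload, which the clamp caps at the 1 MB ceiling:
    //   int chunk = ChunkSizeCalculator.ForServer(2L * 1024 * 1024 * 1024, 500);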

On the performance side, this isn't the kind of thing you can really optimize much during app design and development. You have to have the system in place, with users uploading files for real, before you can monitor actual runtime performance.

Try a chunk size that matches your network's TCP/IP window size. That's about as optimal as you'd really need to get at design time.
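
If you want to discover that figure programmatically, here is a minimal sketch for the full .NET Framework. It assumes the socket's default receive buffer is a reasonable design-time proxy for the TCP window; the OS may autotune the real window, and Silverlight's sandboxed sockets may not behave the same way.

    using System.Net.Sockets;

    public static class TcpWindowProbe
    {
        public static int DefaultReceiveBufferBytes()
        {
            using (var socket = new Socket(AddressFamily.InterNetwork,
                                           SocketType.Stream,
                                           ProtocolType.Tcp))
            {
                // The documented default is 8192 bytes on the .NET Framework,
                // which matches the 8 KB figure quoted later in this thread.
                return socket.ReceiveBufferSize;
            }
        }
    }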

Stephen M. Redd
Well, I was referring more to client machines (which we don't have any control over). If I set the chunk size to, say, 1MB, it will eat up all the memory on the client machine. But if I set it too low, it will take a long time to process.
Fixer
Oh! With a client machine, it's much simpler. Concurrency is almost non-existent. As long as you aren't keeping the bits in memory after you get them, you can pretty much use whatever chunk size you want. Any modern client, even a phone, has enough CPU and memory to deal with a few files as long as you are streaming the bits to storage after getting each chunk. I doubt you'd see any significant difference in performance at the application level based on chunk size alone. I'd go with 1024KB for large files and call it a day.
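
A minimal sketch of that streaming pattern: memory use stays at one chunk regardless of file size. The uploadChunk delegate is a hypothetical stand-in for whatever WCF operation the service actually exposes, and a Silverlight client would get its stream from OpenFileDialog rather than a path.

    using System;
    using System.IO;

    public static class ChunkedUploader
    {
        public static void Upload(string path, Action<byte[], int> uploadChunk)
        {
            const int chunkSize = 1024 * 1024; // the 1024 KB suggested above
            var buffer = new byte[chunkSize];

            using (var stream = File.OpenRead(path))
            {
                int read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    // Only the first 'read' bytes are valid on the final,
                    // short chunk, so pass the count along with the buffer.
                    uploadChunk(buffer, read);
                }
            }
        }
    }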
Stephen M. Redd
A: 

Hi, I am facing an issue where I am unable to download files of 500 KB to 2 MB from the server to the client. I have set the chunk size to 8 KB where the internet speed is 512 kbps, and to 16 KB where the link is 4 Mbps.

Is this okay, or should I change it? I also checked the TCP/IP window size, and its default is similar to what I have set. Should we make changes to both the chunk size and the TCP window size, or only the chunk size?
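
For reference, the scheme described here (scaling chunk size with link speed) might be sketched as follows; the linear interpolation between the two quoted points is an assumption for illustration, not part of the original setup.

    public static class BandwidthChunking
    {
        public static int ChunkSizeForLink(double linkKbps)
        {
            // 512 kbps -> 8 KB and 4 Mbps -> 16 KB, as quoted above;
            // everything in between is interpolated linearly.
            if (linkKbps <= 512) return 8 * 1024;
            if (linkKbps >= 4096) return 16 * 1024;

            double t = (linkKbps - 512) / (4096 - 512);
            return (int)(8 * 1024 * (1 + t));
        }
    }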

Please suggest. Cheers, TicArch

TicArch