
I have written a Windows application that routinely downloads files from a load-balanced server; currently the speed is about 30 MB/second. However, when I try FastCopy or TeraCopy, they copy at about 100 MB/second. I want to know how to improve my copy speed so my application can copy files faster than it currently does.

+2  A: 

Your application could possibly use multiple threads to fetch the file; however, the total bandwidth is still limited by the speed of the devices that transfer the content.
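
For illustration, a minimal sketch of a segmented download over HTTP, assuming the server honors Range requests (the URL, path, and part count are hypothetical): each task fetches one byte range and writes it at the matching offset of a pre-sized output file.

using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static class SegmentedDownload
{
    public static async Task DownloadAsync(string url, string path, int parts)
    {
        using (var http = new HttpClient())
        {
            // Ask for the total length first so the ranges can be computed.
            var head = await http.SendAsync(new HttpRequestMessage(HttpMethod.Head, url));
            long length = head.Content.Headers.ContentLength
                ?? throw new InvalidOperationException("Server did not report a length.");

            // Pre-size the file so each segment can seek and write independently.
            using (var f = new FileStream(path, FileMode.Create))
                f.SetLength(length);

            long chunk = (length + parts - 1) / parts;
            var tasks = Enumerable.Range(0, parts).Select(async i =>
            {
                long from = i * chunk;
                long to = Math.Min(from + chunk, length) - 1;
                if (from > to) return;

                var request = new HttpRequestMessage(HttpMethod.Get, url);
                request.Headers.Range = new RangeHeaderValue(from, to);
                using (var response = await http.SendAsync(request, HttpCompletionOption.ResponseHeadersRead))
                using (var body = await response.Content.ReadAsStreamAsync())
                using (var file = new FileStream(path, FileMode.Open, FileAccess.Write, FileShare.Write))
                {
                    file.Seek(from, SeekOrigin.Begin);
                    await body.CopyToAsync(file);
                }
            });
            await Task.WhenAll(tasks);
        }
    }
}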

Ram
A: 

The simplest way is to open the file in raw/binary mode (that's C-speak; I'm not sure what the C# equivalent is) and read and write very large blocks (several MB) at a time.
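
A minimal C# sketch of this approach, using FileStream with a large buffer and a sequential-scan hint (the 4 MB block size is just an example to tune):

using System.IO;

static class BigBlockCopy
{
    const int BlockSize = 4 * 1024 * 1024; // "several MB" per read/write

    public static void Copy(string source, string destination)
    {
        using (var input = new FileStream(source, FileMode.Open, FileAccess.Read,
                                          FileShare.Read, BlockSize, FileOptions.SequentialScan))
        using (var output = new FileStream(destination, FileMode.Create, FileAccess.Write,
                                           FileShare.None, BlockSize))
        {
            var buffer = new byte[BlockSize];
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);
        }
    }
}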

James Anderson
+1  A: 

The trick TeraCopy uses is to make the reading and writing asynchronous. This means that a block of data can be written while another one is being read.

You have to experiment with the number of blocks and the size of those blocks to find the optimum for your situation. I used this method in C++, and for us the optimum was four blocks of 256 KB when copying from a network share to a local disk.
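
A sketch of what double-buffered copying might look like in C# (the block size is a tuning parameter, per the answer above): while one block is being written, the next is already being read, and the two buffers swap each iteration.

using System.IO;
using System.Threading.Tasks;

static class OverlappedCopy
{
    public static async Task CopyAsync(Stream input, Stream output, int blockSize = 256 * 1024)
    {
        var front = new byte[blockSize];
        var back = new byte[blockSize];

        int read = await input.ReadAsync(front, 0, blockSize);
        while (read > 0)
        {
            // Kick off the next read before writing the current block,
            // so the read and the write overlap in time.
            Task<int> nextRead = input.ReadAsync(back, 0, blockSize);
            await output.WriteAsync(front, 0, read);
            read = await nextRead;

            var tmp = front; front = back; back = tmp; // swap buffers
        }
    }
}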

Regards,

Sebastiaan

Sebastiaan Megens
+1  A: 

If you run Process Monitor, you can see the block sizes that Windows Explorer or TeraCopy are using.

In Vista, the default block size for the local network is, as far as I recall, 2 MB, which makes copying files over a big pipe a lot faster.

VVS
A: 

Why reinvent the wheel?

If your situation permits, you are probably better off shelling out to one of the existing "fast" copy utilities than trying to write one yourself. There are numerous non-obvious edge cases which need to be handled, and getting consistently good performance requires a lot of trial-and-error experimentation.

Addys
Can I use "fast" copy utilities for scripting (command line)?
alhambraeidos
At least some are scriptable; robocopy is one example (see the sketch below).
Addys
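
For illustration, shelling out to robocopy from C# might look like the following sketch (directory and file names are hypothetical). /MT enables multi-threaded copying, and robocopy exit codes below 8 indicate success:

using System.Diagnostics;

static class ShellCopy
{
    public static bool RobocopyFile(string sourceDir, string destDir, string fileName)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "robocopy",
            Arguments = $"\"{sourceDir}\" \"{destDir}\" \"{fileName}\" /MT:8 /NP",
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
            return process.ExitCode < 8; // robocopy: codes 0-7 are success variants
        }
    }
}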
+1  A: 

One common mistake when using streams is to copy a byte at a time, or to use a small buffer. Most of the time it takes to write data to disk is spent seeking, so using a larger buffer will reduce your average seek time per byte.

Operating systems write files to disk in clusters. This means that when you write a single byte to disk, Windows will actually write a block between 512 bytes and 64 KB in size. You can get much better disk performance by using a buffer that is an integer multiple of 64 KB.

Additionally, you can get a boost from using a buffer that is a multiple of your CPU's underlying memory page size. On x86/x64 machines this is either 4 KB or 4 MB.

Since 4 MB is a multiple of both the cluster size and the page size, you want to use a buffer that is an integer multiple of 4 MB.

Finally, if you use asynchronous I/O, you can take full advantage of the large buffer size, as in the following example:

using System;
using System.IO;
using System.Net.Sockets;
using System.Threading;

class Downloader
{
    const int size = 4096 * 1024; // 4 MB blocks

    readonly ManualResetEvent done = new ManualResetEvent(false);
    Socket socket;
    Stream stream;

    void InternalWrite(IAsyncResult ar)
    {
        int read = socket.EndReceive(ar);
        if (read > 0)
        {
            // A receive can legitimately return fewer bytes than requested,
            // so only a zero-byte read signals the end of the stream.
            // The block is written before the next receive is issued,
            // which keeps the writes in order.
            stream.Write((byte[])ar.AsyncState, 0, read);
            InternalRead();
        }
        else
        {
            done.Set();
        }
    }

    void InternalRead()
    {
        var buffer = new byte[size];
        socket.BeginReceive(buffer, 0, size, SocketFlags.None, InternalWrite, buffer);
    }

    public bool Save(Socket socket, Stream stream)
    {
        this.socket = socket;
        this.stream = stream;

        InternalRead();
        return done.WaitOne();
    }
}

bool Save(Socket socket, string filename)
{
    using (var stream = File.OpenWrite(filename))
    {
        var downloader = new Downloader();
        return downloader.Save(socket, stream);
    }
}
Stefan Rusek