We're using a simple File.Copy in C# for moving our database backups to extra locations.

However, on some servers this causes SQL Server to pretty much stop working. These servers have very limited memory, so they page data out to the hard drive every so often.

Whilst we should buy more memory, this is not going to happen for a long time :-/

So I'm wondering: can I somehow limit the speed of the File.Copy operation? (Thereby giving SQL Server some room to access the hard drive.)

I could use an "old school" approach with two streams, reading and writing through a buffer, and just sleep 5 ms or so between reads (roughly the sketch below). But I'd really prefer a neater solution, if one is available.
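For reference, this is roughly what I mean by the "old school" approach (buffer size and sleep interval are arbitrary):

```csharp
using System.IO;
using System.Threading;

static class ThrottledCopy
{
    public static void Copy(string source, string destination)
    {
        const int BufferSize = 64 * 1024; // 64 KB chunks - an arbitrary choice

        // SequentialScan hints to the OS that we read the file front-to-back once.
        using (var input = new FileStream(source, FileMode.Open, FileAccess.Read,
                                          FileShare.Read, BufferSize, FileOptions.SequentialScan))
        using (var output = new FileStream(destination, FileMode.Create, FileAccess.Write,
                                           FileShare.None, BufferSize))
        {
            var buffer = new byte[BufferSize];
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);
                Thread.Sleep(5); // breathing room for SQL Server to hit the disk
            }
        }
    }
}
```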

+1  A: 

Have you tried giving your copy process a priority below normal? You can do so via Task Manager or using the start command:

> start /BELOWNORMAL myCopyApp.exe
0xA3
That will work if the system is CPU-bound, but maybe not if it's I/O-bound?
ChrisW
As the OP states 'These servers have very limited memory', I would assume it is neither of these, but rather a memory restriction.
Frank
@Frank I think that memory restriction causes disk access (to the swap file), and is therefore equivalent to being I/O-bound.
ChrisW
@ChrisW: I also believe it's I/O-bound - as I said, they have very little memory compared to the size of the database, so they're paging a lot, which indeed causes disk access. I doubt a lower priority will help much, since the CPU has plenty of headroom left whilst copying. Thanks for the idea though :-)
Steffen
+3  A: 

CopyFileEx might do what you need - it has a callback function that you could use to slow the copy down artificially (I haven't tried it for this scenario though, so I'm unsure about the real effect - worth a try IMHO).
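A minimal P/Invoke sketch of the idea (untested for this scenario; the 5 ms sleep is just an example):

```csharp
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using System.Threading;

static class SlowCopy
{
    // Return value that tells CopyFileEx to keep copying.
    const uint PROGRESS_CONTINUE = 0;

    delegate uint CopyProgressRoutine(
        long totalFileSize, long totalBytesTransferred,
        long streamSize, long streamBytesTransferred,
        uint streamNumber, uint callbackReason,
        IntPtr sourceFile, IntPtr destinationFile, IntPtr data);

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern bool CopyFileEx(
        string existingFileName, string newFileName,
        CopyProgressRoutine progressRoutine, IntPtr data,
        ref bool cancel, uint copyFlags);

    public static void Copy(string source, string destination)
    {
        bool cancel = false;

        // The callback fires after each copied chunk; sleeping here throttles the copy.
        CopyProgressRoutine callback = delegate(
            long total, long transferred, long streamSize, long streamTransferred,
            uint streamNumber, uint reason, IntPtr src, IntPtr dst, IntPtr data)
        {
            Thread.Sleep(5); // arbitrary pause - tune for your workload
            return PROGRESS_CONTINUE;
        };

        if (!CopyFileEx(source, destination, callback, IntPtr.Zero, ref cancel, 0))
            throw new Win32Exception(Marshal.GetLastWin32Error());
    }
}
```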

Jaroslav Jandek
I might give that a go - I'll see if MSDN has some more information about the callback (how often it's called and so forth).
Steffen
It should be way faster than using C# streams. The buffer is **65536 B** on Windows XP (the notification fires each time a chunk is copied); on Vista and Windows 7 it is larger. You could also copy the file to just one machine and replicate it from there to the other backup locations.
Jaroslav Jandek
Sounds good - my primary concern with streams is in fact their performance compared to a native API function (which File.Copy is as well: CopyFile). I'll check it out on Tuesday, as I can't get to work on it any sooner.
Steffen
Just implemented CopyFileEx - and sleeping in the callback (even ever so briefly) does exactly what I want. It gives the server some breathing room to do other things. So thanks for the good advice :-)
Steffen
+2  A: 

A neater solution isn't available through File.Copy, but you have a number of other options. You can, as you say, stream the bytes over manually, occasionally sleeping. You could use an implementation of BITS (the Background Intelligent Transfer Service), though this is a little over the top.

Also, if the problem is memory - compress the file (see the sketch below) or chunk it into smaller files to be rebuilt later.
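For instance, a sketch of compressing the backup with GZipStream before shipping it (paths are placeholders):

```csharp
using System.IO;
using System.IO.Compression;

static class BackupCompressor
{
    public static void Compress(string backupPath, string compressedPath)
    {
        using (var input = File.OpenRead(backupPath))
        using (var output = File.Create(compressedPath))
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            input.CopyTo(gzip); // Stream.CopyTo needs .NET 4; loop manually on older frameworks
        }
    }
}
```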

Adam
Note: for SQL Server 2008+ you can enable compression of backups in the "Back Up Database" page.
Jaroslav Jandek
Good call on compression, however these servers are still stuck with SQL Server 2005 :-(
Steffen
A: 

I could use an "old school" approach with two streams, reading and writing through a buffer, and just sleep 5 ms or so between reads.

If you do, look at using the FILE_FLAG_NO_BUFFERING flag: otherwise, no matter how small your application buffer is, the file system will be buffering (and therefore causing extra swapping).
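FileOptions has no named member for this flag, but the raw value can be passed through; a sketch (note that unbuffered I/O requires reads in multiples of the volume's sector size):

```csharp
using System.IO;

static class UnbufferedIo
{
    // FILE_FLAG_NO_BUFFERING - not exposed by name in the FileOptions enum.
    const FileOptions FileFlagNoBuffering = (FileOptions)0x20000000;

    public static FileStream OpenUnbuffered(string path)
    {
        // 4096 is a common sector size, but not universal - query the volume to be safe.
        return new FileStream(path, FileMode.Open, FileAccess.Read,
                              FileShare.Read, 4096, FileFlagNoBuffering);
    }
}
```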

ChrisW
Since the buffer memory would be used a LOT, Windows *should* not swap that part of memory (plus it has a pretty small memory footprint) but instead swap other (less used) parts of memory. Also, CPU and disk IO would rise significantly.
Jaroslav Jandek
@Jaroslav Jandek - FILE_FLAG_NO_BUFFERING on the input and output files would affect/disable buffering (e.g. read-ahead buffering) which would otherwise be performed implicitly within/by the file system driver. I'm suggesting that this buffering should be disabled, because having this file system buffering enabled (which it is by default) takes memory and therefore causes swapping, and he doesn't even need it (because a file copy only wants to touch each part of the file once).
ChrisW
@ChrisW: You need to read the data somewhere (into a buffer) and then write it to a file (that part can be unbuffered - a direct write). If the operation weren't sequential, the buffer would be wasted; in this case it is required no matter what. For copying, that approach is useless since the system already does it effectively (if you were doing ReadFile and WriteFile manually, it would be different). Also, it would require lots of WinAPI handling and correct buffer sizes and alignment, to match the sectors of the filesystem...
Jaroslav Jandek
@ChrisW: If the question were *I want to copy a large file as fast as possible*, I would agree with you. But since sequential unbuffered IO's advantage is only truly noticeable with a large buffer size (1 MB+), it would use way more memory than buffered IO (and more CPU and disk IO per buffer - NOT per byte, of course).
Jaroslav Jandek
@Jaroslav Jandek - I'm suggesting that he use unbuffered I/O with the minimal application buffer, in order to use a total of exactly one page of memory (one page of application buffer memory, and no pages in the file system driver). Not to optimize for speed, but to optimize for minimal memory utilization.
ChrisW
@ChrisW: I see. Having ~128 kB of extra memory allocated doesn't look like much overhead to me (even several MBs should not be an issue IMHO). If their servers were 486s with 24 MB of memory, I would see that as a problem... I am just saying that your suggestion is a case of over-optimization. Using `FileStream` with `FileOptions.SequentialScan` will cost you only those ~128 kB of memory (which won't swap) and about 2 CPU cycles per byte, which is insignificant in this case (IMHO).
Jaroslav Jandek