In this scenario there are two database servers: one on our LAN (Server A) and one at our remote location (Server B).

We want to transfer our project data from Server A to Server B. To do this, we first remove the existing data on Server B for a given ProjectID and then simply insert the data from Server A into Server B. All data is prepared on Server A.

The actual amount of data transferred is approximately 2.5 MB. With our 20 Mbit connection this should be done in a flash, yet with SQL it takes 30-40 seconds. If you take the same amount of data and transfer it with FTP, it takes 4 seconds. :)

I have SET NOCOUNT ON in the scripts; I've read that this can make remote queries faster.
What could be causing this slow transfer?

EDIT:
The SQL really does seem to be the cause. It breaks down like this:
- Select data from all kinds of databases and insert it into a DB on Server A
- Delete from Server B DB Where ProjectID = x
- Insert into Server B DB - Select * from Server A DB Where ProjectID = x

The last two steps take about 40 seconds. As you can see, all I do is remove the old records and insert the new ones. No joins or complex T-SQL syntax.
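For reference, the last two steps over a linked server look roughly like this (the server, database, and table names here are placeholders, not our real ones):

-- Step 2: remove the old records for the project on Server B
DELETE FROM [ServerB].[ProjectDB].dbo.ProjectData
WHERE ProjectID = @ProjectID;

-- Step 3: copy the prepared rows from Server A across the link
INSERT INTO [ServerB].[ProjectDB].dbo.ProjectData
SELECT * FROM ProjectDB.dbo.ProjectData
WHERE ProjectID = @ProjectID;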

A: 

Do you select from A, insert into B, then delete from A?

You can just select from A, insert into B, and delete from A once you are finished.

You can also use a bulk operation (select multiple ProjectIDs at once) and insert them into the other server in one pass.

Make sure ProjectID is a primary key or covered by a clustered index; if not, you need a solution that doesn't have to search by ProjectID first (see the sketch below).
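A minimal sketch, assuming the Server B table is called dbo.ProjectData:

-- lets the DELETE (and the remote SELECT) seek by ProjectID instead of scanning
CREATE CLUSTERED INDEX IX_ProjectData_ProjectID
ON dbo.ProjectData (ProjectID);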

You can also disable logging (as far as the server allows), backups, change tracking, and statistics on both servers if you don't need them.

Finally, a multi-threaded or multi-process approach can give you some benefit (select from Server A while inserting into Server B).

Ahmed Khalaf
A: 

You can also use mysqldump:

mysqldump --complete-insert --create-options --add-locks --disable-keys --extended-insert --quick --quote-names -u $user --password=$password $database | gzip --fast -c > $backupPath/$database.$dateStr.sql.gz

Then transfer the gzipped dump over the network and load it back into a new database.
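For the load step, something like this should work (standard mysql client; same shell variables as above):

gunzip -c $database.$dateStr.sql.gz | mysql -u $user --password=$password $database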

FractalizeR
It's SQL Server ;)
Zyphrax
OMG :) But MySQL is an SQL server too, in general :) Anyway, an SQL dump is still applicable: http://sqldump.sourceforge.net/
FractalizeR
A: 

I would thoroughly check the design and structure of database B, where the deletes occur. Make sure the indexes are well designed and optimized, check any foreign key relationships and what happens with cascaded deletes to other tables that reference the table where the deletes are occurring, and check any delete-related triggers to see what they are doing.
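A quick way to check for those on the Server B table (standard SQL Server catalog views; the table name is an assumption):

-- triggers defined on the table
SELECT name FROM sys.triggers
WHERE parent_id = OBJECT_ID('dbo.ProjectData');

-- foreign keys referencing the table (candidates for cascaded deletes)
SELECT name, delete_referential_action_desc
FROM sys.foreign_keys
WHERE referenced_object_id = OBJECT_ID('dbo.ProjectData');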

I would also check the remote server itself and make sure there aren't any hardware issues, such as a failing hard drive or drive controller. I once had a database that suffered from massive slowness, and it took a month to finally identify the culprit: a failing RAID card.

If none of that is the culprit, try running some throughput tests over the network between the two servers to see if you're really getting the bandwidth you're supposed to be getting. Look at what other traffic is sharing that 20 Mbit connection. Perhaps there is an overloaded or failing router or switch somewhere along the path, or one with bad routing configuration that sends the data down a suboptimal route.

BBlake
A: 

Have you checked the table on Server B to see if you have any triggers? Perhaps they are causing the slowdown.

HLGEM
Just a plain database, no triggers
Zyphrax