views: 970

answers: 6

I have to dump a large database over a network pipe that doesn't have much bandwidth and that other people need to use concurrently. If I try it, the dump soaks up all the bandwidth, latency soars, and everyone else's traffic suffers.

I'm aware of the --compress flag to mysqldump, which helps somewhat.

How can I do this without soaking up all the bandwidth over this connection?

Update:

The suggestion to copy a dumpfile using scp with the -l flag is a good one, but I should note that I don't have ssh access to the database server.

A: 

If you send it over TCP, the bandwidth will be shared roughly equally between all the connections using the link. If you want to lower the dump's share even further, you need to shape your interface so that only a certain amount of data goes out.

Filip Ekberg
+1  A: 

One trick I've used is to specify CSV format rather than INSERT statements. It doesn't change how much bandwidth you use per unit of time, but it can reduce the total number of bytes you're pulling out.
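A rough sketch of what that can look like; the database name mydb, the table name mytable, and the output paths are just placeholders:

# Server-side: one delimited .txt data file plus a .sql schema file per table
# (requires the FILE privilege and a directory writable by mysqld)
mysqldump --tab=/tmp/dump --fields-terminated-by=',' mydb
# Client-side alternative: tab-separated rows, one table at a time
mysql --batch -e 'SELECT * FROM mytable' mydb > mytable.tsv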

bradheintz
A: 

On the client, you can run a proxy that will limit the speed of the download. You can also control the number of connections, etc.

If you are on Windows, this should work nicely:

http://www.youngzsoft.net/ccproxy/index.html

Byron Whitlock
+3  A: 

You will have to have access to a Linux machine (sorry, I'm a linuxy sort of person).

An ingress policy can decrease the amount of incoming traffic, but the server on the other side needs to have a fairly well-behaved TCP/IP stack.

# Attach an ingress qdisc so incoming packets on eth0 can be policed
tc qdisc add dev eth0 handle ffff: ingress
# Police traffic arriving from the database server to 256 kbit/s, dropping the excess
tc filter add dev eth0 parent ffff: protocol ip prio 50 \
  u32 match ip src server.ip.address/32 police rate 256kbit \
  burst 10k drop flowid :1
# Shape outgoing traffic with a token bucket filter at 256 kbit/s
tc qdisc add dev eth0 root tbf \
  rate 256kbit latency 25ms burst 10k
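
To verify the policy is in place, and to remove it once the dump has finished (a quick sketch; eth0 is assumed to be the interface facing the slow link):

# Show qdiscs with byte/packet statistics
tc -s qdisc show dev eth0
# Show the ingress policer
tc filter show dev eth0 parent ffff:
# Remove the ingress policy and the egress shaper afterwards
tc qdisc del dev eth0 ingress
tc qdisc del dev eth0 root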

You can find more information on ingress filters in the advanced routing howto.

http://www.linux.org/docs/ldp/howto/Adv-Routing-HOWTO/index.html


If you are doing it on Linux, you can dump the file locally, compress it, and use scp to copy the file with the -l switch to limit the bandwidth used:

-l limit Limits the used bandwidth, specified in Kbit/s.

e.g.

scp -l 16 dumpfile remotehost:filepathandname
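
Putting the whole sequence together (a sketch; the database name, file names, and the 256 Kbit/s cap are placeholders):

# Dump and compress locally, then copy over the slow link at no more than 256 Kbit/s
mysqldump mydb | gzip > mydb.sql.gz
scp -l 256 mydb.sql.gz remotehost:/backups/mydb.sql.gz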

Francois Wolmarans
This is a good suggestion, but sadly I don't have ssh access to this server.
ʞɔıu
+2  A: 

trickle?

trickle is a portable lightweight userspace bandwidth shaper

You don't mention how you are actually transferring the DB dump, but if the transfer happens over TCP/IP, trickle should work. For example, if you use nc to receive the dump (for example: nc -l 1234 > backup.sql), the following command will transfer the backup at no greater than 20 KB/s:

mysqldump [database name] | trickle -u 20 nc backup.example.com 1234
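
Since the link is the bottleneck, compression combines nicely with the rate cap (a sketch; the host, port, and 20 KB/s limit are placeholders):

# On the receiving host
nc -l 1234 > backup.sql.gz
# On the sending host: compress first, then cap nc's upload at roughly 20 KB/s
mysqldump [database name] | gzip | trickle -u 20 nc backup.example.com 1234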
dbr
A: 

Are you using a transactional table engine like InnoDB? Is this your master database? Be very careful! mysqldump will hold table locks and disrupt the production use of your database. Slowing down the backup will only cause this period of disruption to get longer. Always mysqldump to a local disc, and then copy the dump from there.
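
A minimal sketch of the dump-locally-then-copy approach; --single-transaction takes a consistent snapshot without holding long table locks (InnoDB only), and the database name and paths are placeholders:

# Consistent snapshot without long-held locks; --quick streams rows instead of buffering them
mysqldump --single-transaction --quick mydb > /var/backups/mydb.sql
gzip /var/backups/mydb.sql
# ...then copy /var/backups/mydb.sql.gz over the contended link with whatever rate limit you prefer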

One other approach might be to set up a replication slave at your remote site, and take your backups from that. Then database updates will trickle over your contended link instead of coming down in one big lump.
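
A rough sketch of pointing a slave at the master (the host, credentials, and binary log coordinates are placeholders; the slave also needs its own server-id and an initial copy of the data):

mysql <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
SQL

Backups can then be taken from the slave without touching the master, and only the incremental updates cross the slow link.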

Another alternative: do your backups when no one else is using the network :)

Jon Topper
In this case, it's a production database, but it's read-only from production, so shared locks won't affect anything. You can give mysqldump flags not to use any locking, though of course this could cause all sorts of other data integrity issues if the tables were being updated.
ʞɔıu