views: 1107

answers: 4

I would like to create a copy of a database with approximately 40 InnoDB tables and around 1.5GB of data with mysqldump and MySQL 5.1.

What are the best parameters (e.g. --single-transaction) that will result in the quickest dump and load of the data?

As well, when loading the data into the second DB, is it quicker to:

1) pipe the results directly to the second MySQL server instance and use the --compress option

or

2) load it from a text file (e.g. mysql < my_sql_dump.sql)

+1  A: 

Pipe it directly to another instance, to avoid disk overhead. Don't bother with --compress unless you're running over a slow network, since on a fast LAN or loopback the network overhead doesn't matter.
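A minimal sketch of that approach, assuming placeholder host and database names (not taken from the thread):

```shell
# Build the two halves of the pipeline as strings so they can be reviewed
# (dry run) before anything touches a server.
# --single-transaction: consistent InnoDB snapshot without locking tables.
# --quick: stream rows instead of buffering each whole table in memory.
DUMP="mysqldump --single-transaction --quick --extended-insert source_db"
LOAD="mysql --host=target-host target_db"
echo "$DUMP | $LOAD"          # dry run: print the pipeline
# Run it for real with:  eval "$DUMP | $LOAD"
```

Skipping the intermediate file means the 1.5GB is never written to and re-read from disk.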

John Millikin
+1  A: 

Hi Josh, I think it will be a lot faster and save you disk space if you try database replication as opposed to using mysqldump. Personally I use SQLyog Enterprise for my really heavy lifting, but there are also a number of other tools that can provide the same services, unless of course you would like to use only mysqldump.
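For reference, pointing a replica at the source looks roughly like this in MySQL 5.1. The host, credentials, and binlog coordinates below are placeholders; the real coordinates come from SHOW MASTER STATUS on the source:

```shell
# Replication bootstrap sketch, built as a string so it can be inspected
# before being fed to the target server with:  mysql -e "$REPL_SQL"
# All values below are placeholders, not real credentials.
REPL_SQL="CHANGE MASTER TO
  MASTER_HOST='source-host',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;"
echo "$REPL_SQL"
```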

jake
Thanks for the link, Jake - I'm just interested in copying the DB once, not keeping it in sync.
Josh Schwartzman
A: 

For InnoDB, --order-by-primary --extended-insert is usually the best combo. If you're after every last bit of performance and the target box has many CPU cores, you might want to split the resulting dump file and do parallel inserts in many threads, up to innodb_thread_concurrency/2.
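One way to get safe split points is the per-table comment mysqldump writes before each table. A rough, self-contained sketch (the tiny stand-in dump and file names are just for illustration):

```shell
# mysqldump precedes each table with a "-- Table structure for table `x`"
# comment, which is a safe boundary to split on (splitting mid-statement
# would corrupt the dump).  A tiny stand-in dump to demonstrate:
printf '%s\n' \
  '-- Table structure for table `a`' 'CREATE TABLE a (i INT);' \
  '-- Table structure for table `b`' 'CREATE TABLE b (i INT);' \
  > my_sql_dump.sql
# -s: quiet; -z: drop the empty leading piece; pieces come out as xx00, xx01, ...
csplit -s -z my_sql_dump.sql '/^-- Table structure/' '{*}'
ls xx*
# Each piece can then be loaded by its own client, e.g.:
#   for f in xx*; do mysql target_db < "$f" & done; wait
```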

Also, tweak innodb_buffer_pool_size on the target to the maximum you can afford, and increase innodb_log_file_size to 128 or 256 MB (careful with this one: you need to remove the old log files before restarting the MySQL daemon, otherwise it won't restart).
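As a my.cnf fragment (sizes are illustrative; fit the buffer pool to the target box's RAM, and remember the log-file caveat above):

```ini
[mysqld]
# Bigger buffer pool = more of the 1.5GB working set cached during the load.
innodb_buffer_pool_size = 1G
# Larger redo logs mean fewer checkpoints during the bulk insert.
# On 5.1: shut down cleanly and delete the old ib_logfile* first,
# or mysqld will refuse to start.
innodb_log_file_size    = 256M
```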

ggiroux
A: 

Use the mk-parallel-dump tool from Maatkit.

At the very least it would probably be faster, though I'd trust mysqldump more.

How often are you doing this? Is it really an application performance problem? Perhaps you should design a way of doing this that doesn't require dumping all the data (replication?)

On the other hand, 1.5GB is quite a small database, so it probably won't be much of a problem.

MarkR