Are there any tricks for speeding up MySQL dumps and imports? This would include my.cnf settings, using ramdisks, etc.

+1  A: 

Using extended inserts in dumps should make imports faster.
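As a sketch of what this looks like (flag names per the MySQL manual; file names and buffer sizes are illustrative), extended inserts are on by default via --opt, and the related buffers can be raised on both the dump and import side:

```shell
# Extended inserts are on by default (implied by --opt); --net_buffer_length
# caps the length of each generated multi-row INSERT statement:
mysqldump --extended-insert --net_buffer_length=1M mydb > dump.sql

# If the import chokes on long INSERT lines, raise max_allowed_packet
# for the importing client (the server-side limit must be at least as big):
mysql --max_allowed_packet=64M mydb < dump.sql
```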

che
If you do that, there is a good chance you will not be able to import the dump back if it is even moderately big
Jonathan
How come the MySQL client isn't able to process even moderately big dumps with extended inserts?
che
@che: My guess is that the client has a fixed-size buffer for each line it is reading, and extended inserts exceed that limit.
Ztyx
+2  A: 
  1. Get a copy of High Performance MySQL. Great book.
  2. Use extended inserts in dumps.
  3. Dump with the --tab format so you can use mysqlimport, which is faster than mysql < dumpfile.
  4. Import with multiple threads, one for each table.
  5. Use a different database engine if possible. Importing into a heavily transactional engine like InnoDB is awfully slow. Inserting into a non-transactional engine like MyISAM is much, much faster.
  6. Look at the table-compare script in the Maatkit toolkit and see if you can update your tables rather than dumping and re-importing them. But you're probably talking about backups/restores.
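Steps 2–4 above might look roughly like this (database and path names are hypothetical; --use-threads requires a reasonably recent mysqlimport):

```shell
# Dump each table as a .sql schema file plus a tab-separated .txt data
# file in /tmp/dump (the directory must be writable by the mysqld user):
mysqldump --tab=/tmp/dump mydb

# Load the data files back with mysqlimport, several tables in parallel:
mysqlimport --use-threads=4 mydb /tmp/dump/*.txt
```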
JBB
+3  A: 

Turn off foreign key checks and turn on auto-commit.
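One way to apply the foreign-key part, as a sketch (dump and database names hypothetical), is to wrap the dump in the corresponding SET statements:

```shell
# Disable foreign key checks for the session that runs the import,
# then re-enable them once the data is loaded:
( echo "SET foreign_key_checks = 0;"
  cat dump.sql
  echo "SET foreign_key_checks = 1;" ) | mysql mydb
```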

longneck
+2  A: 

If you are importing into InnoDB, the single most effective thing you can do is to put

innodb_flush_log_at_trx_commit = 2

in your my.cnf temporarily while the import is running. You can set it back to 1 afterwards if you need ACID durability.
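Since innodb_flush_log_at_trx_commit is a dynamic variable, an alternative to editing my.cnf is to flip it at runtime (requires the SUPER privilege):

```shell
mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2"
# ... run the import ...
mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 1"  # restore full durability
```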

Aleksandar Ivanisevic
+7  A: 

http://www.maatkit.org/ has mk-parallel-dump and mk-parallel-restore tools.

If you’ve been wishing for multi-threaded mysqldump, wish no more. This tool dumps MySQL tables in parallel. It is a much smarter mysqldump that can either act as a wrapper for mysqldump (with sensible default behavior) or as a wrapper around SELECT INTO OUTFILE. It is designed for high-performance applications on very large data sizes, where speed matters a lot. It takes advantage of multiple CPUs and disks to dump your data much faster.

There are also various potentially useful options in mysqldump, such as not building indexes while the dump is being imported, but instead creating them en masse when the import completes.
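Usage is roughly like this (option names as I recall them from the Maatkit docs; check the man pages before relying on them):

```shell
# Dump all databases with four worker threads into ./backup:
mk-parallel-dump --threads 4 --basedir ./backup

# Restore from the same directory, also in parallel:
mk-parallel-restore --threads 4 ./backup
```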

Alister Bulman
According to the mk-parallel-dump man page (http://www.maatkit.org/doc/mk-parallel-dump.html) it should not be used for backup. Beware!
Ztyx
A: 

I guess your question also depends on where the bottleneck is:

  • If your network is a bottleneck you could also have a look at the -C/--compress flag to mysqldump.
  • If your computer runs out of memory (i.e. starts swapping) you should buy more memory.

Also, have a look at the --quick flag for mysqldump (and --disable-keys, though note it only takes effect for MyISAM tables).
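Combined, the flags mentioned here look like this (database name hypothetical):

```shell
# --compress shrinks client/server traffic (helps over slow networks);
# --quick streams rows instead of buffering whole tables in client memory:
mysqldump --compress --quick mydb > dump.sql
```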

Ztyx
A: 

mysqlhotcopy might be an alternative for you too if you only have MyISAM tables.
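A minimal invocation, assuming a database named mydb and a /backup directory (mysqlhotcopy runs on the server host and copies the raw table files, so it needs filesystem access to the data directory):

```shell
mysqlhotcopy mydb /backup
```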

Ztyx