Is there anything better (faster or smaller) than pages of plain-text CREATE TABLE and INSERT statements for dumping MySQL databases? It seems awfully inefficient for large amounts of data.

I realise that the underlying database files can be copied, but I assume they will only work in the same version of MySQL that they came from.

Is there a tool I don't know about, or a reason for this lack?

+3  A: 

Not sure if this is what you're after, but I usually pipe the output of mysqldump directly to gzip or bzip2 (etc.). It tends to be considerably faster than writing the uncompressed dump to disk and compressing it afterwards, and the output files are much smaller thanks to the compression.

mysqldump --all-databases (other options) | gzip > mysql_dump-2010-09-23.sql.gz
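To restore from a dump compressed this way, the stream can be decompressed and piped straight back into the mysql client. A minimal sketch, assuming the file name from the example above and placeholder credentials:

# decompress the dump and feed it to the mysql client (prompts for the password)
gunzip < mysql_dump-2010-09-23.sql.gz | mysql -u root -p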

It's also possible to dump to XML with the --xml option if you're looking for "portability" at the expense of consuming (much) more disk space than the gzipped SQL...
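If you do go the XML route, the same piping trick applies. A sketch, with the output file name purely illustrative:

# dump all databases as XML and compress on the fly
mysqldump --xml --all-databases | gzip > mysql_dump-2010-09-23.xml.gz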

codekoala
I like it, and I also compress my .sql files, but I'm wondering why MySQL can't export in a binary format.
Ollie G
Yeah... I haven't looked very closely at this project, but perhaps it will be of interest to you: http://2ze.us/ym. I suspect it's still using the regular mysqldump under the hood.
codekoala
Apparently if your tables are all MyISAM, you can use mysqlhotcopy: http://2ze.us/hm
codekoala
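Regarding the mysqlhotcopy suggestion above, invocation is roughly along these lines. A sketch only; the database name, credentials, and backup directory are placeholders, and it has to run on the server host because it copies the table files directly:

# copy the MyISAM table files of mydb into /backups
mysqlhotcopy --user=root --password=secret mydb /backups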
A: 

It's worth noting that MySQL has a special syntax for doing bulk inserts. From the manual:

INSERT INTO tbl_name (a,b,c) VALUES(1,2,3),(4,5,6),(7,8,9);

This would insert three rows in a single operation, so loading this way isn't as inefficient as it would be with one statement per row: instead of 129 bytes across three INSERT statements, the multi-row form is 59 bytes, and that advantage only grows as the number of rows increases.
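For comparison, the same three rows written as separate statements (the 129-byte form referred to above):

-- 43 bytes per statement, 129 bytes in total
INSERT INTO tbl_name (a,b,c) VALUES(1,2,3);
INSERT INTO tbl_name (a,b,c) VALUES(4,5,6);
INSERT INTO tbl_name (a,b,c) VALUES(7,8,9);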

Gaius