I'm working on setting up a regular dump of our database. I'm using this script to create the backup and running it from a cron job. We end up with a text file as well as an email archive of everything.

The problem we've encountered is the size of two of our tables. They each have around 60k rows and grow daily. I'm thinking incremental backups are the best solution for backup, but if it ever came to restoring them, it would be a huge project.

My question is a two-parter: a) is there a more straightforward way to back up huge tables on a daily basis, and if not, b) is there an easy way to restore from daily/weekly incremental backups?

Thanks!

+5  A: 

You can use a cron job to execute mysqldump and dump the tables you wish to back up. Then take a differential backup of that file daily and do a full backup once a week. The heavy lifting should be done by your backup engine of choice.
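
For example, a nightly wrapper script plus crontab entry might look roughly like this (the database, table, user, and path names are placeholders, not from the question):

    #!/bin/sh
    # nightly-dump.sh - dump just the two big tables to a dated file
    DATE=`date +%F`
    mysqldump --single-transaction --quick -u backupuser -p'secret' mydb \
        big_table_1 big_table_2 > /var/backups/mysql/mydb-big-$DATE.sql

    # crontab entry: run the dump every night at 02:30
    30 2 * * * /usr/local/bin/nightly-dump.sh

The --single-transaction flag gives a consistent dump of InnoDB tables without locking; drop it if the tables are MyISAM.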

Restoring a DB is never fun, so it will be a big project regardless.

http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html

Let me know if this works for you.

danielrsmith
+1  A: 

You may wish to check out Maatkit. It's a bunch of Perl scripts, one of which is mk-parallel-dump, which spawns multiple copies of mysqldump (by default, one per CPU in the machine), allowing the dump to go much faster. You can set this up in a cron job as well, like Daniel suggested.
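
A rough cron entry for that could look something like the following (directory and database names are examples only, and the flags are from the Maatkit docs as I remember them; check mk-parallel-dump --help for the exact options your version supports):

    # parallel dump of mydb into a dated directory every night at 03:00
    # (% must be escaped as \% inside crontab)
    0 3 * * * mk-parallel-dump --databases mydb --basedir /var/backups/mysql/`date +\%F`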

Grant Limberg
A: 

Consider using LVM to take periodic snapshots of your data volume, then run Maatkit against the snapshotted copy to produce an SQL dump. Do this on a slave MySQL DB so that your backups don't cause performance problems for your live platform.
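
Very roughly, the snapshot step on the slave might look like this (the volume group, logical volume, and mount point names are made up):

    # pause replication so the data files are quiet, snapshot the LV that
    # holds the MySQL datadir, then resume replication
    mysql -e "STOP SLAVE"
    lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql-data
    mysql -e "START SLAVE"

    # mount the snapshot read-only, take the dump or file copy from it, clean up
    mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
    # ... run the dump against the snapshot copy here ...
    umount /mnt/mysql-snap
    lvremove -f /dev/vg0/mysql-snap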

Jon Topper