Hi, I have multiple databases that need to be backed up daily. Currently I am using a cron job to run a batch file that backs them up. Here is my situation: I have about 10 databases to back up, and 3 of them are growing pretty fast. These are the current DB sizes:

  • DB1 = 35 MB
  • DB2 = 10 MB
  • DB3 = 9 MB
  • the rest: DBx = 5 MB each

My batch file code is:

mysqldump -u root -pxxxx DB1 > d:/backup/DB1_datetime.sql
mysqldump -u root -pxxxx DB2 > d:/backup/DB2_datetime.sql
... and so on for the rest
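
A minimal sketch of that batch file in loop form, assuming a Windows batch environment and keeping the question's "datetime" placeholder for the timestamp part of the file name:

@echo off
rem Sketch only: dump each listed database to its own file,
rem so adding a database only means adding its name to the list.
for %%D in (DB1 DB2 DB3 DB4) do (
    mysqldump -u root -pxxxx %%D > d:/backup/%%D_datetime.sql
)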

I have run this for 2 days and it seems quite okay to me. But I wonder whether it will affect my website's performance while the batch file is executing.

If this method is not good, how do you back up multiple databases while they are live and their size keeps increasing daily?

+1  A: 

It depends on the table type. If the tables are InnoDB, then you should be using the --single-transaction flag so that the dumps are coherent. If your tables are MyISAM, you have a small issue: if you run mysqldump as is, the dump will lock the tables (no writing) while the dump runs. This obviously becomes a huge bottleneck as the databases get larger. You can override this with the --lock-tables=false option, but then you cannot guarantee that the backups won't contain inconsistent data.
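
A minimal sketch of the two variants described above, reusing the database and path names from the question (the flag is the only change):

rem InnoDB tables: take a consistent snapshot without blocking writes.
mysqldump -u root -pxxxx --single-transaction DB1 > d:/backup/DB1_datetime.sql

rem MyISAM tables: skip the write lock, at the risk of inconsistent data in the dump.
mysqldump -u root -pxxxx --lock-tables=false DB1 > d:/backup/DB1_datetime.sql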

The ideal solution would be to have a backup replication slave server, outside of your production environment, to take the dumps from.
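
As a sketch of that setup, assuming a slave reachable as "slave-host" (a stand-in name, not from the post): pause the slave's SQL thread, dump from the slave, then resume, so production never sees the backup at all:

rem Run against the backup slave, not the production server.
mysql -h slave-host -u root -pxxxx -e "STOP SLAVE SQL_THREAD;"
mysqldump -h slave-host -u root -pxxxx DB1 > d:/backup/DB1_datetime.sql
mysql -h slave-host -u root -pxxxx -e "START SLAVE SQL_THREAD;"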

Mike Sherov
Looks like I need to set the "--lock-tables=false" option for the moment; I don't have a slave server yet. Sometimes the file size decreases, for example yesterday's was 35 MB but today's is 28 MB. I opened the SQL file and it is corrupt, I mean not a proper backup. Is that because I didn't lock the tables?
mysqllearner
That's precisely why. For the moment, you should really allow the table lock and accept those few seconds of write lock for the sake of a consistent backup.
Mike Sherov