views: 1068
answers: 3

I am currently running mysqldump on a MySQL slave to back up our database. This has worked fine for backing up our data itself, but what I would like to supplement it with is the binary log position of the master that corresponds to the data generated by the mysqldump.

Doing this would allow us to restore our slave (or set up new slaves) without having to run a separate mysqldump on the main database just to grab the master's binary log position. We would just take the data generated by the mysqldump, combine it with the binary log information we generated, and voila... be resynced.

So far, my research has gotten me very CLOSE to being able to accomplish this goal, but I can't seem to figure out an automated way to pull it off. Here are the "almosts" I've uncovered:

  • If we were running mysqldump from the main database, we could use the "--master-data" option to record the master's binary log position along with the dump data. (I presume this would also work if we started generating binary logs on our slave, but that seems like overkill for what we want to accomplish.)
  • If we wanted to do this in a non-automated way, we could log into the slave's database and run "STOP SLAVE SQL_THREAD;" followed by "SHOW SLAVE STATUS;" (http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html). But this isn't going to do us any good unless we know in advance that we want to back something up from the slave.
  • If we had $500/year to blow, we could use the InnoDB hot backup plugin and just run our mysqldumps from the main DB. But we don't have that money, and I don't want to add any extra I/O on our main DB anyway.

This seems like something common enough that somebody must have figured it out before; hopefully that somebody is using Stack Overflow?

+3  A: 

The following shell script will run in cron or periodic, replace variables as necessary (defaults are written for FreeBSD):

# MySQL executable location
mysql=/usr/local/bin/mysql

# MySQLDump location
mysqldump=/usr/local/bin/mysqldump

# MySQL Username and password
userpassword=" --user=<username> --password=<password>"

# MySQL dump options
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert"

# Databases
databases="db1 db2 db3"

# Backup Directory
backupdir=/usr/backups

# Stop the slave SQL thread so the dumped data and the recorded binlog
# position stay consistent for the duration of the dump. (A FLUSH TABLES
# WITH READ LOCK issued through -e is released as soon as that connection
# closes, so it would not actually hold across the dump.)
$mysql $userpassword -e 'STOP SLAVE SQL_THREAD'

set `date +'%Y %m %d'`

# Binary log position of the master, as applied by the slave. With the SQL
# thread stopped, Relay_Master_Log_File / Exec_Master_Log_Pos are the
# coordinates the data on disk actually corresponds to (Master_Log_File /
# Read_Master_Log_Pos only show how far the I/O thread has read).
masterlogfile=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep 'Relay_Master_Log_File'`
masterlogpos=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep 'Exec_Master_Log_Pos'`

# Write Binlog Info
echo $masterlogfile >> ${backupdir}/info-$1-$2-$3.txt
echo $masterlogpos >> ${backupdir}/info-$1-$2-$3.txt

# Dump all of our databases
echo "Dumping MySQL Databases"
for database in $databases
do
$mysqldump $userpassword $dumpoptions $database | gzip - > ${backupdir}/${database}-$1-$2-$3.sql.gz
done

# Resume replication
$mysql $userpassword -e 'START SLAVE SQL_THREAD'

echo "Dump Complete!"

exit 0
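For what it's worth, the grep extraction above keeps the whole "Field: value" line; if you only want the bare value, awk can split it out. A minimal, self-contained sketch — the SHOW SLAVE STATUS text here is fabricated sample output, not from a real server:

```shell
#!/bin/sh
# Fabricated sample of SHOW SLAVE STATUS \G output; real values would
# come from: $mysql $userpassword -e 'SHOW SLAVE STATUS \G'
status='              Master_Log_File: mysql-bin.000042
          Read_Master_Log_Pos: 1234
        Relay_Master_Log_File: slave-relay.000017'

# Split each line on ": " and match the field name exactly; the anchored
# regex keeps Master_Log_File from also matching Relay_Master_Log_File.
logfile=$(printf '%s\n' "$status" | awk -F': ' '$1 ~ /^ *Master_Log_File$/ {print $2}')
logpos=$(printf '%s\n' "$status" | awk -F': ' '$1 ~ /^ *Read_Master_Log_Pos$/ {print $2}')

echo "$logfile:$logpos"
# prints mysql-bin.000042:1234
```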
Ross Duggan
Yup, that is similar to my second scenario, above. If the MySQL docs are to be believed, you can get the binary position of the master from the slave by stopping the slave thread and showing the slave's status. This doesn't require locking the master. But I'm hoping to find an automated solution that will store the binlog position in the course of running our everyday backup.
wbharding
Hey, had totally forgotten that the master status can be retrieved from the slave! Cheers for the reminder. I've added the information to the shell script that performs daily backups, so we should have binary log information written out now alongside the backups. I'll add the info to my answer; it will only be directly applicable if you're using a *nix system, but I'm sure if you're working on a Windows system you have your own way of doing it :)
Ross Duggan
The OP really doesn't want to do a FLUSH TABLES WITH READ LOCK on the master. Nor does anybody, really.
MarkR
That's not a read lock on the master, it's a read lock on the slave.
Ross Duggan
Hey Ross, great job, that *almost* works for me. If you could make the following modifications I'll accept this as the answer: all of the "-u <username> -p<password>" bits should be replaced with the $userpassword variable you declared, and the paths to mysql and mysqldump should be declared as variables in the top area, alongside the username and such. In Debian (which is what I'm running) they're in a different location. Other than that, this script has successfully dumped position+database in my test. Going to test re-importing it a bit later to verify it's all kosher, but it looks very promising.
wbharding
Also, it'd probably be best to just remove the top bit about locking master. Your second, slave-only solution is exactly what the question calls for.
wbharding
Hey, thanks for the cleanup tips, should be good now! Glad it helped.
Ross Duggan
A: 

Your second option looks like the right track.

I had to figure out a way to do differential backups using mysqldump. I ended up writing a script that chose which databases to back up and then executed mysqldump. Couldn't you create a script that follows the steps mentioned in http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_master-data and call it from a cron job?

  1. Connect to MySQL and run STOP SLAVE.
  2. Execute SHOW SLAVE STATUS.
  3. Store file_name and file_pos in variables.
  4. Dump the data and restart the slave.

Just a thought but I'm guessing you could append the "CHANGE MASTER TO" line to the dumpfile and it would get executed when you restored/setup the new slave.
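A sketch of that append step, with made-up binlog coordinates standing in for the values pulled from SHOW SLAVE STATUS:

```shell
#!/bin/sh
# Made-up coordinates; in practice these come from SHOW SLAVE STATUS
masterlogfile='mysql-bin.000042'
masterlogpos=1234
dumpfile=backup.sql

# Appending this line means restoring the dump on a fresh slave also
# points it at the right spot in the master's binlog; you would then
# run START SLAVE on the new machine.
printf "CHANGE MASTER TO MASTER_LOG_FILE='%s', MASTER_LOG_POS=%s;\n" \
    "$masterlogfile" "$masterlogpos" >> "$dumpfile"
```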

joatis
A: 

Can I ask if this tip worked: were you able to do a point-in-time recovery using the dump taken from the slave and the binary logs from the master, using the binary log position information?

David Felton
I've used this for successful point-in-time recovery.
Ross Duggan