tags:
views: 2027
answers: 15

How do I do backups in MySQL?

I'm hoping there'll be something better than just running mysqldump every "x" hours.

Is there anything like SQL Server has, where you can take a full backup each day, and then incrementals every hour, so if your DB dies you can restore up to the latest backup?

Something like the DB log, where as long as the log doesn't die, you can restore up to the exact point where the DB died?

Also, how do these things affect locking? I'd expect the online transactions to be locked for a while if I do a mysqldump.

+10  A: 

You might want to look at incremental backups.

Kyle Cronin
@Justin Tanner Worked for me.
Chris Thompson
@Chris I just updated it
Kyle Cronin
+4  A: 

Now I'm beginning to sound like a marketer for this product. I answered a question with it here, then I answered another with it again here.

In a nutshell, try SQLyog (Enterprise in your case) from Webyog for all your MySQL requirements. It not only schedules backups, but also schedules synchronization, so you can actually replicate your database to a remote server.

It has a free Community edition as well as an Enterprise edition. I recommend the latter, though I also recommend you start with the Community edition first and see how you like it.

jake
A: 

@Jake,

Thanks for the info. Now, it looks like only the commercial version has backup features.

Isn't there ANYTHING built into MySQL to do decent backups?

The official MySQL page even recommends things like "well, you can copy the files, AS LONG AS THEY'RE NOT BEING UPDATED"...

Daniel Magliola
Does replication count as backing up? The ibbackup product referenced below seems to be a Linux-only solution with no Windows equivalent.
jake
+4  A: 

I use mysqlhotcopy, a fast on-line hot-backup utility for local MySQL databases and tables. I'm pretty happy with it.
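
For reference, a minimal sketch of wrapping mysqlhotcopy from a script (the database name, target directory, and credentials are placeholders, not from the original setup):

```python
def hotcopy_command(db, target_dir, user="backupuser", password="secret"):
    """Build a mysqlhotcopy invocation. It copies the table files directly
    while holding a read lock, so it is fast but briefly blocks writes.
    Note: mysqlhotcopy only handles MyISAM and ARCHIVE tables."""
    return ("mysqlhotcopy --user=%s --password=%s %s %s"
            % (user, password, db, target_dir))
```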

Leon Timmermans
+2  A: 

You might want to supplement your current offline backup scheme with MySQL replication.

Then if you have a hardware failure you can just swap machines. If you catch the failure quickly, your users won't even notice any downtime or data loss.

Harry
My website is currently on a shared host - is replication something I would have to implement on my own or is this something I should contact my host about?
Ali
+1  A: 

The problem with a straight backup of the mysql database folder is that the backup will not necessarily be consistent, unless you do a write-lock during the backup.

I run a script that iterates through all of the databases, doing a mysqldump and gzip on each to a backup folder, and then backup that folder to tape.

This, however, means that there is no such thing as incremental backups, since the nightly dump is a complete dump. But I would argue that this could be a good thing, since a restore from a full backup will be a significantly quicker process than restoring from incrementals - and if you are backing up to tape, it will likely mean gathering a number of tapes before you can do a full restore.
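
The per-database loop described above can be sketched like this (database names, credentials, and the backup directory are placeholders, not from the original setup):

```python
import subprocess
import time

# Placeholder values -- substitute your own credentials and paths.
DATABASES = ["appdb", "blogdb"]
BACKUP_DIR = "/var/backups/mysql"
USER, PASSWORD = "backupuser", "secret"

def dump_command(db, date=None):
    """Build the 'mysqldump | gzip' pipeline for one database."""
    date = date or time.strftime("%Y-%m-%d")
    return ("mysqldump -u %s -p%s %s | gzip -9 > %s/%s-%s.sql.gz"
            % (USER, PASSWORD, db, BACKUP_DIR, db, date))

def backup_all():
    """Dump each database in turn; the resulting folder then goes to tape."""
    for db in DATABASES:
        subprocess.call(dump_command(db), shell=True)
```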

In any case, whichever backup plan you go with, make sure to do a trial restore to ensure that it works, and get an idea of how long it might take, and exactly what the steps are that you need to go through.

Brent
+3  A: 

I use a simple script that dumps the mysql database into a gzipped, gpg-encrypted file and sends it to a mail account (Google Mail, but that's irrelevant really)

The script is a Python script, which basically runs the following command, and emails the output file.

mysqldump -u theuser -pmypassword thedatabase | gzip -9 - | gpg -e -r 12345 -r 23456 > 2008_01_02.sql.gz.gpg

This is the entire backup. It also has the web-backup part, which just tars, gzips and encrypts the files. It's a fairly small site, so the web backups are much less than 20MB, and can be sent to the GMail account without problem (the MySQL dumps are tiny, about 300KB compressed). It's extremely basic, and won't scale very well. I run it once a week using cron.

I'm not quite sure how we're supposed to put longish scripts in answers, so I'll just shove it in as a code block.

#!/usr/bin/env python
#encoding:utf-8
#
# Creates a GPG encrypted web and database backups, and emails it

import os, sys, time, commands

################################################
### Config

DATE = time.strftime("%Y-%m-%d_%H-%M")

# MySQL login
SQL_USER = "mysqluser"
SQL_PASS = "mysqlpassword"
SQL_DB = "databasename"

# Email addresses
BACKUP_EMAIL=["[email protected]", "[email protected]"] # Array of email(s)
FROM_EMAIL = "[email protected]" # Only one email

# Temp backup locations
DB_BACKUP="/home/backupuser/db_backup/mysite_db-%(date)s.sql.gz.gpg" % {'date':DATE}
WEB_BACKUP="/home/backupuser/web_backup/mysite_web-%(date)s.tar.gz.gpg" % {'date':DATE}

# Email subjects
DB_EMAIL_SUBJECT="%(date)s/db/mysite" % {'date':DATE}
WEB_EMAIL_SUBJECT="%(date)s/web/mysite" % {'date':DATE}

GPG_RECP = ["MrAdmin","MrOtherAdmin"]
### end Config
################################################

################################################
### Process config
GPG_RECP = " ".join(["-r %s" % (x) for x in GPG_RECP]) # Format GPG_RECP as arg

sql_backup_command = "mysqldump -u %(SQL_USER)s -p%(SQL_PASS)s %(SQL_DB)s | gzip -9 - | gpg -e %(GPG_RECP)s > %(DB_BACKUP)s" % {
    'GPG_RECP':GPG_RECP,
    'DB_BACKUP':DB_BACKUP,
    'SQL_USER':SQL_USER,
    'SQL_PASS':SQL_PASS,
    'SQL_DB':SQL_DB
}

web_backup_command = "cd /var/www/; tar -c mysite.org/ | gzip -9 | gpg -e %(GPG_RECP)s > %(WEB_BACKUP)s" % {
    'GPG_RECP':GPG_RECP,
    'WEB_BACKUP':WEB_BACKUP,
}
# end Process config
################################################

################################################
### Main application
def main():
        """Main backup function"""
        print "Backup commencing at %s" % (DATE)

        # Run commands
        print "Creating db backup..."
        sql_status,sql_cmd_out = commands.getstatusoutput(sql_backup_command)
        if sql_status == 0:
                db_file_size = round(float( os.stat(DB_BACKUP)[6]  ) /1024/1024, 2) # Get file-size in MB
                print "..successful (%.2fMB)" % (db_file_size)
                try:
                    send_mail(
                        send_from = FROM_EMAIL,
                        send_to   = BACKUP_EMAIL,
                        subject   = DB_EMAIL_SUBJECT,
                        text      = "Database backup",
                        files     = [DB_BACKUP],
                        server    = "localhost"
                    )
                    print "Sending db backup successful"
                except Exception,errormsg:
                    print "Sending db backup FAILED. Error was:",errormsg
                #end try

                # Remove backup file
                print "Removing db backup..."
                try:
                        os.remove(DB_BACKUP)
                        print "...successful"
                except Exception, errormsg:
                        print "...FAILED. Error was: %s" % (errormsg)
                #end try
        else:
                print "Creating db backup FAILED. Output was:", sql_cmd_out
        #end if sql_status

        print "Creating web backup..."
        web_status,web_cmd_out = commands.getstatusoutput(web_backup_command)
        if web_status == 0:
                web_file_size = round(float( os.stat(WEB_BACKUP)[6]  ) /1024/1024, 2) # File size in MB
                print "..successful (%.2fMB)" % (web_file_size)
                try:
                    send_mail(
                        send_from = FROM_EMAIL,
                        send_to   = BACKUP_EMAIL,
                        subject   = WEB_EMAIL_SUBJECT,
                        text      = "Website backup",
                        files     = [WEB_BACKUP],
                        server    = "localhost"
                    )
                    print "Sending web backup successful"
                except Exception,errormsg:
                    print "Sending web backup FAILED. Error was: %s" % (errormsg)
                #end try

                # Remove backup file
                print "Removing web backup..."
                try:
                        os.remove(WEB_BACKUP)
                        print "...successful"
                except Exception, errormsg:
                        print "...FAILED. Error was: %s" % (errormsg)
                #end try
        else:
                print "Creating web backup FAILED. Output was:", web_cmd_out
        #end if web_status
#end main
################################################

################################################
# Send email function

# needed email libs..
import smtplib
from email.MIMEMultipart import MIMEMultipart
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email.Utils import COMMASPACE, formatdate
from email import Encoders

def send_mail(send_from, send_to, subject, text, files=[], server="localhost"):
        assert type(send_to)==list
        assert type(files)==list

        msg = MIMEMultipart()
        msg['From'] = send_from
        msg['To'] = COMMASPACE.join(send_to)
        msg['Date'] = formatdate(localtime=True)
        msg['Subject'] = subject

        msg.attach( MIMEText(text) )

        for f in files:
                part = MIMEBase('application', "octet-stream")
                try:
                    part.set_payload( open(f,"rb").read() )
                except Exception, errormsg:
                    raise IOError("File not found: %s"%(errormsg))
                Encoders.encode_base64(part)
                part.add_header('Content-Disposition', 'attachment; filename="%s"' % os.path.basename(f))
                msg.attach(part)
        #end for f

        smtp = smtplib.SMTP(server)
        smtp.sendmail(send_from, send_to, msg.as_string())
        smtp.close()
#end send_mail
################################################

if __name__ == '__main__':
        main()
dbr
+7  A: 

mysqldump is a reasonable approach, but bear in mind that for some engines, this will lock your tables for the duration of the dump - and this has availability concerns for large production datasets.

An obvious alternative to this is mk-parallel-dump from Maatkit (http://www.maatkit.org/) which you should really check out if you're a MySQL administrator. This dumps multiple tables or databases in parallel using mysqldump, thereby decreasing the total time your dump takes.

If you're running in a replicated setup (and if you're using MySQL for important data in production, you have no excuses not to be doing so), taking dumps from a replication slave dedicated to the purpose will prevent any lock issues from causing trouble.

The next obvious alternative - on Linux, at least - is to use LVM snapshots. You can lock your tables, snapshot the filesystem, and unlock the tables again; then start an additional MySQL using a mount of that snapshot, dumping from there. This approach is described here: http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/
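
The snapshot sequence can be outlined as an ordered list of commands (volume, device, and mount names here are made up for illustration):

```python
# Rough outline of the LVM snapshot approach. Caveat: FLUSH TABLES WITH
# READ LOCK only lasts while the issuing client stays connected, so a
# real script must hold one mysql session open across the lvcreate step
# rather than run these as fully independent commands.
STEPS = [
    'mysql -e "FLUSH TABLES WITH READ LOCK;"',    # quiesce writes
    "lvcreate --snapshot --size 1G --name mysql_snap /dev/vg0/mysql",
    'mysql -e "UNLOCK TABLES;"',                  # writes resume
    "mount /dev/vg0/mysql_snap /mnt/mysql_snap",  # dump/copy from here
]

def snapshot_plan():
    """Return the backup steps in the order they must run."""
    return STEPS
```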

Jon Topper
+1  A: 

The correct way to run incremental or continuous backups of a MySQL server is with binary logs.

To start with, lock all of the tables or bring the server down. Use mysqldump to make a backup, or just copy the data directory. You only have to do this once, or any time you want a FULL backup.

Before you bring the server back up, make sure binary logging is enabled.

To take an incremental backup, log in to the server and issue a FLUSH LOGS command, then back up the most recently closed binary log file.

If you have all InnoDB tables, it's simpler to just use InnoDB Hot Backup (not free) or mysqldump with the --single-transaction option (you'd better have a lot of memory to handle the transactions).
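
After FLUSH LOGS the server opens a new binary log file, so the file to archive is the newest one before the active log, i.e. the second-highest sequence number. A small helper (filenames are examples):

```python
def newest_closed_binlog(filenames):
    """After FLUSH LOGS, every binary log except the highest-numbered
    one is closed; return the most recent of those, or None if only the
    active log exists. Non-numbered files (e.g. the .index file) are
    skipped."""
    logs = sorted(f for f in filenames if f.split(".")[-1].isdigit())
    return logs[-2] if len(logs) >= 2 else None
```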

longneck
+1  A: 

Binary logs are probably the correct way to do incremental backups, but if you don't trust binary file formats for permanent storage here is an ASCII way to do incremental backups.

mysqldump is not a bad format; the main problem is that it outputs each table as one big line. The following trivial sed will split its output along record borders:

mysqldump --opt -p | sed -e "s/,(/,\n(/g" > database.dump

The resulting file is pretty diff-friendly, and I've been keeping them in a standard SVN repository fairly successfully. That also allows you to keep a history of backups, if you find that the last version got borked and you need last week's version.
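
The same record-splitting can be done in Python if sed isn't handy; this is equivalent to the sed expression above:

```python
import re

def split_records(dump_text):
    """Insert a newline after every ',(' so each INSERT row sits on its
    own line and diffs between nightly dumps stay small. Note it will
    also split string values that happen to contain ',(' -- harmless
    for diffing, since the dump still reloads identically."""
    return re.sub(r",\(", ",\n(", dump_text)
```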

DirkReiners
A: 

@Daniel,

In case you are still interested: there is a newish (new to me) solution shared by Paul Galbraith, a tool that allows for online backup of InnoDB tables, called ibbackup, from Oracle, which, to quote Paul,

when used in conjunction with innobackup, has worked great in creating a nightly backup, with no downtime during the backup

More detail can be found on Paul's blog.

jake
+2  A: 

You can make full dumps of InnoDB databases/tables without locking (downtime) via mysqldump with the --single-transaction --skip-lock-tables options. Works well for making weekly snapshots plus daily/hourly binary log increments (see "Using the Binary Log to Enable Incremental Backups" in the MySQL manual).
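
A sketch of how the two pieces fit together (paths, the weekly/daily schedule, and credentials handling are assumptions, not from the original answer):

```python
import time

def full_dump_command(date=None):
    """Weekly non-locking InnoDB snapshot; safe against a live server.
    --master-data=2 records the current binlog coordinates as a comment,
    so you know where the binary log increments start."""
    date = date or time.strftime("%Y-%m-%d")
    return ("mysqldump --single-transaction --skip-lock-tables"
            " --all-databases --master-data=2"
            " | gzip > /var/backups/full-%s.sql.gz" % date)

def rotate_binlog_command():
    """Close the current binary log so the finished one can be archived
    as the daily/hourly increment."""
    return 'mysql -e "FLUSH LOGS;"'
```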

pigz
A: 

Sounds like you are talking about transaction rollback.

So in terms of what you need: if you have the logs containing all historical queries, isn't that the backup already? Why do you need an incremental backup, which is basically a redundant copy of all the information in the DB logs?

If so, why don't you just use mysqldump and do the backup every once in a while?

kavoir.com
+3  A: 

The Percona guys made an open source alternative to innobackup ...

Xtrabackup

https://launchpad.net/percona-xtrabackup/

Read this article about XtraDB: http://www.linux-mag.com/cache/7356/1.html

jipipayo
A: 

This is a pretty solid solution for the Linux shell. I have been using it for years:

http://sourceforge.net/projects/automysqlbackup/

  • Does rolling backups: daily, monthly, yearly
  • Lots of options
phirschybar