I maintain a couple of low-traffic sites with a fair amount of user-uploaded media and moderately large databases. My goal is to back up, in one central place, all the data that is not under version control.

My current approach

At the moment I use a nightly cronjob that runs dumpdata to dump all the DB content into JSON files in a subdirectory of the project. The media uploads are already in the project directory (in media).
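
In case it helps, here is roughly what such a cronjob script could look like. It is only a sketch: the mysite.settings module, the backups/ subdirectory and the crontab line are placeholders for your own layout, and dumpdata's --output option needs a reasonably recent Django (1.8+); on older versions you would redirect stdout instead.

    #!/usr/bin/env python
    # Nightly dump of every installed app to a timestamped JSON file
    # inside a subdirectory of the project.
    # Example crontab entry: 0 3 * * * cd /srv/mysite && python dump_db.py
    import os
    from datetime import date

    import django
    from django.core.management import call_command

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
    django.setup()

    os.makedirs("backups", exist_ok=True)
    out_path = os.path.join("backups", "db-%s.json" % date.today().isoformat())

    # dumpdata serializes all installed apps; output/format/indent are its own options
    call_command("dumpdata", output=out_path, format="json", indent=2)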

After the DB is dumped, the files are copied with rdiff-backup (which makes incremental backups) to another location. I then download the rdiff-backup directory regularly with rsync to keep a local copy.
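
A sketch of that copy step, again with made-up paths and host names; it assumes rdiff-backup and rsync are installed and simply shells out to them:

    import subprocess

    PROJECT_DIR = "/srv/mysite"         # holds media/ and the JSON dumps
    BACKUP_DIR = "/var/backups/mysite"  # rdiff-backup repository

    # rdiff-backup keeps a current mirror plus reverse diffs, so older
    # snapshots stay restorable without storing full copies every night.
    subprocess.check_call(["rdiff-backup", PROJECT_DIR, BACKUP_DIR])

    # Run on the local machine (not the server) to pull down a copy:
    # rsync -az server:/var/backups/mysite/ ~/backups/mysite/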


Your Ideas?

What do you use to back up your data? Please post your backup solution - whether your site only gets a few hits per day or you maintain a high-traffic one with sharded databases and multiple fileservers :)

Thanks for your input.

+1  A: 

My backup solution works the following way:

  1. Every night, dump the data to a separate directory. I prefer to keep the data dump directory distinct from the project directory (one reason being that the project directory changes with every code deployment).

  2. Run a job to upload the data to my Amazon S3 account and another location using rsync.

  3. Send me an email with the log. (A rough sketch of these steps is shown after this list.)
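
Here is a rough sketch of steps 2 and 3, assuming the dump from step 1 already sits in a directory such as /var/backups/mysite. The answer does not say which S3 client is used; boto3, the bucket name, host names and e-mail addresses below are all placeholders:

    import glob
    import os
    import smtplib
    import subprocess
    from email.mime.text import MIMEText

    import boto3

    DUMP_DIR = "/var/backups/mysite"   # dump directory, kept outside the project
    BUCKET = "my-backup-bucket"
    log_lines = []

    # 2a. Upload each dump file to S3.
    s3 = boto3.client("s3")
    for path in glob.glob(os.path.join(DUMP_DIR, "*.json")):
        s3.upload_file(path, BUCKET, os.path.basename(path))
        log_lines.append("uploaded %s to s3://%s" % (path, BUCKET))

    # 2b. Mirror the dump directory to another location with rsync.
    subprocess.check_call(["rsync", "-az", DUMP_DIR + "/", "otherhost:/backups/mysite/"])
    log_lines.append("rsynced %s to otherhost" % DUMP_DIR)

    # 3. Mail the log.
    msg = MIMEText("\n".join(log_lines))
    msg["Subject"] = "Nightly backup log"
    msg["From"] = "backup@example.com"
    msg["To"] = "me@example.com"
    smtplib.SMTP("localhost").send_message(msg)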

To restore a backup locally, I use a script that downloads the data from S3 and loads it into my local setup.
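
Such a restore script could look roughly like this, reusing the same placeholder bucket; the fixture key and local path are likewise made up:

    import os

    import boto3
    import django
    from django.core.management import call_command

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
    django.setup()

    # Fetch one dump from S3 (the key is a placeholder for whichever day you want).
    boto3.client("s3").download_file("my-backup-bucket", "db-2010-10-05.json",
                                     "/tmp/db.json")

    # loaddata installs the fixture into the locally configured database.
    call_command("loaddata", "/tmp/db.json")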

Manoj Govindan