Everyone uses source-code control to manage versions (right?) and this provides some level of backup. There are times, however, when your local copy is out of sync with the repository. Moreover, some sandbox-type projects may not have yet ;-) made it into SCC.

EDIT: I have multiple projects in my projects directories. Not all are in current development, but any one of them might need to be "fixed" whenever a bug is found. Restoring a single, active project from SCC seems perfectly reasonable. Restoring all of the couple dozen projects that I support from SCC seems less reasonable than restoring from a backup and syncing as necessary from SCC.

What backup strategies, other than source code control, do you use to keep your code safe?

A similar question has been asked before, but I'm more interested in hearing others' personal strategies if you happen to work in an organization that has no overall strategy. I'll provide my own strategy in an answer.

+3  A: 

I use Microsoft SyncToy 2.0 to synchronize my project directories with a folder on a network share. I have separate scheduled tasks that run different SyncToy scripts for the various directories (broken down by Visual Studio version).

+26  A: 

My strategy is: always check in, and back up the entire repository.

I never leave anything out of source control, and I make sure regular backups (incremental daily, full weekly, with monthly rotation) are happening and are functional.

Vinko Vrsalovic
excellent advice.
Do you check in if your tests don't pass? I typically don't and so can occasionally leave for the day without having checked in everything.
In TFS, there is the "Shelve Pending Changes..." feature, which allows you to "check-in" w/o affecting the branch you are in.
Greg Ogle
+1 for Shelving - it's an excellent feature.
I hadn't thought of that. I've been using my SyncToy script since before starting to use TFS. If you had put that in an answer I would have upvoted it.
Yes, I also check in broken code (in a special branch or whatever the current source control software calls the concept).
Vinko Vrsalovic
You didn't answer HOW you backup the entire repository (mechanisms used - script, 3rd party program, etc) and where it goes.
Milan Gardian
The how depends on the current environment. Currently I use SVN as source control and BackupPC to backup the repo and other things. The BackupPC storage gets copied by a crontab script to a portable disk I carry home.
Vinko Vrsalovic
If you set up your source control repository to allow for private user branches, then you can commit as needed to a private area and promote when you feel confident to a common team or integration branch
Peter Kahn
+1  A: 

We also check everything in (check in early and often) and back up the entire repository (CVS) with tar and ftp it to our backup server.
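A minimal sketch of that tar-and-upload step. The answer gives no specifics, so the paths are made up and a tiny stand-in repository is created so the script runs as-is; the FTP upload is shown only as a comment since the backup server is hypothetical.

```shell
#!/bin/sh
set -e
# Sketch: archive the CVS repository nightly, then upload the tarball.
# REPO and DEST are hypothetical; a dummy repo is created for the demo.
REPO=/tmp/demo-cvsroot
DEST=/tmp/cvs-backups
mkdir -p "$REPO" "$DEST"
echo "dummy,v" > "$REPO/module.v"        # stand-in for real RCS files

STAMP=$(date +%Y-%m-%d)
tar czf "$DEST/cvsroot-$STAMP.tar.gz" -C "$(dirname "$REPO")" "$(basename "$REPO")"

# The upload step would go here, e.g. via curl's FTP support
# (hypothetical host and credentials):
# curl -T "$DEST/cvsroot-$STAMP.tar.gz" ftp://backup.example.com/ --user user:pass
ls -l "$DEST"
```

Keeping the tarball date-stamped means the backup server accumulates a simple history rather than overwriting a single file.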

Jim Blizard
+6  A: 

(In addition to source control on a remote server) I use the free version of SyncBack and this batch file, where the arguments to SyncBack.exe specify previously configured SyncBack backup profiles:

@echo off

echo Stop and start SQL Server
echo -------------------------

net stop "SQL Server (SQLEXPRESS)"
net stop "SQL Server (SQLSERVER2008)"
echo -----------------------------------------------------------
echo Back up running now... please wait.

"C:\Program Files\2BrightSparks\SyncBack\SyncBack.exe" c e-contents f-contents

echo Backing up done. Starting SQL Server...
echo -----------------------------------------------------------

net start "SQL Server (SQLEXPRESS)"
net start "SQL Server (SQLSERVER2008)"

echo -----------------------------------------------------------
echo Back up is done and SQL Server is running now.
echo -----------------------------------------------------------


with two 8 GB flash drives every day. At the end of the week, I do the same thing but target a desktop external drive instead.

SyncBack is great!

+5  A: 

In my opinion, rebuilding everything from SCC every now and then (during the night, for example) is good practice in any case. Doing so makes sure that you haven't forgotten to add any essential file to the repository. The whole procedure should require at most a couple of steps.

Implementing continuous integration at the server side will actually ensure that this happens at build time (which is at check-in for us).
If the full build times are too long, it is a good idea to have a full rebuild every night, and an incremental build at check-in (or polling at 10-15 minute intervals).
+1  A: 

When you're writing something that doesn't (yet) belong in the main build, create a branch. When it should go into the main build, merge your branch into it.

Distributed VCSes also make local branches really easy; the central repository will never know they existed.

Backing up a local repository (of a distributed VCS) by pushing changes onto a remote copy is so trivial that I use git as my main method of backup for most documents, configuration files, basically anything non-binary.
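That push-to-a-remote-copy workflow can be sketched end to end with a bare repository standing in for the off-machine copy. Everything here is local and the paths are made up for the demo; in practice the bare repository would live on another host reached over SSH.

```shell
#!/bin/sh
set -e
# Sketch: use git itself as the backup mechanism for non-binary files.
# A bare repo stands in for the remote backup location (hypothetical paths).
BACKUP=/tmp/backup-remote.git
WORK=/tmp/work-docs
rm -rf "$BACKUP" "$WORK"

git init --bare "$BACKUP"
git init "$WORK"
cd "$WORK"
git config user.email demo@example.com   # identity needed to commit in a clean env
git config user.name  demo

echo "alias ll='ls -l'" > bashrc         # any non-binary file: config, docs, code
git add bashrc
git commit -m "snapshot"

git remote add backup "$BACKUP"
git push backup HEAD                     # the entire "backup" is one push
```

Because git transfers only the new objects, repeated backups of mostly-unchanged files cost almost nothing, which is what makes this trivial enough to use for dotfiles and documents.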

+3  A: 

For everything but the most simple 5 min test things I use version control, Subversion in my case.

I run Linux and a Subversion server on some old hardware and commit there. A cron script archives the repository every night (if it has changed since last time) and mails it to my Gmail account as an attachment, with the changelog in the body. With Gmail's 20 MB attachment limit, all but the most binary-intensive repositories can be backed up without splitting the files.
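For the repositories that do exceed the attachment limit, the dump can be chopped into mailable pieces with `split`. A sketch under stated assumptions: a dummy 2 MB file stands in for the repository dump (a real one would come from `svnadmin dump`), and a 512 KB chunk size keeps the demo small where a real run would use something like `19m`.

```shell
#!/bin/sh
set -e
# Sketch: split a (stand-in) repository dump into attachment-sized parts.
DUMP=/tmp/repo.dump.gz
dd if=/dev/zero of="$DUMP" bs=1024 count=2048 2>/dev/null   # dummy 2 MB "dump"

rm -f "$DUMP".part-*
split -b 512k "$DUMP" "$DUMP.part-"      # each part fits under the mail limit
ls "$DUMP".part-*

# Each part is attached to a separate mail; the receiver reassembles with:
#   cat repo.dump.gz.part-* > repo.dump.gz
```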

I plan to rework this to put the backups on Amazon S3 but haven't gotten around to it yet.

Most important thing IMHO is to always have a backup someplace else (geographically), not just on a USB-drive or something.

For very small five-minute tests, I just put them in my Dropbox.

+1  A: 

If you diverge from your source control for significant amounts of time, then you need some distributed source control.

+2  A: 

Though this is a subjective answer, I think you are not using source control properly.

Yes, your local copy is often out of sync with the repository, but any given change should only be a small amount of work (e.g., you shouldn't have stuff that is not checked in for days on end). If you are committing often, then in the case of a drive loss (theft/failure/etc.) you lose only a small amount (typically less than a day) of work.

If you are doing something totally crazy that is disruptive to other developers, then you should be working in a branch. When you're done, merge your changes back.

You should also be able to rebuild the project from your SCC system at any time. It's a good thing to do from time to time, just to make sure that everything you need to build is in SCC -- sometimes files get missed, and you never notice because you always build from the working copy that you've been using for the past 6 months.

Actually, I do try to check in daily, but have you ever had your wife call and ask "why aren't you home yet", you look at the clock and realize that you're an hour late, your tests still aren't passing, and you can't check in until they do? Daily check-ins are nice in theory, but there are times...
Daily? I check in probably 3-5 times/day. Basically every time I complete a "thought" and have something working, it gets checked in. Even if it's just a couple lines in a method in one class, or a few minor changes in some comments.
Adam Jaskiewicz
I also check in multiple times per day, the point was sometimes there is broken code that you don't want to check in when you leave.
If it's in my own branch, I sure do.
Adam Jaskiewicz
+2  A: 

I use Mercurial as my version control system. The repository on my Windows laptop is my main repository, but I use Mercurial's clone feature to back it up to my Ubuntu server every two or three days. I also use SyncToy to back up important directories to a flash drive, including the copy of the repository on my laptop.

+8  A: 

At the end of the day I check my code into source control.

At around midnight Mozy kicks on and backs up my code off site.

At around 1AM the SC box gets backed up to tape.

At around 3AM Syncback SE wakes up and backs up my code to an external HD.

Throughout the day my work box syncs with my home box using Live Sync

you can't be too safe, can you
I guess it does seem like a bit much when I look at it but I had a buddy lose quite a lot of important files through a series of unlikely disasters despite having them on 2 boxes.
that's insane :) Downside of this: if one of these locations is compromised, your source code will be on torrent websites in no time (assuming the source code is private)
dr. evil
True, that's why I encrypt everything before it leaves. Mozy does Blowfish on the client side and SyncBack has a similar option.
+2  A: 

I use Unison to replicate my entire home directory on two different machines at home. This way if I am sloppy or if I have 20-year-old files not under source control (.emacs) I still have a measure of protection. I also replicate everything except personal files (photos, music) on a machine at work as well.

Norman Ramsey
+6  A: 

OSX's Time Machine


Since I use TFS (Team Foundation Server) I just back up the SQL Server database as any other database I use

Juan Manuel
+2  A: 

Version Control (SVN) is more than enough for me. Yet, there are some rules:

  • I commit as often as possible (4-6 hours of work without committing already start creating this tingling sensation of something going wrong).
  • SVN structures of solutions are always atomic: a fresh checkout is all you need to run the "rebuild-copy-package" integration script on any solution (running tests might require providing DB connection settings first).
  • SVN server is reliable and backed up regularly.
  • Changes are being propagated between the different solutions composing the application (i.e. from open source shared library to the internal code that leverages it) only via the commits (integration server picks this up and creates packages that could be used in the solutions down-stream).
  • Sandbox projects (prototypes) are always kept in Prototypes folder of SVN (sibling of Trunk or Tags) being named "YYYY-MM-DD PrototypeName"
Rinat Abdullin
+1  A: 

I'm working on a product called "Transactor Code Agent", which is designed to do just what you are asking for.

It provides local backup and version control for your source files.

It lets you use your existing source control setup for what it was meant for (managing "mostly completed" work by multiple developers over multiple releases), while providing you with an automated backup and local file version control for your in-progress work.

The beta should be coming out sometime in January.

You can see our "website" (it's a little rough) at

There's a form there you can use to signup for the private beta.


Here's a little bit more information, based on some feedback I got in the comments:

1) Do I have a thing against source control?

No! I think source control is a wonderful thing. When used properly it provides a tremendous tool for managing the software life cycle.

But, even when it is used properly, source control leaves a big gap: it doesn't protect a developer's work until it's finished. What's needed is something that focuses on the in-progress work of individual programmers. Code Agent does that.

To put it differently, source control is a tool designed to make your boss's life easier (because it helps to manage features and changes and teams and versions over time).

Code Agent is a tool designed to make your life easier (because it makes sure that your work is always saved).

Scott Wisniewski
So, is your software really a compliment to source control? It sounds more like an insult! :-)
Vinko Vrsalovic
BTW, I think you should add to your website a paragraph explaining how does Transactor compare to a distributed VCS.
Vinko Vrsalovic
Hi Vinko, thanks for the feedback. No, I don't mean to insult source control at all. I love source control. It's just that there are a few things source control doesn't do. My tool is designed to fill that gap. I will update the web site.
Scott Wisniewski
Vinko, I updated the web site. You should take a look when you get a chance.
Scott Wisniewski
The insult comment was a pun on 'compliment' (which is what you have on your page) v/s 'complement' (which is what you actually mean). It does look better now.
Vinko Vrsalovic
Ah... it was one of those situations where I just wasn't smart enough to realize I was being teased... Thanks for being nice enough to dumb it down for me.
Scott Wisniewski

In case anything should fail, I will sometimes e-mail myself important pieces I am working on to webmail accounts such as Yahoo or Hotmail. I know everyone is talking about switching from paper to digital, but you never know what is going to happen, so I will also print out hard copies. Obviously, this is not the best solution, especially for a large project, so I usually restrict hard copies to smaller, more important pieces. I also tend to be a bit paranoid, so I end up taking a backup of a backup of a backup.

+1  A: 

Code which isn't checked in (and hence backed up) to your VCS, does not exist. It's no more real than code which you just have in your head. It really is that simple.

+2  A: 


... well, at least not automatically

Sync tools, such as Unison, synchronize two (or more) locations. Thus, if you accidentally mess up a file in one location, the mess will be propagated to the other one and you won't notice.

+1 for sync != backup
bidirectional syncs are dangerous for backups. rsync (and others) are perfectly good, and very efficient
SyncToy has a unidirectional option, which is what I use.
I agree with the post. I think you should use my tool, "Transactor Code Agent", to do this.
Scott Wisniewski
+3  A: 

no one wants to have to rebuild their entire project directory from SCC if the disk drive dies

Huh? We always do it this way. In fact we have a build server that continuously performs fresh builds from a clean checkout. If restoring from a backup seems to be a better way than restoring from the SCC, you need to improve your SCC.

For all code that is not ready for production, we have directories called "playground" and "junk" in the SCC.

A single project, yes, but I've got a couple of dozen different projects, not all of them currently in development, that I refer to at different times. It would probably be different if I only had one or two projects to check out. I could check them all back out as needed, but a backup seems easier.
I may be wrong, but I think that either one or both among your source control software or your knowledge of it suck, tvanfosson :)
Vinko Vrsalovic
we have more than 100 projects at our company, some of them quite large (they take about 45 minutes to compile). We use Subversion for everything, and it works really well. The only thing we back up is the Subversion server.

I use rdiff-backup to perform a daily incremental backup of my laptop over SSH. It uses delta compression (like rsync) so it's very quick. It also lets you go back any number of days in the backed-up data, so you can go back to right after you finished some complicated code, but before you accidentally deleted it all.

It's a little tricky to get going but well worth it in my opinion.
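For reference, the daily run described above might look like the following crontab fragment. The host name, user, and paths are entirely hypothetical; the retention window is just an example.

```
# Hypothetical crontab entries for a daily rdiff-backup over SSH.
# 01:30 every day: incremental backup of the home directory to "backupbox".
30 1 * * *  rdiff-backup /home/me me@backupbox::backups/laptop
# 02:45 on Sundays: prune increments older than two months.
45 2 * * 0  rdiff-backup --remove-older-than 2M me@backupbox::backups/laptop
```

Restoring "how things looked three days ago" is then a matter of running rdiff-backup with `-r 3D` against the backup location.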


Aside from Subversion, I use CrashPlan for online offsite backups. It can also back up to local storage and other computers (though it currently seems to require the same backup set to be stored at each destination -- i.e., you can't store a smallish set of vital stuff offsite and a larger set locally).

I also use unison (for things too large to backup offsite - music, movies etc), and an OSX time capsule, so that in the event of data loss, I can hopefully restore without resorting to online backup. The online backup is intended for a disaster like the house burning down or being burgled.


There are times, however, when your local copy is out of sync with the repository. Moreover, some sandbox-type projects may not have yet ;-) made it into SCC.

Firstly, you should really try to minimize the time your code is 'out' of SCC -- not for backup purposes, but for keeping track of what was done when, and particularly why. Commit comments are invaluable. A big check-in containing 3000 files with the message "Initial Revision" is not very useful.

The sandbox projects argument does hold some weight, but then you should just treat it the same way you treat all your other files. Back them up to an external USB drive, or whatever. If you're not backing up all your other files, I suggest you start now.

And, of course, no one wants to have to rebuild their entire project directory from SCC if the disk drive dies -- much better to just restore from an actual backup.

Isn't this just svn checkout? Why not just 'rebuild from SCC' ?

Orion Edwards
See my edit -- it's a couple of dozen different projects in different stages of life. With my sync solution I can get them all back in one shot if the drive dies, instead of having to copy and rebuild them all.

Subversion: server is managed by Beanstalk, client uses Tortoise SVN. After every coding session, everything goes back to the SVN repository so I never have to worry about losing code. I also periodically back up the latest code to a CD and lock it in a vault just to be sure!

Also, keep in mind that your code is only one part of the equation. Most modern development environments require significant customization in and of themselves (integrating 3rd-party tools being the obvious example, but just installing the IDE with appropriate options takes quite a while as well). Thus, I also do all of my development in a virtual machine that I can easily back up to an external hard drive. These get locked in the vault as well.

Finally, I backup a "reference" database to complete the picture. I cannot just backup the schema because, in my product, there is significant system data kept in the database (e.g. content delivered with the web site).

Mark Brittingham

I use MozyPro to automate an off site backup of the current code on my machine as well as the source code control database. This runs incrementally every night.


I use a product (that I wrote; it's my micro-ISV) called Transactor Code Agent. It's a backup tool designed specifically for programmers.

It watches your source code, and every time you save a change it backs it up and keeps a local history for you.

I think it works a lot better for backup than source control does for several reasons:

  1. Source Control is meant to be a change management tool, to help your program move from one consistent state to another
  2. You don't have to worry about maintaining a private branch
  3. You don't have to interrupt your work to make check-ins purely for backup purposes. You can just focus on writing code, and check in your stuff when it's done.

You can download a demo of it here:

Scott Wisniewski

Online (Internet) backups are an important part of the process.

All kinds of backups to external drives are doomed to failure unless they are made by an appointed person (such as a secretary). If you're a very small shop (or a µ-ISV, such as me), this is not an option. Even then, where is the external drive kept? A safe with fire protection is the only good answer. Storing them offsite is not good: people WILL forget to bring the drive back to the office for the periodic backup.

Backups to NAS are IMHO a better solution than external drives. But the day the building is on fire, offsite backups are your only chance to stay alive.

I personally use Mozy to back up the main local directories in addition to the SCC DB.

Needless to say, AES-256 or similar encryption is a must-have for storing your source code on someone else's hard drives. Mozy and all its serious competitors offer it.

Serge - appTranslator

Joel from joelonsoftware said in some post that if building and deploying your project takes more than two command lines (or more than one minute of preparation), you are doing it wrong. I completely agree with him, and I think an SCM should be enough. Backup systems are just for catastrophic disasters (HDD failures, fires and tornados).