
I'm a big fan of regular, automatic backups. However, most of my experience is at a personal level. I've recently joined an organization (a university department) where almost nobody seems to have a backup strategy in place. I'd like to promote good backup practices, but aside from telling them what I do at a personal level, I don't quite know what to do.

Does anyone have any experience designing an organization-wide backup strategy? Particularly for an organization where some individuals are not that interested in doing backups?

A: 

@Will:

Almost all Windows boxes. Two or three folks running Macs, but nobody doing any other sort of *nix on the desktop.

Chris Upchurch
A: 

This is a super simple answer, but when I joined the team I'm currently on, I implemented a simple backup system using batch files that ran on the hour and copied everything out to our NAS. It worked for our team because all of our work lives under one central directory, so I could just back up that one location.
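
A minimal sketch of the same idea in Python rather than batch - the directory and NAS paths here are placeholders, not the original setup:

    import shutil
    import time
    from datetime import datetime
    from pathlib import Path

    SOURCE = Path(r"C:\projects")          # placeholder central directory
    NAS = Path(r"\\nas\backups\projects")  # placeholder NAS share

    def backup_once():
        # Copy the whole tree into a timestamped folder on the NAS,
        # so each hourly run is kept separately.
        dest = NAS / datetime.now().strftime("%Y-%m-%d_%H00")
        shutil.copytree(SOURCE, dest)

    if __name__ == "__main__":
        while True:
            backup_once()
            time.sleep(3600)  # roughly "on the hour"; Task Scheduler is tidier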

Ethan Gunderson
A: 

There are lots of organisations "where some individuals are not that interested in doing backups"

... until they need to use them to restore their systems! (-:

Make the nay-sayers aware of the potential consequences of their apathy.

It's difficult, I know, but convincing them of the effects of losing everything is going to well and truly pay off in the future.

I'd suggest doing some reading on various disaster recovery scenarios and going from there.

At least they have someone who is interested enough to start discussing and publicising this.

Good on you!

cheers,

Rob

Rob Wells
+1  A: 

Here's how I'd go about it (and how I have before):

  1. Buy, beg, steal or borrow a fileserver. For normal office types, you shouldn't need any more than 100 MB/user. If you include email, up your estimate to between 500 MB and 1 GB/user. AutoCAD engineers, Photoshop junkies, and developers will require specialized handling. RAID 1 mirroring is required; hardware RAID is nice, but software RAID will do in a pinch.
  2. Get a tape drive. You can get a 160/320 DLTv4 drive for under $1000. If you're cash poor but hardware rich, you could set up a secondary box (it doesn't need RAID) to back up to - but tapes are much easier.
  3. Use your choice of backup software to back it up. I like BackupExec when given a choice, but have resorted to the Windows included NTBackup when money is tight.
  4. When the size of your data exceeds the size of your tape, then you can either start pruning data or rotating tapes.
  5. Use a logon script, and give everybody an H: (Home) drive. Do not remap My Documents to it. (There's a sketch of such a script just after this list.)
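
For step 5, the logon script can be a single net use call. Here's a hedged sketch in Python; a plain batch file with the same net use line works just as well, and the server and share names are assumptions:

    import getpass
    import subprocess

    # Map H: to the user's home share on the file server.
    # "fileserver" and the share layout are placeholders, not a real setup.
    user = getpass.getuser()
    subprocess.run(
        ["net", "use", "H:", rf"\\fileserver\home\{user}", "/persistent:no"],
        check=True,
    )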

At this point, you have a reasonable file server setup with a decent backup strategy in place. You can fill in some holes with Windows shadow copies, incremental backups during the day, etc. for more "realtime" protection - but you've got the basics covered. Now comes the hard part: getting users on board.

  1. Tell users about the Home drive.
  2. Tell users about the backup strategy.
  3. Strongly remind them that their personal computer is not backed up, and is subject to crashing or being reformatted at any time.
  4. Follow up with any user (and their boss) when they don't use it. Either they have no data worth protecting, or are being stupid. Make their boss aware.
  5. Save the day when somebody needs their files restored.

The things you have to watch out for are trying to back up individual machines, or trying to give unlimited space. That could work with a technically minded group, but for most people you'll just be backing up 100 GB of their cat pictures. You don't want to be buying tape drives and tapes for that.

Mark Brackett
+5  A: 

Does anyone have any experience designing an organization-wide backup strategy?

I run a network for a school, with a mix of Linux and Windows machines. We have around 800 pupils and 100 staff to cater for, which is probably larger than your university department, but the same general solutions are probably relevant.

There are (at least) two types of backups: disaster recovery (the building burns down and you need to get everything back up) and revision control (someone accidentally deletes a Word document and needs it back). I handle disaster recovery by having all our servers run as virtual machines, so "snapshots" of their disk images can be taken. Snapshots of each server are written to a removable hard drive (not necessarily every night, as that would take a lot of bandwidth, but every time you make a configuration change to the server), which is then removed and placed in a fireproof safe or taken off site (another advantage of virtual machines: you can also encrypt the data being taken off site). Each server VM is also mirrored in real time to a separate physical machine - if one machine goes down, the other takes over right away.

Revision control is handled by a Python script that scans the file system and uploads changed files to a central server. The file system on that server makes extensive use of Unix-style hard links - i.e. only one copy of a given file is ever stored, and subsequent unchanged copies are simply linked to it. This allows you to have a full file system tree for each day's backup while using only a fraction of the actual disk space (you just need enough space for any files changed since the last backup, plus the links themselves). This is the same general principle that the Mac's Time Machine uses. Users needing to restore an old file can simply browse that file system.
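
Here's a minimal sketch of that linking scheme, working on plain local directories rather than doing the upload step (all paths are hypothetical):

    import os
    import shutil

    def snapshot(source, prev, dest):
        """Create dest as a full-looking copy of source, hard-linking any
        file that is unchanged since the prev snapshot (prev may be None)."""
        for dirpath, _dirnames, filenames in os.walk(source):
            rel = os.path.relpath(dirpath, source)
            os.makedirs(os.path.join(dest, rel), exist_ok=True)
            for name in filenames:
                src = os.path.join(dirpath, name)
                dst = os.path.join(dest, rel, name)
                old = os.path.join(prev, rel, name) if prev else None
                st = os.stat(src)
                if (old and os.path.exists(old)
                        and os.stat(old).st_mtime == st.st_mtime
                        and os.stat(old).st_size == st.st_size):
                    os.link(old, dst)        # unchanged: share the stored copy
                else:
                    shutil.copy2(src, dst)   # changed or new: store a real copy

    # e.g. snapshot("/home", "/backups/2008-09-01", "/backups/2008-09-02")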

All your Windows machines should hopefully be joined to a Windows domain, which gives them access to a network file area; that area can be backed up easily enough with the methods above. At the school we set up the domain to stop users storing files locally and force them to use only the network areas, but that might not be practical in your situation (your users might need to work with big files that wouldn't be usable over your network connection). In that case you could run something like that backup script on each workstation - just get people to remember to leave their PCs on at night (or use Wake-on-LAN to turn them on via their Ethernet cards, and have the script switch them off afterwards if you want to save power).
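
Waking the machines for a night-time run just needs a standard Wake-on-LAN "magic packet"; here's a minimal sketch (the MAC address is a placeholder):

    import socket

    def wake(mac):
        # A Wake-on-LAN "magic packet": 6 bytes of 0xFF followed by the
        # target's MAC address repeated 16 times, sent as a UDP broadcast.
        payload = bytes.fromhex("ff" * 6 + mac.replace(":", "") * 16)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, ("255.255.255.255", 9))

    wake("00:11:22:33:44:55")  # placeholder MAC address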

David Hicks
A: 

I started using ZFS for home directories at work a few months ago and it works really well for us. I keep rolling snapshots of all my files online, and a script runs every night to copy the data to a different server.
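
Here's roughly what such a nightly job can look like; the dataset names and remote host are placeholders, not the actual setup:

    import subprocess
    from datetime import date

    DATASET = "tank/home"        # placeholder ZFS dataset
    REMOTE = "backup-server"     # placeholder host reachable over SSH

    snap = f"{DATASET}@{date.today().isoformat()}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # Stream the snapshot to the other server. This is a full send for
    # simplicity; after the first run you'd use incremental sends (-i).
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "recv", "-F", "tank/home-copy"],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")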

Source code is stored under version control (Mercurial) and a remote machine is set to pull all changes (and clone new repos) every night.
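
And a minimal sketch of the nightly pull, assuming the clones all live under one directory (the path is a placeholder):

    import subprocess
    from pathlib import Path

    MIRROR_ROOT = Path("/srv/hg-mirror")   # placeholder directory of clones

    # Pull every repository already mirrored here; a new repo would be
    # cloned once with hg clone and then picked up by this loop.
    for repo in sorted(MIRROR_ROOT.iterdir()):
        if (repo / ".hg").is_dir():
            subprocess.run(["hg", "pull", "-R", str(repo)], check=True)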

Lester Cheung