I currently have a 200+ GB database that uses the DB2 built-in backup for a daily backup (and hopefully not restore - lol). Since that backup now takes more than 2.5 hours to complete, I am looking into a third-party backup and restore utility. The version is 8.2 FP14, but I will be moving to 9.1 soon, and I also have some 9.5 databases to back up and restore. What are the best tools that you have used for this purpose?

Thanks!

A: 

It's not a "third-party" product, but everyone I have ever seen using DB2 stores their database backups with Tivoli Storage Manager (TSM).

Most shops will set up archive logging to TSM so you only have to take the "big" backup every week or so.
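For reference, archive logging to TSM is configured per database; a minimal sketch, assuming a database named SAMPLE (the LOGARCHMETH1 parameter is available from version 8.2):

```shell
# switch from circular to archive logging, archiving log files directly to TSM
db2 update db cfg for SAMPLE using LOGARCHMETH1 TSM
# the database is now in backup-pending state; a full backup is required
db2 backup db SAMPLE use tsm
```

After that, the "big" backups can be scheduled weekly, with the archived logs covering point-in-time recovery in between.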

Since it's also an IBM product you won't have to worry about it working with all the different flavors of DB2 that you have.

The downside is it's an IBM product. :) Not sure if that ($) makes a difference to you.

Michael Sharek
We do not have a license for TSM, and I will not get funding to buy one - I am sure it's very expensive. Besides, my forays into Tivoli apps have been very bad experiences so far: too expensive and not user-friendly at all!
Autobyte
A: 

I doubt that you can speed things up using another backup tool. As Mike mentions, you can add TSM to the stack, but that will hardly make the backup run any faster.

If I were you, I'd look into where the backup files are stored: are they on the same disk spindles as the database itself? If so, see if you can store the backup files on a storage area that isn't contended for access during your backup window.

And consider using incremental backups for daily backups, and then a long full backup on Saturdays.
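A sketch of such a schedule; the database name SAMPLE and the path /backup are placeholders:

```shell
# change tracking must be on before incremental backups are possible
db2 update db cfg for SAMPLE using TRACKMOD ON
# weekly: full online backup (the baseline for the incrementals)
db2 backup db SAMPLE online to /backup
# daily: incremental backup (all changes since the last full backup)
db2 backup db SAMPLE online incremental to /backup
```

A delta backup (changes since the most recent backup of any type, giving smaller images but longer restore chains) is also available via `db2 backup db SAMPLE online incremental delta to /backup`.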

(I assume that you are already running online backups, so that your data aren't unavailable during backup.)

Troels Arvin
Yes, I run online backups, but I have never had success restoring incremental backups in my DB2 tests, so I don't trust them. As for backing up to a different spindle: my DB is stored on a CLARiiON SAN, and the backup currently goes to local 15K RPM SCSI disks. Thanks!
Autobyte
I've never had trouble restoring from incremental backups. (There is an AUTOMATIC option to the RESTORE ... INCREMENTAL command, which is all that's needed.)
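For example (the database name and timestamp are placeholders for the image you want to restore to):

```shell
db2 restore db SAMPLE incremental automatic taken at 20090101120000
```

With AUTOMATIC, DB2 consults the recovery history file and locates and replays the required chain of full, incremental, and delta images itself.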
Troels Arvin
+2  A: 

One thing that will help is moving to DB2 version 9 and turning on compression. The size of the backup will then decrease (by up to 70-80% at the table level), which should shorten the backup time. Of course, if your database is continuously growing you'll soon run into problems again, but then data archiving might be the thing for you.
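As a sketch: row compression in DB2 9 is enabled per table, and a classic REORG is needed to build the compression dictionary and compact existing rows (the schema and table names here are placeholders):

```shell
db2 "alter table MYSCHEMA.MYTABLE compress yes"
db2 "reorg table MYSCHEMA.MYTABLE resetdictionary"
```

There is also a COMPRESS option on the BACKUP command itself, which trades CPU time for a smaller backup image.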

A: 

A third-party backup package probably won't help your speed much. Making sure that you are not doing full backups every 2 hours is probably the first step.

After that, look at where you are writing your backup to. Is it a local drive or a network drive? Are the spindles used for anything else? Backups don't involve a lot of seek activity, but they do involve a lot of big writes, so you probably want to avoid RAID 5 and go for large stripe sizes to help maximize throughput.

Naturally, you have to do full backups sooner or later, but hopefully you can find a window when load is light and you can live with a longer time period between backups. Do your full backup during a 4-6 hour period when the normal incrementals are off, and then do incrementals based on that the rest of the time.

Also, until you get your backup copied to a completely separate system you really aren't backed up. You'll have to experiment to figure out if you're better off compressing it before, during or after sending.

edebill
A: 

Before looking at third party tools, which I doubt would help too much, I would consider a few optimizations.

1) Have you used REORG on your tables and indexes? This would compact the information and minimize the amount of pages used;

2) If you can, back up to multiple disks at the same time. This can easily be achieved by running db2 backup db mydb to /mnt/disk1, /mnt/disk2, /mnt/disk3 ...

3) DB2 should do a good job of tuning itself, but you can always experiment with the WITH num-buffers BUFFERS, BUFFER buffer-size, and PARALLELISM n options. But again, DB2 usually does a better job on its own;

4) Consider performing daily incremental backups, and a full backup once a week, on Saturday or Sunday;

5) UTIL_IMPACT_PRIORITY and UTIL_IMPACT_LIM let you throttle the backup process so that it doesn't affect your regular workload too much. This is useful if your main concern is not the time per se, but rather the performance of your data server while you back up;

6) DB2 9's data compression can truly do wonders when it comes to reducing the volume of data that needs to be backed up. I have seen very impressive results and would highly recommend it if you can migrate to version 9.1 or, even better, 9.5.
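Points 2, 3 and 5 above can be combined in a single command; a sketch with illustrative values only (MYDB, the paths, and all numbers are placeholders, not recommendations):

```shell
# parallel backup to three targets, with explicit buffer tuning and throttling
db2 backup db MYDB online \
    to /mnt/disk1, /mnt/disk2, /mnt/disk3 \
    with 4 buffers buffer 4096 parallelism 2 \
    util_impact_priority 50
```

UTIL_IMPACT_LIM, by contrast, is an instance-level setting, e.g. `db2 update dbm cfg using UTIL_IMPACT_LIM 10`; throttling only takes effect when it is set below 100.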

Antonio Cangiano
+1  A: 

There really are only two ways to make backup, and more importantly recovery, run faster:

1) back up less data, and/or

2) have a bigger pipe to the backup media

I think you got a lot of suggestions on how to reduce the amount of data that you back up. Basically, you should create a backup strategy that relies on relatively infrequent full backups and much more frequent backups of data changed since the last full backup. I encourage you to take a look at the "Configure Automatic Maintenance" wizard in the DB2 Control Center. It will help you set up automatic backups and other utilities like the REORG that Antonio suggested. Compression obviously helps too, as the amount of data is much lower. However, not all DB2 editions offer compression; DB2 Express-C, for example, does not. Frankly, compression on a 200 GB database may not be worthwhile anyway, and that is precisely why free DBMSs like DB2 Express-C don't offer it.

As far as opening a bigger pipe for your backup, you first have to decide if you are going to back up to disk or to tape. There is a big difference in speed (obviously disk is a lot faster). Second, DB2 can parallelize backups: if you have multiple devices to back up to, it will write to all of them at the same time, i.e. your elapsed time will be a lot less, depending on how many devices you can throw at the problem. Again, the DB2 Control Center can help you set this up.

Leon Katsnelson
A: 

Try High Performance Unload (HPU) - this was a standalone product from Infotel and is now available as part of Optim data studio - posting here: https://www.ibm.com/developerworks/mydeveloperworks/blogs/idm/date/200910?lang=en