views: 35
answers: 1
I have a webapp in development and need to plan for what happens if the host goes down. I will lose some very recent session state (which I can live with); everything else should be persistently stored in the database.

If I am starting up again after an outage, can I expect a good host to reconstruct the database to within minutes or seconds of where I was up to, or should I build in a background process to continually mirror the database elsewhere?

What is normal/sensible?

Obviously a good host will have RAID and other redundancy, so the likelihood of total loss should be low, and if they take periodic backups I should lose only very recent data. But those backups are presumably designed with mostly static web content in mind, whereas my site is transactional, with new data being filed continuously (and a customer expectation that I never lose any of it).

Any suggestions/advice?

Are there off-the-shelf frameworks for doing this? (I'm primarily working in Java.) Should I just plan to save the data, or should I also have an alternative host implementation ready to launch in case the original doesn't come back up in a suitable timeframe?

+1  A: 

You need a replication strategy, which of course depends on your database engine; it's usually done through configuration. http://en.wikipedia.org/wiki/Replication_%28computer_science%29
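Replication itself is configured in the database rather than coded in the application, but from Java you can at least monitor that the standby is keeping up so you know it will be usable when the primary host dies. A minimal sketch, assuming a MySQL replica (the JDBC URL, credentials and 60-second lag threshold are placeholders; with another engine the status query would differ):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Sketch: watch the replica so a stalled standby is noticed before you need it.
    public class ReplicationLagCheck {

        private static final String REPLICA_URL =
                "jdbc:mysql://replica.example.com:3306/myapp"; // hypothetical standby host

        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(REPLICA_URL, "monitor", "secret");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SHOW SLAVE STATUS")) { // MySQL replica status

                if (!rs.next()) {
                    System.err.println("This server is not configured as a replica.");
                    return;
                }

                long lag = rs.getLong("Seconds_Behind_Master");
                boolean lagUnknown = rs.wasNull(); // NULL means replication is broken or stopped
                boolean running = "Yes".equals(rs.getString("Slave_IO_Running"))
                               && "Yes".equals(rs.getString("Slave_SQL_Running"));

                if (!running || lagUnknown || lag > 60) {
                    System.err.println("Standby is stale or replication has stopped (lag=" + lag + "s)");
                } else {
                    System.out.println("Standby healthy, " + lag + "s behind the primary");
                }
            }
        }
    }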

I have experience with Informix; there you can set up data replication to keep a standby system available, or take a full backup of the data and replay the logical logs (which contain essentially every SQL statement), which takes longer to recover from after a crash.
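If the database's own replication isn't available on your host, a crude application-side fallback for the second option is a scheduled full dump shipped off the machine. A minimal sketch, assuming a MySQL database, mysqldump on the PATH, and a /backups mount that lives somewhere other than the web host (all placeholders; with Informix you would use its own archive tools instead):

    import java.io.File;
    import java.io.IOException;
    import java.time.LocalDate;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Sketch: take a full logical dump once a day and write it to storage that does
    // not live on the primary host. Anything filed between dumps still needs the
    // database's own log shipping or replication to cover it.
    public class NightlyBackup {

        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(NightlyBackup::dump, 0, 24, TimeUnit.HOURS);
        }

        private static void dump() {
            File target = new File("/backups/myapp-" + LocalDate.now() + ".sql"); // off-host mount (placeholder)
            ProcessBuilder pb = new ProcessBuilder(
                    "mysqldump", "--single-transaction", "--user=backup", "--password=secret", "myapp");
            pb.redirectOutput(target); // stream the dump straight to the backup volume
            try {
                Process p = pb.start();
                if (p.waitFor() != 0) {
                    System.err.println("Backup failed, exit code " + p.exitValue());
                }
            } catch (IOException e) {
                System.err.println("Could not run mysqldump: " + e.getMessage());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }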

Having redundant storage is also a good idea in case a disk crashes. This topic is probably better discussed on serverfault.com.

stacker
Thanks "stacker" for the pointer to serverfault.com
Griff