views:

57

answers:

3

Hi,

We have a web-based client-server product. The client is expected to be used by upwards of 1M users (a famous company is going to use it).

Our server is set up in the cloud. One of the major questions while designing is how to make the whole system future-proof. For example:

  1. If the cloud provider goes down, fail over automatically to a backup in another cloud
  2. Move to a different server altogether, etc.

The options we have thought of so far are:

  1. DNS: running a DNS name server on the cloud ourselves.
  2. Directory server - the directory server also lives on the cloud.
  3. Have our server return future movements and future URLs etc. to the client - with the client specifically designed to handle those scenarios.

Since this must be a common problem, which is the best solution for it? Since our company is a very small one, we are looking for the least technically and financially expensive solution (say option 3, etc.)?

Could someone provide some pointers on this?

K

A: 

I would go for the directory server option. It's the most flexible and gives you the most control over what happens in a given situation.

To avoid the directory itself becoming a single point of failure, I would have three or four of them running at different locations with different providers. Have the client app randomly choose one of the directory URLs at startup and work its way through them all until it finds one that works.
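
As a minimal sketch of that startup logic (the URLs are placeholders, and the HTTP probe is just one way to define "works"):

```python
import random
import urllib.request

# Hypothetical directory endpoints -- replace with your real URLs.
DIRECTORY_URLS = [
    "https://dir1.example.com/servers",
    "https://dir2.example.net/servers",
    "https://dir3.example.org/servers",
]

def http_probe(url, timeout=5):
    """Return True if the URL answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def find_working_directory(urls, probe=http_probe):
    """Try the directory URLs in random order; return the first that works."""
    candidates = list(urls)
    random.shuffle(candidates)  # spread load across the directories
    for url in candidates:
        if probe(url):
            return url
    raise RuntimeError("no directory server reachable")
```

Injecting the probe function keeps the failover logic testable without a network.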

To make it really future-proof you would probably need a simple protocol to dynamically update the list of directory servers -- but be careful: if this is badly implemented, you will leave your clients open to all sorts of malicious spoofing attacks.
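
One hedged sketch of guarding such an update: attach an authentication tag to the list and have the client reject anything that fails verification. This uses a shared-secret HMAC for brevity; a real deployment should prefer public-key signatures (e.g. Ed25519), since a shared key can be extracted from the client binary:

```python
import hashlib
import hmac
import json

# Placeholder key -- in practice use asymmetric signatures so the
# verification key shipped with the client cannot forge updates.
UPDATE_KEY = b"replace-with-a-real-secret"

def sign_directory_list(urls, key=UPDATE_KEY):
    """Serialize the URL list and attach an HMAC-SHA256 tag."""
    payload = json.dumps(urls).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"urls": urls, "tag": tag}

def verify_directory_list(update, key=UPDATE_KEY):
    """Return the URL list only if the tag matches; reject spoofed updates."""
    payload = json.dumps(update["urls"]).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, update["tag"]):
        raise ValueError("directory update failed verification")
    return update["urls"]
```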

James Anderson
A: 

Re. DNS: responses can be cached, and it might take a while for changes to propagate (hours to days).

I'd go for a prioritized list of IPs that can be updated on the client. If one IP fails, the client would retry with the 2nd, 3rd, and so on.
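
A minimal sketch of that retry loop, assuming the client ships with a built-in fallback list (the addresses below are documentation placeholders):

```python
import socket

# Hypothetical prioritized addresses -- ship a default list with the
# client and refresh it from whichever server eventually answers.
FALLBACK_ADDRESSES = [
    ("203.0.113.10", 443),   # primary
    ("198.51.100.20", 443),  # secondary
    ("192.0.2.30", 443),     # tertiary
]

def connect_with_fallback(addresses, timeout=5,
                          connect=socket.create_connection):
    """Try each (host, port) in priority order; return the first open socket."""
    last_error = None
    for host, port in addresses:
        try:
            return connect((host, port), timeout=timeout)
        except OSError as err:
            last_error = err  # remember why this attempt failed, try the next
    raise ConnectionError("all fallback addresses failed") from last_error
```

Passing `connect` as a parameter makes the priority/retry behaviour easy to unit-test.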

Flavius Stef
A: 

I'm not sure I 100% understood your question, but if I did it boils down to: if my server moves, how can my clients find it?

That's exactly what DNS has been doing for nearly three decades.

Every possible system you could choose would need to be bootstrapped with initial working data: the address of a directory server, the address of a working server to fetch an updated list from, etc. That's what the root DNS servers are for, and OS vendors do the bootstrapping part for you.

Sure, DNS responses can be cached -- that's how it is supposed to work and how it scales to Internet size. You control the caching (read about the TTL) and you can usually keep it at sane values (it doesn't make sense to keep it shorter than the absolute minimum time needed to re-deploy the server somewhere else).
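
For illustration, a hypothetical zone record where the second field is the TTL in seconds -- 300 here is purely a placeholder, to be tuned to your re-deployment time:

```
; example zone fragment -- names and TTL are illustrative only
app.example.com.  300  IN  A  203.0.113.10
```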

Luke404