views: 51

answers: 1
I was wondering how these databases, which hold millions of records and serve millions of lookups per second, are so fast.
How are they optimised?
Are they hosted on special servers? How are they scaled?

+1  A: 

Large applications are scaled out across multiple servers. Your ID is usually hashed into one of a finite number of buckets (e.g. 50), and each of those 50 servers can handle thousands of requests per second. Usage is assumed to be split fairly evenly across all 50 servers. Backup and recovery are a bit more complicated, in that "some" users may lose data while others won't.
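
As a rough illustration of the bucketing idea described above, here is a minimal Python sketch. The server count of 50 comes from the answer's example; the `server_for` function name and the use of MD5 as the hash are just assumptions for the sake of the sketch, not any particular site's implementation.

    import hashlib

    NUM_SERVERS = 50  # hypothetical bucket count, as in the example above

    def server_for(user_id: str) -> int:
        """Map a user ID to one of NUM_SERVERS buckets via a stable hash."""
        digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
        return int(digest, 16) % NUM_SERVERS

    # The same ID always hashes to the same bucket, so all of that
    # user's requests go to the same server.
    print(server_for("user-12345"))

Note that a plain modulo scheme like this reshuffles most users whenever the server count changes; real systems often use consistent hashing to limit that reshuffling.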

No Refunds No Returns
Are these special databases or normal ones? And what about the server architecture?
Amitd
A large site like bing.com will have 100,000+ servers built from off-the-shelf commodity parts.
No Refunds No Returns