My website is growing. I need to partition feature sets to different groups of servers instead of having every feature run on a single server copy. I have 8 identical servers in a Windows Network Load Balancing (WNLB) setup.

The problem is: I need to keep the URL space the same. What would be a workable solution? I don't want to start new domains for the feature sets. I am thinking of some kind of reverse proxy with URL-based rewriting/routing capabilities. Any recommendations in terms of software or hardware? This is going to replace the WNLB setup, as it does not have the capabilities I need.

Thanks.

A: 

Would this help?

http://www.visolve.com/squid/whitepapers/reverseproxy.php

S.Lott
+1  A: 

There are indeed several solutions to implement load balancing:

  1. DNS Round-Robin
  2. DNS Load Balancing (with lbnamed)
  3. Proxy Round-Robin
  4. Hardware/TCP Round-Robin

I understood 1) and 2) are not an option here so... if you have money and really high performance needs, go for 4). Else, go for 3).

For Proxy Round-Robin, again, several solutions are possible: Apache mod_rewrite, Apache mod_proxy, Squid (and surely many others I don't know).

  • For "dumb" load balancing, there is an example in the Apache mod_rewrite URL Rewriting Guide (see the Proxy Throughput Round-Robin section).

  • Apache mod_proxy can act as a forward proxy to connect clients to the internet, but it is usually used as a reverse proxy to forward a URL to another server. It has no caching functionality on its own (but can be combined with mod_cache, mod_rewrite, ...).

  • Squid is a caching proxy and is usually used to connect clients to the internet. But it can also be used as a reverse proxy and be configured to cache requests and accelerate content delivery.
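
As a rough illustration, a minimal Squid reverse-proxy (accelerator) setup might look like the following. The hostname and back-end address are placeholders; the syntax shown is the Squid 2.6+ style:

```squid
# squid.conf: act as an accelerator for www.example.com on port 80
http_port 80 accel defaultsite=www.example.com vhost

# Forward requests to the origin web server (placeholder address)
cache_peer 192.168.1.10 parent 80 0 no-query originserver name=backend

# Only accept requests for our own site
acl our_site dstdomain www.example.com
http_access allow our_site
http_access deny all
cache_peer_access backend allow our_site
```

With this in place, Squid answers on port 80, caches what it is allowed to cache, and fetches misses from the origin server.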

As you can see, choosing one of them depends on what and how you want to proxy. In your case, I would consider running Apache mod_proxy, or Squid if you want caching, on Linux (if that is an option).
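
For the mod_proxy option, a minimal reverse-proxy configuration could be sketched like this (server and back-end names are placeholders, not from your setup):

```apache
# httpd.conf: load the required modules
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

<VirtualHost *:80>
    ServerName www.example.com

    ProxyRequests Off        # reverse proxy only, never an open forward proxy
    ProxyPreserveHost On     # pass the original Host: header to the back end

    # Forward everything to a back-end server and rewrite its responses
    ProxyPass        / http://backend1.internal/
    ProxyPassReverse / http://backend1.internal/
</VirtualHost>
```

`ProxyRequests Off` matters: without it you risk running an open proxy. `ProxyPassReverse` rewrites Location headers in redirects so clients never see the internal hostname.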

Regarding the hardware, I'm not a specialist, but I think a "small" to "medium" dedicated server should be enough. Just don't forget that all requests will go through this machine, so its sizing highly depends on your traffic, which seems substantial. This might require some testing with real-life data.

Pascal Thivent
A: 

If you have 8 servers, I would suggest using 7 of them to share the workload and the remaining one as a front-end proxy.

Your front-end server could run Apache and use mod_proxy to delegate each HTTP request to one of the 7 back-end servers. You can configure mod_proxy to delegate work based on the incoming URL, so you could, for example, have one server serving video, two serving blog entries, and the rest doing whatever else.
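
A sketch of that URL-based delegation, with hypothetical internal hostnames standing in for your back ends, might be:

```apache
ProxyRequests Off
ProxyPreserveHost On

# /video/ goes to the dedicated video server
ProxyPass        /video/ http://video1.internal/video/
ProxyPassReverse /video/ http://video1.internal/video/

# /blog/ is spread across two servers via mod_proxy_balancer
<Proxy balancer://blogpool>
    BalancerMember http://blog1.internal
    BalancerMember http://blog2.internal
</Proxy>
ProxyPass        /blog/ balancer://blogpool/blog/
ProxyPassReverse /blog/ http://blog1.internal/blog/

# Everything else goes to the general pool
ProxyPass        / http://web1.internal/
ProxyPassReverse / http://web1.internal/
```

The key point for the original question is that the URL space stays unchanged: clients keep hitting the same domain and paths, and only the front end knows which back-end server actually handles each prefix.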

Your front-end server would be extremely light: running Apache and little else. You could also cache your static content on this front server using Squid or Apache's mod_cache, so that requests for images, JS, CSS and other static content never hit your 7 back-end servers once they have been cached.
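
The mod_cache variant of that idea could look roughly like this (Apache 2.2 module names; the cache paths and URL prefixes are placeholders):

```apache
LoadModule cache_module      modules/mod_cache.so
LoadModule disk_cache_module modules/mod_disk_cache.so  # mod_cache_disk in Apache 2.4

<IfModule mod_cache.c>
    # Cache static content on the front end's local disk
    CacheRoot   /var/cache/apache2
    CacheEnable disk /images/
    CacheEnable disk /css/
    CacheEnable disk /js/
</IfModule>
```

After the first request for a given file, subsequent requests for it are served from the front end's disk cache instead of being proxied to a back-end server.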

EDIT: I just went and read Pascal's answer and he's suggesting the same thing. I'll mod him +1.

Steve Claridge