Hi,

This could be a question for Server Fault as well, but it also touches on topics covered here.

I am building a new web site that consists of 6 servers: 1 MySQL, 1 web, 2 file processing servers, and 2 file servers. In short, the file processing servers process files and copy them to the file servers. In this case I have two options:

I can set up a web server on each file server and serve files directly from there, e.g. file1.domain.com/file.zip. Some files (not all of them) will need authentication, so I will authenticate users via memcache from those servers. 90% of the requests won't need any authentication.

Or I can set up NFS and serve files directly from the web server, e.g. www.domain.com/fileserve.php?id=2323 (a basic example).

As the project is heavily based on files, the second option might not be as effective as the first, since it will consume more memory (even if I split files into chunks while serving).

The setup will stay the same for a long time, so we won't be adding new file servers later.

What are your thoughts? Which option is better? Or is there a different approach you would suggest?

Thanks in advance,

+3  A: 

Just me, but I would actually put a set of reverse proxy rules on the "web server" and then proxy HTTP requests (possibly load-balanced, if the file servers have identical filesystems) back to a lightweight HTTP server on the file servers.

This gives you flexibility and the ability to implement future caching, logging, filter chains, rewrite rules, authentication, etc. I find having a fronting web server as a proxy layer a very effective solution.
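For illustration, a minimal nginx sketch of this kind of front proxy (the hostnames, port, and /files/ prefix below are placeholders invented for the example, not part of your actual setup):

    # Back-end pool: the lightweight HTTP servers running on the file servers.
    upstream file_backends {
        # Load-balance across both only if their filesystems are identical;
        # otherwise route by path or hostname instead.
        server file1.internal:80;
        server file2.internal:80;
    }

    server {
        listen 80;
        server_name www.domain.com;

        # Dynamic pages are still handled by the web server itself.
        location / {
            root /var/www/site;
        }

        # File downloads are proxied (streamed) to the file servers over HTTP.
        location /files/ {
            proxy_pass http://file_backends;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

With something like this in place, you can later hang caching, logging, rewrite rules, or an authentication check off the /files/ location without touching the file servers.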

Xepoch
This would generally be my choice, too. Just one caveat: please be careful; your future self and/or your successors will thank you! It's annoyingly easy to end up with nasty, hard-to-maintain messes if you don't exercise care in designing things like proxy and rewrite rules.
Nicholas Knight
Sorry for my question, as I have never set up a reverse proxy for web servers before. So, do you mean I won't set up anything like NFS; when someone requests www.domain.com/file.zip, the web server will proxy the request to file1.domain.com/file.zip directly over HTTP? I know this design in theory and would like to try it using nginx as a reverse proxy, but I wonder what the drawbacks of this design are, especially from the memory point of view. How will it affect memory consumption on the web server (the file servers will consume the same memory as in the first option, I guess)?
murat
@murat I think you're stating this, but the HTTP request isn't forwarded so much as streamed through the front reverse proxy. You would thus not incur NFS traffic on the network, only HTTP. You can also do things like offload GZIP compression onto the front web server. I would not imagine resource consumption would be any worse than serving over NFS-over-TCP. Most reverse proxies stream the data through, passing headers and all, while optionally parsing the response, but if you're serving binary files that is of no consequence.
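As a rough example of those two points in nginx (illustrative directives only, not a tested configuration), gzip offload and streaming rather than buffering could be added to the proxy location sketched above:

    location /files/ {
        proxy_pass http://file_backends;

        # Compress text-like responses on the front server so the file servers
        # don't spend CPU on it; already-compressed .zip files gain nothing
        # and are passed through unchanged.
        gzip on;
        gzip_types text/plain text/css application/json application/javascript;

        # Stream responses to the client instead of buffering whole files
        # in memory or on disk on the proxy.
        proxy_buffering off;
    }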
Xepoch
Hmm, I see. I will look more into this setup. Thanks, everyone!
murat
+2  A: 

I recommend your option #1: allow the file servers to act as web servers. I have personally found NFS to be a little flaky when used under high volume.

Asaph
+1 for flaky NFS under high volume.
Benjamin Cox
Yes, I have read that on the internet as well. Thanks.
murat
A: 

You can also use a Content Delivery Network such as simplecdn.com; it can solve bandwidth and server load issues.

Nazariy
Thanks for the suggestion, but a CDN is overkill for us: the project is local, and we have very good deals with local datacenters.
murat