views: 4565
answers: 16

I need a webserver to serve up very simple POST/GET requests as JSON. I don't need MVC, Rails, or Django. I need something that takes up very little memory, preferably around 5K per request. The webserver will talk to backend services like Scribe using Facebook Thrift. Each HTTP request will also access a SQLite database, one for each user, and users' data do not overlap. It will serve up static HTML files as well as the JSON web service. I am considering the following: Nginx with PHP, Kepler for Lua, rolling my own with libevent or libev (perhaps calling out to Lua), or MochiWeb. Which of these options is best, and what other options are out there? I can use PHP, Python, or Lua for basic scripting and could even do basic C. I am leaning towards some sort of Erlang solution.
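
For concreteness, here is a rough sketch of the kind of per-user JSON endpoint I have in mind, using only the Python standard library (a plain WSGI app plus sqlite3). The DB_DIR layout and the one-SQLite-file-per-user naming are just illustrative assumptions, not a finished design:

    # Illustrative sketch only: stdlib WSGI app answering requests with JSON
    # and opening a per-user SQLite file. DB_DIR and the file naming are
    # assumptions made for the example.
    import json
    import os
    import sqlite3
    from wsgiref.simple_server import make_server

    DB_DIR = "/var/data/users"  # assumed: one SQLite file per user

    def app(environ, start_response):
        # Treat the path as the user id, e.g. GET /alice -> alice.db
        user = (environ.get("PATH_INFO") or "/").strip("/") or "anonymous"
        conn = sqlite3.connect(os.path.join(DB_DIR, user + ".db"))
        try:
            count = conn.execute("SELECT COUNT(*) FROM sqlite_master").fetchone()[0]
        finally:
            conn.close()
        body = json.dumps({"user": user, "tables": count}).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json"),
                                  ("Content-Length", str(len(body)))])
        return [body]

    if __name__ == "__main__":
        # wsgiref is only for demonstration; any WSGI-capable server could host this.
        make_server("127.0.0.1", 8080, app).serve_forever()

(The static HTML and this handler would sit behind whichever lightweight server gets picked.)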

+7  A: 

Lighttpd has an excellent footprint, to the extent that most of your memory will probably be taken up by whatever language you choose to use (unless you go the C route, which is really not recommended).

Edward Z. Yang
+1  A: 
Harry Tsai
+17  A: 

http://nginx.net/

Hands down

jonnii
Seconded. Nginx is very fast, especially for serving static content.
Bob Somers
+1  A: 

Since you mentioned Python, you might want to take a look at web.py, for a very simple way to listen on port 80 and map URLs to actions.
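
To give a feel for it, here's a minimal sketch of a web.py app mapping a URL to an action that returns JSON; the URL pattern and the Item class are made up for the example:

    # Minimal web.py sketch: map /item/<id> to a class whose GET/POST return JSON.
    # The URL pattern and Item class are illustrative, not from the answer.
    import json
    import web

    class Item:
        def GET(self, item_id):
            web.header("Content-Type", "application/json")
            return json.dumps({"id": item_id or None, "status": "ok"})

        def POST(self, item_id):
            data = web.data()  # raw request body
            web.header("Content-Type", "application/json")
            return json.dumps({"id": item_id, "received": len(data)})

    urls = ("/item/(.*)", "Item")
    app = web.application(urls, globals())

    if __name__ == "__main__":
        app.run()  # built-in dev server; the same app also runs under CGI/FastCGI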

It'll also run via your favorite CGI if you want to pair it with a standard webserver (e.g. behind Nginx/FastCGI) -- and I'll second the recs of Nginx for massive concurrency on static files. (They used it with Lighttpd at Reddit.)

thttpd is the other webserver I'd look at, especially if memory is extremely scarce, like on an embedded system.

joelhardi
+1 for web.py. It's stupidly well done for lightweight POST/GET/PUT/DELETE apps, and overall it makes it easy to organize a RESTful architecture.
I GIVE TERRIBLE ADVICE
+6  A: 

Mochiweb is super lightweight, and handles a stupidly high load.

madlep
+3  A: 

Yeah, I just ran some little tests comparing Mochiweb, Ruby Mongrel, and Nginx on my MacBook Pro. I put a light load of 10 conn/sec for approximately 30 seconds on each of them and, as expected, Mongrel couldn't take it, since it wasn't running behind a proxy or as a cluster for concurrency.

Test: httperf --hog --client=0/1 --server=localhost --port=8000 --uri=/ --rate=10 --send-buffer=4096 --recv-buffer=16384 --num-conns=10000 --num-calls=100

  • Mochiweb: Total: connections 304 requests 30306 replies 30304 test-duration 30.307 s
  • Nginx: Total: connections 355 requests 35500 replies 35500 test-duration 35.499 s
  • Mongrel: Total: connections 317 requests 634 replies 317 test-duration 31.658 s

So duh, Nginx and Mochiweb can both handle a light load. Nginx was much faster at a sprint to 10,000 requests (concurrency 1) however.

Test: httperf --hog --client=0/1 --server=localhost --port=80 --uri=/ --send-buffer=4096 --recv-buffer=16384 --num-conns=10000 --num-calls=1

  • Mochiweb: Total: connections 10000 requests 10000 replies 10000 test-duration 7.418 s
  • Nginx: Total: connections 10000 requests 10000 replies 10000 test-duration 4.317 s

Hmm, but can Nginx handle this type of load? A Million-user Comet application.

John Wright
The comparison with Mongrel isn't quite fair. You'd usually run a pack of Mongrel processes to handle concurrent requests; Mongrel is inherently single-threaded and not designed to be run as a lone instance like that.
madlep
+1  A: 

Another followup here. Part of my original question was motivated by wanting something that lets me serve up JSON super fast, under huge loads, in a REST-oriented way, and then gets out of the way. I found a framework called Webmachine, built on top of Mochiweb, that looks promising.

http://code.google.com/p/webmachine/

I will try to let everybody know what I ultimately decide to use.

John Wright
+5  A: 

John,

As one of the authors of Webmachine, I'm happy to help you out. One reason I'm following up is that even though there's no JSON-related code in Webmachine, you might find it useful to know that we use it on a daily basis for processing many different JSON requests and responses. It's simple, cleanly extensible, and performs reasonably well.

If you just wanted static delivery, then something like nginx or lighttpd would be an obvious way to go. For a mix of static and dynamic requests and built-in good Web behavior, you may find Webmachine a good fit.

Check out the trivial example code at http://code.google.com/p/webmachine/wiki/ExampleResources and the recent posts on the blog at http://blog.therestfulway.com/ for more information.

It has worked out well for us; if you have questions feel free to drop me a line.

-Justin

Justin Sheehy
+2  A: 

Cherokee webserver at www.cherokee-project.com

In my tests, the Cherokee reverse-proxy module was much faster than the one in Nginx.
Kurt
A: 

So all of the above can perform well, but how does each of these lightweight webservers perform in terms of memory footprint?

And how do you measure the footprint?

DEzra
A: 

Take a look at KLone on the koanlogic.com site... Being targeted at embedded systems, it's very small, and incidentally very fast too: http://john.freml.in/teepeedee2-vs-klone . It can be scripted in C/C++ (very fast) or the usual PHP/CGI (a lot less performant), depending on skills/taste.

babongo
A: 

To measure the footprint, have a look at the executable size (don't forget shared libraries).
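
As a complementary check at runtime, here is a rough, illustrative sketch (Linux-only; it just reads /proc, and the pid argument is whichever server process you're measuring) for looking at a process's resident memory:

    # Rough sketch: print a running process's resident memory (VmRSS) on Linux
    # by reading /proc/<pid>/status. Pass the server's pid on the command line.
    import sys

    def rss_kb(pid):
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])  # the kernel reports this in kB
        return None

    if __name__ == "__main__":
        pid = int(sys.argv[1])
        print("VmRSS of pid %d: %s kB" % (pid, rss_kb(pid)))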

TrustLeap G-WAN (102 KB, no dependencies) offers ANSI-C scripts.

Have a look at these benchmarks:

Windows XP Pro SP3 and Windows Vista 64-bit:

http://www.gwan.ch/en_windows.html

Linux Ubuntu 8.1 is also tested on the Web site's front page.

Pierre
A: 

If you can code in C or C++, I think lighttz would be the fastest and would use the least memory. However, the reason it's so fast is that it's built on libev and provides absolutely nothing else: no PHP support, no HTML support, nothing. All it provides is a callback function where you handle each HTTP request. You're going to have to parse the HTTP GET/POST request yourself and return the HTML as a string. You can see it benchmarked against nginx, lighttpd, Apache, etc. and come out on top (link).
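
As a loose analogy in Python (not lighttz itself), the same bare-bones model looks roughly like this with asyncio: one callback per connection, where you parse the request line yourself and write the raw HTTP response as a string. Purely illustrative:

    # Illustrative only: asyncio analogue of a "you get a callback, parse the
    # request yourself" server. Not lighttz; just the same minimal model.
    import asyncio
    import json

    async def handle(reader, writer):
        request_line = await reader.readline()        # e.g. b"GET /foo HTTP/1.1\r\n"
        method, path, _ = request_line.decode().split(" ", 2)
        while (await reader.readline()).strip():      # skip the remaining headers
            pass
        body = json.dumps({"method": method, "path": path})
        writer.write(("HTTP/1.1 200 OK\r\n"
                      "Content-Type: application/json\r\n"
                      "Content-Length: %d\r\n"
                      "Connection: close\r\n\r\n%s" % (len(body), body)).encode())
        await writer.drain()
        writer.close()

    async def main():
        server = await asyncio.start_server(handle, "127.0.0.1", 8081)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())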

Hao Wooi Lim
A: 

Windows IIS rules the scene, man!!! All the other stuff is just a joke. IIS is the absolute coolest webserver ever! I don't know what you other guys are talking about with this nginx or Mochiweb stuff; I believe you don't have a clue about what the hottest thing out there is, because not mentioning IIS is like not knowing where the Start button is in Vista. If you want a really extreme, cool experience that makes you feel very modern and up to date, then you should definitely use IIS. Also, you will always be on the best and most secure server, with a system that has proven to be the best computer system in the world. We run 2-3 million bots on this architecture and have no reason to be unhappy.

Ergonzö Perifros Macaleo Dump
A: 

You could have a look at FAPWS (Fast Asynchronous Python WSGI server). The philosophy of the project matches your needs perfectly. http://www.fapws.org

William
A: 

The fastest embedded web server, hands down, is Snorkel. Check out their web site; it destroyed nginx in my testing using ab. http://sites.google.com/site/snorkelembedded

Walt C.