views: 353

answers: 6

Having looked at several available HTTP server libraries, I have not yet found what I am looking for, and I am sure I can't be the first to have this set of requirements.

I need a library which presents an API that is 'pipelined'. Pipelining describes the HTTP feature whereby multiple HTTP requests can be sent over a TCP link without waiting for a response. I want a similar feature in the library API, where my application can receive all of those requests without having to send a response first (I will respond, but I want the ability to process multiple requests at a time to reduce the impact of internal latency).

So the web server library will need to support the following flow:

1) HTTP Client transmits http request 1

2) HTTP Client transmits http request 2 ...

3) Web Server Library receives request 1 and passes it to My Web Server App

4) My Web Server App receives request 1 and dispatches it to My System

5) Web Server receives request 2 and passes it to My Web Server App

6) My Web Server App receives request 2 and dispatches it to My System

7) My Web Server App receives response to request 1 from My System and passes it to Web Server

8) Web Server transmits HTTP response 1 to HTTP Client

9) My Web Server App receives response to request 2 from My System and passes it to Web Server

10) Web Server transmits HTTP response 2 to HTTP Client

Hopefully this illustrates my requirement. There are two key points to recognise: responses to the Web Server Library are asynchronous, and several HTTP requests may be passed to My Web Server App while responses are still outstanding.

Additional requirements are:

  1. Embeddable into an existing 'C' application
  2. Small footprint; I don't need all the functionality available in Apache etc.
  3. Efficient; will need to support thousands of requests a second
  4. Allows asynchronous responses to requests; there is a small latency in responses, and given the required request throughput a synchronous architecture is not going to work for me.
  5. Support persistent TCP connections
  6. Support use with Server-Push Comet connections
  7. Open Source / GPL
  8. support for HTTPS
  9. Portable across Linux and Windows; preferably more.

I will be very grateful for any recommendations.

Best Regards

+1  A: 

You could try libmicrohttpd.

Andrew Aylett
libmicrohttpd does not seem to allow the mode of operation I need. I need to process multiple requests across a single connection simultaneously, and libmicrohttpd does not seem to permit this, irrespective of the threading model. Can you confirm my understanding?
Howard May
Assuming you're meaning pipelining, this might be relevant: http://www.themes.freshmeat.net/projects/libmicrohttpd/releases/270855. I don't know whether it will process pipelined requests simultaneously, but it SHOULD support pipelining for GET and HEAD requests.
Andrew Aylett
@Howard, I think it allows the mode of operation you need, if I assume parallel threads can help you out. (I am no fan of threads in general, but they are sometimes useful.)
Amigable Clark Kant
@Amigable, My application will handle thousands of requests a second and may have a latency of O(10ms) rising to O(100ms) or more when congested. This means I would need hundreds if not thousands of threads which I want to avoid.
Howard May
A: 

What you want is something that supports HTTP pipelining. You should familiarise yourself with how pipelining works if you are not already.

Yes, go for libmicrohttpd. It has support for SSL etc. and works on both Unix and Windows.

However, Christopher is spot on in his comment. If you have a startup time for each response, you are not going to gain much from pipelining. However, if only the first request has a significant response time, you may win something.

On the other hand, if each response has a startup time, you may gain a lot by not using pipelining and instead opening a new connection for each object. Then each request can have its own thread, absorbing the startup costs in parallel. All responses will then be sent "at once" in the optimum case. libmicrohttpd supports this mode of operation in its MHD_USE_THREAD_PER_CONNECTION thread model.

Amigable Clark Kant
Thanks Mr Kant. As explained above, while pipelining is necessary it is not sufficient. I need something which allows my application/system using the library to process multiple requests simultaneously. I don't believe libmicrohttpd does.
Howard May
Updated answer with a new paragraph at the bottom.
Amigable Clark Kant
A: 
Christopher
Thanks for your comments, Christopher. I may have of the order of 10 clients connecting, each of which will be placing a heavy load on the system. The latency is within my system: I am putting a web interface on an existing system built on message-passing communication. The latency is the time between receiving an HTTP request and sending an HTTP response. Unless I can receive multiple requests from a TCP connection at a time, each TCP connection will be restricted to a rate of 1/latency. I know this will not meet my requirements, so I need to address it now.
Howard May
A: 

Howard,

Have you taken a look at lighttpd? It meets all of your requirements except that it isn't explicitly an embeddable web server. But it is open source, and compiling it into your application shouldn't be too hard. You can then write a custom plugin to handle your requests.

Byron Whitlock
A: 

Can't believe no one has mentioned nginx. I've read large portions of the source code and it is extremely modular. You could probably get the parts you need working pretty quickly.

Hassan Syed
A: 

uIP or lwIP could work for you. I personally use uIP. It's good for a small number of clients and concurrent connections (or as you call it, "pipelining"). However, it's not as scalable or as fast at serving up content as lwIP, from what I've read. I went with the simplicity and small size of uIP instead of the power of lwIP, as my app usually only has one user.

I've found uIP pretty limited as concurrent connections increase. However, I'm sure that's a limitation of the MAC receive buffers available to me and not of uIP itself. I think lwIP uses significantly more memory in some way to get around this. I just don't have enough Ethernet RAM to support a ton of request packets coming in. That said, I can do background AJAX polling with about a 15 ms latency on a 56 MHz processor.

http://www.sics.se/~adam/software.html

I've actually modified uIP in several ways. Adding a DHCP server and supporting multipart POST for file uploads are the big things. Let me know if you have any questions.

Jeff Lamb