views: 2076
answers: 5
We're working on a Ruby on Rails app that needs to take advantage of HTML5 WebSockets. At the moment, we have two separate "servers" so to speak: our main app running on nginx+passenger, and a separate server using Pratik Naik's Cramp framework (which is running on Thin) to handle the websocket connections.

Ideally, when it comes time for deployment, we'd have the rails app running on nginx+passenger, and the websocket server would be proxied behind nginx, so we wouldn't need to have the websocket server running on a different port.

Problem is, in this setup it seems that nginx is closing the connections to Thin too early. The connection is successfully established to the Thin server, then immediately closed with a 200 response code. Our guess is that nginx doesn't realize that the client is trying to establish a long-running connection for websocket traffic.

Admittedly, I'm not all that savvy with nginx config, so, is it even possible to configure nginx to act as a reverse proxy for a websocket server? Or do I have to wait for nginx to offer support for the new websocket handshake stuff? Assuming that having both the app server and the websocket server listening on port 80 is a requirement, might that mean I have to have Thin running on a separate server without nginx in front for now?
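For context, the kind of proxy config we're using looks roughly like the following (server names and ports are made up for illustration). This is the setup that exhibits the early-close behavior, presumably because nginx speaks HTTP/1.0 to the upstream and doesn't pass the client's Upgrade header through:

  # Illustrative config only -- this does NOT keep the websocket open.
  # nginx talks plain HTTP/1.0 to the upstream and drops the
  # "Upgrade: WebSocket" header sent by the client.
  upstream websocket_server {
      server 127.0.0.1:8080;   # Thin running Cramp
  }

  server {
      listen 80;
      server_name example.com;

      location /websocket {
          proxy_pass http://websocket_server;
      }
  }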

Thanks in advance for any advice or suggestions. :)

-John

+2  A: 

Out of the box (i.e. official sources) Nginx can establish only HTTP 1.0 connections to an upstream (=backend), which means no keepalive is possible: Nginx will select an upstream server, open a connection to it, proxy, cache (if you want) and close the connection. That's it.

This is the fundamental reason frameworks requiring persistent connections to the backend would not work through Nginx (no HTTP/1.1 = no keepalive and no websockets, I guess). Despite this disadvantage there is an evident benefit: Nginx can choose among several upstreams (load balancing) and fail over to a live one in case some of them fail.

Alexander Azarov
Got it, thanks. Essentially then, what I'm trying to do is currently impossible. Maybe someday nginx will support HTTP/1.1 keepalives to backends, but for now I'll have to come up with an alternate solution. Thanks for the response.
John Reilly
+2  A: 

How about Nginx with the new HTTP Push module: http://pushmodule.slact.net/. It takes care of the connection juggling (so to speak) that one might have to worry about with a reverse proxy. It is certainly a viable alternative to Websockets, which are not fully in the mix yet. I know the developer of the HTTP Push module is still working on a fully stable version, but it is in active development. There are versions of it being used in production codebases. To quote the author, "A useful tool with a boring name."

Eric Lubow
Thanks, that's a good suggestion. We actually were using that very module to achieve server push for a while, but now we're wanting to support bi-directional communication... And since we only need to support webkit browsers for our application, we're hoping to go with a pure websocket approach now. But I appreciate the response! :)
John Reilly
+7  A: 

You can't use nginx for this currently, but I would suggest looking at HAProxy. I have used it for exactly this purpose.

The trick is to set long timeouts so that the socket connections are not closed. Something like:

timeout client  86400000 # In the frontend
timeout server  86400000 # In the backend

If you want to serve, say, a Rails and a Cramp application on the same port, you can use ACL rules to detect a websocket connection and use a different backend. So your haproxy frontend config would look something like:

frontend all 0.0.0.0:80
  timeout client    86400000
  default_backend   rails_backend
  acl websocket hdr(Upgrade)    -i WebSocket
  use_backend   cramp_backend   if websocket

For completeness the backend would look like

backend cramp_backend
  timeout server  86400000
  server cramp1 localhost:8090 maxconn 200 check
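And, to fill in the one piece the answer leaves out (the names and port here are my assumption, pointing at wherever the Rails app listens), the default backend might look like:

  backend rails_backend
    server rails1 localhost:8000 maxconn 200 check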
mloughran
This is great, thank you! I haven't used HAProxy before, but I've always been meaning to learn. Looks like I've got a good reason to do so now. :)
John Reilly
A: 

I use nginx to reverse proxy to a comet-style server with long polling connections and it works great. Make sure you configure proxy_send_timeout and proxy_read_timeout to appropriate values. Also make sure your back-end server that nginx is proxying to supports HTTP 1.0, because I don't think nginx's proxy module does HTTP 1.1 yet.

Just to clear up some confusion in a few of the answers: keepalive allows a client to reuse a connection to send another HTTP request. It does not have anything to do with long polling or holding connections open until an event occurs, which is what the original question was asking about. So it doesn't matter that nginx's proxy module only supports HTTP 1.0, which does not have keepalive.
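As a concrete sketch of the directives mentioned above (the location, address, and timeout values are illustrative, not from the answer), a long-polling proxy block might look like:

  # Illustrative: generous timeouts so nginx doesn't tear down a
  # long-polling request while the backend waits for an event.
  location /comet {
      proxy_pass          http://127.0.0.1:8090;
      proxy_read_timeout  3600s;  # how long to wait for the backend's response
      proxy_send_timeout  3600s;  # how long to wait while sending the request
      proxy_buffering     off;    # pass the response through as it arrives
  }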

Mark Maunder
A: 

How about using my nginx_tcp_proxy_module (http://github.com/yaoweibin/nginx_tcp_proxy_module)?

This module is designed for general TCP proxying with Nginx. I think it's also suitable for websockets. And I just added a tcp_ssl_module in the development branch.
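If I understand the module's README correctly, configuration lives in a top-level tcp block rather than the usual http block; a sketch (directive names and ports are from memory and may differ between versions) would be something like:

  # Sketch only -- check the module's README for the exact syntax.
  tcp {
      upstream websockets {
          server 127.0.0.1:8090;   # the Thin/Cramp server
      }

      server {
          listen 80;
          proxy_pass websockets;
      }
  }

Since this proxies raw TCP, nginx never parses the HTTP Upgrade handshake at all, which is how it sidesteps the HTTP/1.0 upstream limitation discussed in the other answers.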

yaoweibin
You **think**, but haven't tested it with WebSocket?
Jonas