views: 3628
answers: 8
I need some help from some linux gurus. I am working on a webapp that includes a comet server. The comet server runs on localhost:8080 and exposes the url localhost:8080/long_polling for clients to connect to. My webapp runs on localhost:80.

I've used nginx to proxy requests from port 80 to the comet server (localhost:80/long_polling proxied to localhost:8080/long_polling). However, I have two gripes with this solution:

  1. nginx gives me a 504 Gateway Time-out after a minute, even though I changed EVERY single timeout setting to 600 seconds
  2. I don't really want nginx to have to proxy to the comet server anyway - the nginx proxy is not built for long-lasting connections (up to half an hour, possibly). I would rather let the clients connect directly to the comet server and let the comet server deal with them.

So my question is: is there any linux trick that allows me to expose localhost:8080/long_polling to localhost:80/long_polling without using the nginx proxy? There must be something. That's why I think this question can probably be best answered by a linux guru.

The reason I need /long_polling exposed on port 80 is so that I can use AJAX to connect to it (the AJAX same-origin policy).

This is my nginx proxy.conf for reference:

proxy_redirect              off;                                                                                                                         
proxy_set_header            Host $host;
proxy_set_header            X-Real-IP $remote_addr;
proxy_set_header            X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size        10m;
client_body_buffer_size     128k;
proxy_connect_timeout       600;
proxy_send_timeout          600;
proxy_read_timeout          600;
proxy_buffer_size           4k;
proxy_buffers               4 32k;
proxy_busy_buffers_size     64k;
proxy_temp_file_write_size  64k;
send_timeout                600;
proxy_buffering             off;
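For reference, these directives only take effect where the file is included. A minimal sketch of a server block that would use this file (the paths and the proxy_pass target are assumptions for illustration, not the actual config):

```nginx
server {
    listen 80;
    server_name localhost;

    # Proxy only the long-polling endpoint to the comet server.
    location /long_polling {
        include /etc/nginx/proxy.conf;
        proxy_pass http://127.0.0.1:8080/long_polling;
    }
}
```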
+3  A: 

I don't think that is possible...

localhost:8080/long_polling is a URI - more exactly, it should be http://localhost:8080/long_polling. In HTTP, that URI is resolved by requesting /long_polling from the server at host 'localhost' on port 8080 - that is, opening a TCP connection to 127.0.0.1:8080 and sending

GET /long_polling HTTP/1.1
Host: localhost:8080

plus some additional HTTP headers. I haven't yet heard that a single port can be bound by two processes at once...

Actually, if I understand well, nginx was designed to be a scalable proxy. Also, they claim it needs 2.5 MB for 10000 idle HTTP connections, so that really shouldn't be a problem...

What comet server are you using? Could you maybe let the comet server proxy to a web server? Normal HTTP requests should be handled quickly...

back2dos
Well, if nginx uses that small an amount of memory, I would love to get it working with nginx as the front proxy server then. I just need to get rid of that "504 Gateway Time-out" error that occurs after about a minute (it always happens after about 55-65 seconds). Thanks for your answer. I am using node.js as a COMET server.
Chris
A: 

Without doing some serious TCP/IP mangling, you can't expose two applications on the same TCP port on the same IP address. Once nginx has started to service the connection, it can't pass it to another application; it can only proxy it.

So either use another port, use another IP address (it could be on the same physical machine), or live with the proxy.

Edit: I guess nginx is timing out because it doesn't see any activity for a long time. Maybe sending a null message every few minutes could keep the connection from timing out.

Javier
Ok - I guess the linux gurus have spoken :) It's just not easily possible. I'll have to figure out how to make nginx not time out, then. Thanks so much! If anyone reading this has any idea why nginx might be timing out, let me know!
Chris
+2  A: 

Try

proxy_next_upstream error;

The default is

proxy_next_upstream error timeout;

The timeout cannot be more than 75 seconds.

http://wiki.nginx.org/NginxHttpProxyModule#proxy_next_upstream

http://wiki.nginx.org/NginxHttpProxyModule#proxy_connect_timeout

z8000
On a related note, I stumbled upon your email to the nginx mailing list in which you mention that you are using node.js. You are *exactly* one step ahead of me in my own project. I'm also planning on using node.js behind nginx. Please keep us posted on how this works out! I have a couple of node.js projects on github, FWIW: http://github.com/fictorial/
z8000
Hi Brian! node.js is sweet. The COMET server was easy to write, and is behaving well. Not sure yet how it'll hold up in production.
Chris
Are you at all interested in sharing what you've worked on? A COMET server running behind a production-ready reverse proxy like nginx (posted to the node.js Google group) would be a GREAT way to get more people interested in node.js and would also help me! ;)
z8000
Re: production use... You will find that node.js does have a memory leak right now. See http://groups.google.com/group/nodejs/browse_thread/thread/a8d1dfc2fd57a6d1/9a8c7b2add4c3257#9a8c7b2add4c3257
z8000
+4  A: 

I actually managed to get this working now. Thank you all. The reason nginx was returning 504 timeouts was a silly one: I hadn't included proxy.conf in my nginx.conf, like so:

include /etc/nginx/proxy.conf;

So, I'm keeping nginx as a frontend proxy to the COMET server.

Chris
could you post your nginx.conf and proxy.conf for reference?
z8000
+3  A: 

Here's my nginx.conf and my proxy.conf. Note, however, that the proxy.conf is overkill - I was just setting all of these while trying to debug my program.

/etc/nginx/nginx.conf

worker_processes  1;                                                                                                                                     
user www-data;

error_log  /var/log/nginx/error.log debug;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include /etc/nginx/proxy.conf;

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    access_log  /var/log/nginx/access.log;

    sendfile        on;
    tcp_nopush     on;

    keepalive_timeout  600;
    tcp_nodelay        on;

    gzip  on;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

/etc/nginx/proxy.conf

proxy_redirect              off;                                                                                                                         
proxy_set_header            Host $host;
proxy_set_header            X-Real-IP $remote_addr;
proxy_set_header            X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size        10m;
client_body_buffer_size     128k;
proxy_connect_timeout       6000;
proxy_send_timeout          6000;
proxy_read_timeout          6000;
proxy_buffer_size           4k;
proxy_buffers               4 32k;
proxy_busy_buffers_size     64k;
proxy_temp_file_write_size  64k;
send_timeout                6000;
proxy_buffering             off;
proxy_next_upstream error;
Chris
thanks for sharing it!
z8000
+1  A: 

There is now a Comet plugin for Nginx. It will probably solve your issues quite nicely.

http://www.igvita.com/2009/10/21/nginx-comet-low-latency-server-push/

Phill Kenoyer
A: 

Why are you setting buffers:

proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;

and then turning buffering off:

proxy_buffering off;

?

Timon
A: 

You might want to try listen(80) on the node.js server instead of 8080 (I presume you are using it as an async server?) and potentially skip nginx altogether. I use the Connect middleware and Express to serve static files and to deal with the caching that would normally be handled by nginx. If you want to run multiple instances of node (which I would advise), you might want to look into node.js itself as a proxy / load balancer to the other node instances, rather than nginx as your gateway. I ran into a problem with this when I was serving too many static image files at once, but after I put the images on S3 it stabilized. nginx MAY be overkill for what you are doing. Try it and see. Best of luck.

dryprogrammers