tags:
views: 1320
answers: 2

I'm currently running Nginx + PHP-FPM to serve ads through OpenX. My response times are horrible, even under low load, yet CPU and memory usage look fine, so I can't figure out where the bottleneck is.

My current config for nginx and php-fpm is:

    worker_processes  20;
    worker_rlimit_nofile 50000;

    error_log /var/log/nginx/error.log;
    pid /var/run/nginx.pid;

    events {
        worker_connections 15000;
        multi_accept off;
        use epoll;
    }

    http {
        include /etc/nginx/mime.types;

        access_log  /var/log/nginx/access.log;

        sendfile        on;
        tcp_nopush     off;

        keepalive_timeout  0;
        #keepalive_timeout  65;
        tcp_nodelay        on;

        gzip  on;
        gzip_disable "MSIE [1-6]\.(?!.*SV1)";
        gzip_comp_level 2;
        gzip_proxied    any;
        gzip_types    text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
    }

    server {
        listen 80;
        server_name localhost;
        access_log /var/log/nginx/localhost.access.log;

        # Default location
        location / {
            root   /var/www;
            index  index.php;
        }

        # Parse all .php files in the /var/www directory
        location ~ \.php$ {
            fastcgi_pass   localhost:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  /var/www$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_param  QUERY_STRING     $query_string;
            fastcgi_param  REQUEST_METHOD   $request_method;
            fastcgi_param  CONTENT_TYPE     $content_type;
            fastcgi_param  CONTENT_LENGTH   $content_length;
            fastcgi_ignore_client_abort     off;
        }
    }

PHP-FPM:

    rlimit_files = 50000
    max_children = 500

I've only included the PHP-FPM parameters I've changed from the defaults.

Does anyone have any tips on how I can optimize this setup so I can serve more requests?

Thanks

+1  A: 

Do you have 20 processors or cores on your machine? If not, try the defaults for the events section on your OS. You probably want more FastCGI processes rather than more nginx workers; starting with 2-4 nginx workers is usually enough.
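A quick way to check (assuming a Linux box) how many cores the hardware actually has, to compare against worker_processes:

```shell
# Count the CPU cores available to this machine (Linux)
nproc
# or, without coreutils:
grep -c ^processor /proc/cpuinfo
```

If this prints a number well below 20, the extra nginx workers are mostly just contending with each other.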

Todd
+1  A: 

You should definitely reduce the number of workers, as I doubt you have 20 cores/processors. I'd also look into your database server; there's a good chance the problem is there.

Additionally, you've upped worker_rlimit_nofile to 50000. Be aware that operating systems usually set the open-file limit to 1024 by default; you can check the current limit by typing ulimit -n
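For example (on a typical Linux shell; the numbers you see will vary by system):

```shell
# Show the current per-process open-file limits
ulimit -Sn   # soft limit (what actually applies right now)
ulimit -Hn   # hard limit (ceiling the soft limit can be raised to)
```

If the soft limit is still 1024, the worker_rlimit_nofile 50000 setting can't take full effect.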

You can raise the hard limit of NOFILE (number of open files) by executing ulimit -n 50000 in an init.d script, or visit this other question on Stack Overflow to learn how to use limits.conf to permanently set limits system-wide.
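As a sketch, the limits.conf approach mentioned above looks like this (illustrative values matching the 50000 used earlier; the file lives at /etc/security/limits.conf on most distributions):

```
# /etc/security/limits.conf -- raise open-file limits for all users
# (illustrative values; adjust to your needs)
*    soft    nofile    50000
*    hard    nofile    50000
```

The new limits take effect on the next login session, so restart the nginx and PHP-FPM services afterwards.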

Adam Benayoun