I'm having an issue with memcached. I'm not sure if it's memcached, PHP, or TCP sockets, but every time I run a benchmark with Apache ab at a concurrency of 50 or more against a page that uses memcached, some of the requests fail with the error (99) Cannot assign requested address.

When I run a concurrency test of 5000 against a plain phpinfo() page, everything is fine: no failed requests.

It seems like memcached cannot support high concurrency, or am I missing something? I'm running memcached with the -c 5000 flag.


Server: (2) Quad Core Xeon 2.5 GHz, 64 GB RAM, 4 TB RAID 10, 64-bit openSUSE 11.1
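
For reference, a typical way to start memcached with that connection limit (the -m and -u values here are illustrative, not from my actual setup; -c 5000 is the relevant part):

# -d: run as a daemon, -m: cache size in MB,
# -u: user to run as (required when started as root),
# -c: maximum simultaneous connections
memcached -d -m 64 -c 5000 -u memcached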

A: 

I'm using just a 4-byte integer as a page counter for testing purposes. Other PHP pages work fine even with 5,000 concurrent connections and 100,000 requests. This server has a lot of horsepower and RAM, so I know that's not the issue.

The page that dies has nothing but 5 lines of code to test the page counter using memcached. Making the connection gives me this error: (99) Cannot assign requested address. (On Linux, errno 99 is EADDRNOTAVAIL; on the connecting side that usually means the client has run out of local ephemeral ports.)

  • The problem starts to arise at around 50 concurrent connections.
  • I'm running memcached with -c 5000 for 5000 concurrent connections.
  • Everything is on one machine (localhost).
  • The only processes running are SSH, lighttpd, PHP, and memcached.
  • There are no users connected to this box (it's a test machine).
  • The Linux open-files limit (nofile) is set to 32000.
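
While the benchmark runs, a quick way to check for ephemeral-port exhaustion (assuming net-tools is installed; ss -s gives the same picture on newer systems):

# Count sockets stuck in TIME_WAIT while ab is running; a figure in the
# tens of thousands means the ephemeral port range is being exhausted.
netstat -an | grep -c TIME_WAIT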


That's all I have for now; I'll post more information as I find more. It seems like there are a lot of people with this problem.

A: 

I just tested something similar with a file:

<?php
// Connect to the local memcached instance.
$mc = memcache_connect('localhost', 11211);
// Read the counter, increment it, and write it back with a 30-second expiry.
$visitors = memcache_get($mc, 'visitors') + 1;
memcache_set($mc, 'visitors', $visitors, 0, 30);
echo $visitors;

running on a tiny virtual machine with nginx, php-fastcgi, and memcached.

I ran ab -c 250 -t 60 http://testserver/memcache.php from my laptop on the same network without seeing any errors.

Where are you seeing the error? In your PHP error log?
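
If it does turn out to be client-side ephemeral-port exhaustion, one thing worth trying (not something I've benchmarked here) is a persistent connection, so each FastCGI worker reuses one socket to memcached instead of opening a fresh one per request. A minimal variant of the counter above (memcache_pconnect is part of the same pecl/memcache extension):

<?php
// Same counter, but with a persistent connection: the TCP socket to
// memcached is reused across requests instead of being opened and
// closed (and left in TIME_WAIT) on every hit.
$mc = memcache_pconnect('localhost', 11211);
$visitors = memcache_get($mc, 'visitors') + 1;
memcache_set($mc, 'visitors', $visitors, 0, 30);
echo $visitors;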

A: 

OK, I've figured it out. Maybe this will help others who have the same problem.

It seems like the issue was a combination of things; I've collected the final config snippets after the list.

  1. Set server.max-worker in lighttpd.conf to a higher number. Original: 16, now: 32.

  2. Turned off keep-alive in lighttpd.conf; it was keeping connections open for too long: server.max-keep-alive-requests = 0

  3. Raised the ulimit -n open-files limit: ulimit -n 65535

  4. If you're on Linux, use: server.event-handler = "linux-sysepoll" and server.network-backend = "linux-sendfile"

  5. Increased max-fds: server.max-fds = 2048

  6. Lowered the TCP TIME_WAIT timeouts so closed connections are recycled faster. In /etc/sysctl.conf add: net.ipv4.tcp_tw_recycle = 1 and net.ipv4.tcp_fin_timeout = 3

     Make sure you force a reload with: /sbin/sysctl -p
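
For reference, here are those changes collected in one place (same values as in the steps above):

# lighttpd.conf -- tuning from steps 1-5
server.max-worker              = 32
server.max-keep-alive-requests = 0
server.event-handler           = "linux-sysepoll"
server.network-backend         = "linux-sendfile"
server.max-fds                 = 2048

# /etc/sysctl.conf -- tuning from step 6 (reload with /sbin/sysctl -p)
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 3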


After making these changes, my server now handles 30,000 concurrent connections and 1,000,000 requests without any failed requests or write errors in Apache ab.

Command used to benchmark: ab -n 1000000 -c 30000 http://localhost/test.php

My Apache can't get anywhere close to this benchmark. Lighttpd makes me laugh at Apache now; Apache crawls at around 200 concurrency.