I'm interested in gathering some data about requests per second. The definition of "requests per second" I use is: a user requests a certain piece of functionality provided by a URL (this means requests != hits, since one request may consist of multiple hits). So how many requests per second per server do you serve? If you don't maintain your own app, perhaps you know values from other apps.

Please provide a short description of the application (lots of reads, lots of writes, Web 2.0 stuff, ...) and perhaps a source (if you are reporting on someone else's app).

Thanks a lot, I'm really interested in this.

+3  A: 

You won't really get statistical data here, just anecdotal evidence. But that may be interesting too.

Avi
My intention is to get a clearer view of this metric, and therefore anecdotal evidence is also very welcome :)
Mork0075
+3  A: 

I'm working on a "PaaS" application used to build dynamic applications. Pretty much a glorified form builder (But a lot more complex).

The throughput varies, but I get anywhere between 100 and 1,000 requests per second depending on the complexity of the app. In most cases I get around 500.

(This is on a modern quad-core server)

Zoomzoom83
And this was measured with which kind of tool and which request type (one particular form?), or does it come from your logfile?
Mork0075
A combination of ab, tsung, and funkload.
Zoomzoom83
+3  A: 

Although this is a vague question, I can give one data point. A skeletal web service that accepts simple XML POST requests and returns a simple XML response (both within 100 bytes or so) could serve about 2,500 requests per second per core on a standard Java servlet container (Jetty 6, Tomcat 6). On a quad-core system I measured 9,000 rps, using enough remote clients to saturate the server. The test had HTTP 1.1 connection reuse enabled, and due to the modest number of distinct clients this could be more optimistic than real-world usage.

But then again: once connected to just a single external service (an external data lookup backed by a BDB), throughput dropped a lot, to perhaps 500 per core (limited by the throughput of the backend service, not by communication overhead). It is usually external services (DBs etc.) that limit throughput, not simple request/response handling or data serialization (as long as that's done properly).

In the end, the actual rate at which requests were received was much, much lower; usually in the range of a couple of requests per second, and even peaks were barely more than 100 per second. Throughput was throttled by clients being unable to feed enough data.
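The backend-limited numbers above line up with Little's Law: with a fixed number of concurrent workers, sustainable throughput is capped at concurrency divided by per-request latency. A minimal sketch of that arithmetic (the 2 ms backend latency and the class/method names are illustrative, not from this answer):

```java
// Little's Law sketch: throughput <= concurrency / latency.
// The latency figure (2 ms) is a hypothetical example.
public class ThroughputEstimate {

    /**
     * Maximum sustainable requests/sec for a given number of
     * concurrent workers, each blocked for latencyMillis per request.
     */
    static double maxThroughput(int concurrentWorkers, double latencyMillis) {
        return concurrentWorkers * 1000.0 / latencyMillis;
    }

    public static void main(String[] args) {
        // One worker per core, each request waiting ~2 ms on a backend lookup:
        System.out.println(maxThroughput(1, 2.0)); // 500.0 requests/sec per core
        // Four cores (or four workers) under the same backend latency:
        System.out.println(maxThroughput(4, 2.0)); // 2000.0 requests/sec total
    }
}
```

This is why a 2 ms backend call caps a single-threaded handler around 500 rps regardless of how fast the request parsing itself is; adding concurrency raises the ceiling until the backend saturates.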

StaxMan
+3  A: 

Another question asking similar things

OpenStreetMap seems to get 10-20 requests per second.

Wikipedia seems to get 30,000 to 70,000 per second spread over 300 servers (100 to 200 requests per second per machine, most of which are caches).

Geograph is getting 7,000 images per week (roughly 1 upload per 86 seconds).

OJW
+5  A: 

Sadly some people couldn't "believe" my results, so please disregard my previous answer.

I'm now just posting the code for a client-side JUnit test, which I think is quite good for doing multi-threaded benchmarks.

Please use this code to test local or remote web resources as you like.

package helloworld;

import java.io.*;
import java.net.*;
import java.util.*;
import org.junit.*;

public class _ParallelFloodHttpRequests {

final static int BUFFER_SIZE = 4096;

final static int N_THREADS = 2;

final List<URL> urls;

final Object globalLock = new Object();

final int[] requestsPerThread = new int[N_THREADS];

final int[] ioExceptionsPerThread = new int[N_THREADS];

public _ParallelFloodHttpRequests()
    throws MalformedURLException {

  urls = Arrays.asList(new URL[]{
        new URL("http://0.0.0.0:8000/YOUR_LINK")
      });
}

public final static void inputStreamToOutputStream(InputStream is, OutputStream os)
    throws IOException {

  BufferedInputStream bis = new BufferedInputStream(is, BUFFER_SIZE);
  byte[] buffer = new byte[BUFFER_SIZE];

  try {
    for (;;) {
      int nRead = bis.read(buffer);
      if (nRead < 0) { // -1 signals end of stream
        break;
      }
      os.write(buffer, 0, nRead);
    }
  } finally {
    bis.close(); // closing the buffered stream also closes 'is'
  }
}

public final static String urlToString(URL url)
    throws IOException {

  ByteArrayOutputStream baos = new ByteArrayOutputStream();
  inputStreamToOutputStream(url.openStream(), baos);
  baos.close();

  return baos.toString();
}

final class MyThread
    implements Runnable {

  private final int myId;

  private final List<URL> urls;

  private int ownRequests = 0;

  private int ownIoExceptions = 0;

  public MyThread(int myId, List<URL> inputUrls) {
    this.myId = myId;
    urls = new ArrayList<URL>();

    List<URL> tempUrls = new LinkedList<URL>(inputUrls);
    Random random = new Random(myId);

    for (int i = 0; i < inputUrls.size(); ++i) {
      URL url = tempUrls.remove(random.nextInt(tempUrls.size()));
      urls.add(url);
    }
  }

  @Override
  public void run() {
    for (int i = 1; i < Integer.MAX_VALUE; ++i) {
      try {
        doRequest(urls.get(i % urls.size()));
      } catch (IOException e) {
        ++ownIoExceptions;
      }

      ++ownRequests;

      if ((i % 100) == 0) {
        synchronized (globalLock) {
          requestsPerThread[myId] = ownRequests;
          ioExceptionsPerThread[myId] = ownIoExceptions;
        }

        Thread.yield();
      }
    }
  }

  private void doRequest(URL url)
      throws IOException {

    urlToString(url);
  }

}

@Test
public void multiThreaded()
    throws MalformedURLException {

  Runnable timer = new Runnable() {

    @Override
    public void run() {
      long start = System.currentTimeMillis(), current;

      try {
        for (;;) { // this is an endless loop, don't you know it?
          Thread.sleep(5000);

          int globalRequests = 0;
          int globalIoExceptions = 0;

          synchronized (globalLock) {
            for (int i = 0; i < N_THREADS; ++i) {
              globalRequests += requestsPerThread[i];
              globalIoExceptions += ioExceptionsPerThread[i];
            }

            System.out.print("requests per thread: ");
            for (int i = 0; i < N_THREADS; ++i) {
              System.out.print("   #" + i + ": " + requestsPerThread[i]);
            }
          }

          current = System.currentTimeMillis();
          int rate =
              (int) ((double) globalRequests / ((current - start) /
              1000.0));

          System.out.print("\n" + rate + " requests / sec, time: " +
              ((current - start) / 1000) + "s, requests: " +
              globalRequests + " ;   ");

          if (globalIoExceptions > 0) {
            System.out.print("IO exceptions so far: " + globalIoExceptions);
          }

          System.out.println();
        }
      } catch (InterruptedException e) {
        // timer stopped via timerThread.interrupt() when all workers finish
      }
    }

  };

  Thread timerThread = new Thread(timer);
  Thread[] threads = new Thread[N_THREADS];

  for (int i = 0; i < N_THREADS; ++i) {
    threads[i] = new Thread(new MyThread(i, urls));
  }

  timerThread.start();
  for (int i = 0; i < N_THREADS; ++i) {
    threads[i].start();
  }

  try {
    for (int i = 0; i < N_THREADS; ++i) {
      threads[i].join();
    }

    timerThread.interrupt();
  } catch (InterruptedException e) {
    // test interrupted; nothing to clean up
  }
}

}

Output looks something like:

requests per thread:    #0: 11300   #1: 11100
4479 requests / sec, time: 5s, requests: 22400 ;   
requests per thread:    #0: 23000   #1: 25100
4809 requests / sec, time: 10s, requests: 48100 ;   
requests per thread:    #0: 35600   #1: 38500
4939 requests / sec, time: 15s, requests: 74100 ;

Please note that the per-thread statistics are only written to the shared arrays every 100 requests (to avoid synchronizing too often).

ivan_ivanovich_ivanoff
What does the website you're testing return? A website like google.com, which is very clean and highly optimized, takes about 200ms to return all bytes. You've said that you can serve 10,000 requests/second, so one request takes 0.1ms. I think we're talking about different domains.
Mork0075
I was talking about a local test (benchmark client and server on the same machine). The produced result was very simple: XML data under 1 KByte. Marshalling was done using JAXB. The data I was testing can be seen in my blog post: http://unstablenightlytrunksnapshots.blogspot.com/2009/01/custom-uris.html
ivan_ivanovich_ivanoff
Oh, a down vote again! :'( If I only knew for what... ;)
ivan_ivanovich_ivanoff
+1  A: 

I'm developing a small browser game in C++ using FastCGI and get about 180 requests per second on an older machine (Athlon 1800+), but I don't do any disk reads/writes, so your mileage may vary.

tstenner
+2  A: 

Some benchmarks comparing a C++ web framework (CppCMS) with PHP: http://cppcms.sourceforge.net/wikipp/en/page/benchmarks

Artyom
+1  A: 

Here is my anecdotal evidence.

I have a small data-driven ASP.NET 2.0 site that's running in my garage on a P4 2.0 GHz with 1GB of RAM. The PC is a really old Dell, purchased in 2001, running Windows XP SP2 (yes). The site actually runs inside a VM (Microsoft Virtual Server 2005) using Windows 2000 and is allocated 512 MB. The VM contains both the database (SQL Server 2000) and the site. The reason the site runs inside a VM is that I also use the PC as a media server.

I ran a couple of benchmarks a long time ago using, iirc, the Application Stress Test utility (I believe it used to come with Visual Studio). The result was that it could easily handle 30-40 simultaneous requests, which is more than enough for this type of site.

AngryHacker
+2  A: 

Although this is an old question, I'm going to add my two cents because I'm also looking for anecdotal evidence on this question. No one seems willing to answer these types of questions because there are just too many variables involved: hardware, software, connection speeds, caching, coding skill, app-pool settings. However, it is nice to see what can be accomplished when the planets align nicely.

I manage three identical web servers running IIS6 on Windows 2003. Each has two dual-core 64-bit processors - that's four cores - and 2GB of RAM. Each server hosts the same 10 identical web sites, and they all sit behind a load balancer that distributes requests equally.

Some of the sites serve ASPX, ASMX, or ASHX pages; others serve only static content. Some are legacy sites and receive few requests, less than 1 per minute. A couple of sites receive most of the traffic, greater than 80% of the total. Each web site runs under its own application pool, and the two busiest sites have a web garden configuration of two.

If I look at just the ASP.NET Apps v2.0.50727 Requests/Sec counter (which excludes static content), each server individually processes between 100 and 150 requests per second. There are no Requests In Application Queue, so no backlog is occurring. These ASP.NET pages have been heavily optimized using both the built-in caching mechanisms and manual caching where needed for fine tuning.
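The "manual caching" idea above can be sketched as a small expiring cache in front of an expensive lookup (in Java, since the thread's other code is Java; the class name, method names, and 30-second TTL are illustrative, not from this answer):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal expiring response cache, sketching the "manual caching" idea.
// Names and the TTL are hypothetical, chosen for illustration only.
public class ResponseCache {

    private static final long TTL_MILLIS = 30_000; // keep entries for 30 s

    private static final class Entry {
        final String value;
        final long expiresAt;
        Entry(String value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();

    /** Returns the cached value for key, recomputing it when missing or expired. */
    public String get(String key, Function<String, String> compute) {
        long now = System.currentTimeMillis();
        Entry e = cache.get(key);
        if (e == null || e.expiresAt < now) {
            e = new Entry(compute.apply(key), now + TTL_MILLIS);
            cache.put(key, e);
        }
        return e.value;
    }
}
```

On a cache hit this turns a database round trip into a hash-map lookup, which is typically where the gap between tens and hundreds of requests per second per machine comes from.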

Charles