tags:
views: 100
answers: 4

I'm trying to speed test jetty (to compare it with using apache) for serving dynamic content.

I'm testing this using three client threads, each requesting again as soon as a response comes back. These are running on a local box (OS X 10.5.8, MacBook Pro). Apache is pretty much straight out of the box (XAMPP distribution), and I've tested Jetty 7.0.2 and 7.1.6.

Apache is giving me spiky times: response times up to 2000ms, but an average of 50ms, and if you remove the spikes (about 2%) the average is 10ms per call. (This was to a PHP hello-world page.)

Jetty is giving me no spikes, but response times of about 200ms.

This was calling the localhost:8080/hello/ page that is distributed with Jetty, starting Jetty with java -jar start.jar.

This seems slow to me, and I'm wondering if it's just me doing something wrong.

Any suggestions on how to get better numbers out of Jetty would be appreciated.

Thanks

A: 

Speeding up or performance-tuning any application or server is really hard to get right, in my experience. You'll need to benchmark several times with different workload models to define what your peak load is. Once you've defined the peak load for the configuration/environment mix you need to tune and benchmark, you might have to run 5+ iterations of your benchmark. Check the configuration of both Apache and Jetty in terms of the number of worker threads processing requests, and get them to match if possible. Here are some recommendations:

  1. Consider the differences between the two environments (GC in Jetty: consider setting your min and max heap sizes to the same value, then run your test).
  2. The load should come from another box. If you don't have a second box/PC/server, take your CPU/core count into account and pin the test client to a specific CPU; do the same for Jetty/Apache.
  3. Given that you can't get another machine to be the stress agent, run several workload models.
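Recommendation 1 above (pinning the JVM heap) can be sketched like this, assuming Jetty's standard start.jar; the 256m value is purely illustrative, not a tuned recommendation:

```shell
# Fix min and max heap to the same size so GC resizing doesn't skew the benchmark
# (256m is an arbitrary example value; size it for your app)
java -Xms256m -Xmx256m -jar start.jar
```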

Moving on to modeling the test, do the following stages:

  1. Run one thread against each configuration for 30 minutes.
  2. Start with 1 thread and go up to 5, waiting 10 minutes between each increase in the count.
  3. Based on the metrics from stage 2, pick a number of threads for the test, and run that many threads concurrently for 1 hour.

Correlate the metrics (response times) from your testing app with the server hosting the application resources (use sar, top and other Unix commands to track CPU and memory); some other process might be impacting your app. (Memory is mainly relevant for Apache; Jetty will be constrained by the JVM memory configuration, so its memory usage should not change once the server is up and running.)
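The multi-threaded testing client described above can be sketched with just the JDK; the class name, thread/request counts, and the 98th-percentile cut-off (to trim the spikes, as in the question) are my own choices, not part of the answer:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LoadTest {
    // Collect one latency sample by issuing a GET and timing it, in ms.
    static long timeRequest(String url) throws Exception {
        long start = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.getResponseCode();
        conn.getInputStream().close();
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Percentile over the samples, e.g. p=0.98 to drop the worst 2% of spikes.
    static long percentile(List<Long> samples, double p) {
        List<Long> sorted = new ArrayList<>(samples);
        Collections.sort(sorted);
        int idx = (int) Math.ceil(p * sorted.size()) - 1;
        return sorted.get(Math.max(idx, 0));
    }

    public static void main(String[] args) throws Exception {
        String url = args.length > 0 ? args[0] : "http://localhost:8080/hello/";
        int threads = 3, requestsPerThread = 100;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Long> samples = Collections.synchronizedList(new ArrayList<>());
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                // Each thread re-requests as soon as the previous response returns.
                for (int i = 0; i < requestsPerThread; i++) {
                    try { samples.add(timeRequest(url)); } catch (Exception ignored) {}
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        System.out.printf("avg=%dms p98=%dms%n",
            samples.stream().mapToLong(Long::longValue).sum() / Math.max(samples.size(), 1),
            percentile(samples, 0.98));
    }
}
```

Varying the thread count per the stages above, and letting each stage run long enough, gives you the per-configuration curves to compare Apache and Jetty on.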

XecP277
A: 

Be aware of the HotSpot compiler.

Methods have to be called several times (1000 times?) before they are compiled into native code.
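A minimal sketch of the effect (the loop body and iteration counts are arbitrary, chosen only to be non-trivial and to exceed a typical HotSpot invocation threshold):

```java
public class WarmupDemo {
    // Arbitrary work to time; the body just has to do something non-trivial.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += (i * 31L) % 7;
        return sum;
    }

    // Time a single call, in microseconds.
    static long timeOnce(int n) {
        long start = System.nanoTime();
        work(n);
        return (System.nanoTime() - start) / 1_000;
    }

    public static void main(String[] args) {
        long cold = timeOnce(10_000);                   // first call: likely interpreted
        for (int i = 0; i < 20_000; i++) work(10_000);  // warm up past the JIT threshold
        long warm = timeOnce(10_000);                   // now likely compiled to native code
        System.out.println("cold=" + cold + "us, warm=" + warm + "us");
    }
}
```

The cold timing is typically several times slower than the warm one, which is why a short benchmark against a freshly started Jetty can look worse than its steady-state performance.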

ckuetbach
A: 

You should definitely check it with a profiler. Here are instructions on how to set up remote profiling with Jetty:

http://sujitpal.sys-con.com/node/508048/mobile

Peter Knego
+4  A: 

Well, since I am successfully running a site with some traffic on Jetty, I was pretty surprised by your observation.

So I just tried your test. With the same result.

So I decompiled the Hello servlet that ships with Jetty. And I had to laugh - it really does include the following line:

 Thread.sleep(200L);

You can see for yourself.

My own experience with Jetty performance: I ran multi threaded load tests on my real-world app where I had a throughput of about 1000 requests per second on my dev workstation...

the.duckman
Hilarious! I wonder what that sleep is for?
Tom Anderson
well done Jetty
unbeli
Awesome find, the.duckman - what would I need to do to get a version of that without the sleep in it running? I don't see a HelloWorld.java or anything relating to HelloWorld in my Jetty distribution. (It's probably obvious, but I'm a Java newbie.)
Michael Anderson
Thx the.duckman. I ended up just following the other HelloWorld tutorials in the documentation. Results were between 1ms and 3ms, with some big spikes similar to my Apache problems. But the much better average response time is promising.
Michael Anderson