views: 545

answers: 6

How do I measure how long a client has to wait for a request?

On the server side it is easy, through a filter for example. But if we want to take into account the total time, including latency and data transfer, it gets difficult.

Is it possible to access the underlying socket to see when the request is finished? Or is it necessary to do some JavaScript tricks? Maybe through clock synchronisation between browser and server? Are there any premade JavaScript libraries for this task?

A: 

You could set a 0-byte socket send buffer (and I don't exactly recommend this) so that when your blocking call to HttpResponse.send() returns, you have a closer idea as to when the last byte left, but travel time is still not included. Eek, I feel queasy for even mentioning it. You can do this in Tomcat with connector-specific settings. (Tomcat 6 Connector documentation)
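Here is a minimal sketch of the underlying idea using a raw java.net.Socket rather than Tomcat's connector settings (the host, port and payload are placeholders, and Java's API requires a positive send buffer size, so a literal 0-byte buffer is not actually possible): with almost no local buffering, the write cannot return until most of the data has been handed to the network, which brings the measured time closer to actual transmission.

    import java.io.OutputStream;
    import java.net.Socket;

    // Sketch only: shows why a tiny send buffer makes the write a better proxy
    // for "the last byte left this machine". The OS treats the buffer size as a
    // hint and may enforce a minimum, and network travel time is still excluded.
    public class SmallSendBufferSketch {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("example.com", 80)) {  // placeholder host/port
                socket.setSendBufferSize(1);   // as small as the API allows (must be > 0)
                socket.setTcpNoDelay(true);    // keep Nagle from delaying small writes

                byte[] payload = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n".getBytes("US-ASCII");
                long start = System.currentTimeMillis();
                OutputStream out = socket.getOutputStream();
                out.write(payload);
                out.flush();
                System.out.println("write()+flush() took ~"
                        + (System.currentTimeMillis() - start) + " ms");
            }
        }
    }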

Or you could come up with some sort of JavaScript timestamp approach, but I would not expect to set the client clock. Multiple calls to the web server would have to be made:

  • timestamp query
  • the real request
  • reporting the data

And this approach would cover latency, although you would still have some variance due to jitter.
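A rough sketch of the two extra server-side endpoints this would need (the class names and the elapsedMillis parameter are invented for illustration): one answers the initial timestamp query so the client script can estimate its clock offset, and the other receives the measured duration after the real request.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Answers the "timestamp query": the client compares this value with its own
    // Date.now() to estimate the offset between the two clocks.
    public class TimestampServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/plain");
            resp.getWriter().print(System.currentTimeMillis());
        }
    }

    // Receives the "reporting the data" call once the real request has completed.
    class TimingReportServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String elapsed = req.getParameter("elapsedMillis"); // measured by the client script
            getServletContext().log("client-observed request time: " + elapsed + " ms");
            resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
        }
    }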

Hmmm...interesting problem you have there. :)

Stu Thompson
+3  A: 

If you want to measure it from your browser to simulate a client request, you can watch the Net tab in Firebug to see how long each piece of the page takes to download, and in what order.

RedWolves
+2  A: 

There's no way you can know how long the client had to wait purely from the server side. You'll need some JavaScript.

You don't want to synchronize the client and server clocks; that's overkill. Just measure the time between when the client makes the request and when it finishes displaying its response.

If the client is using AJAX, this can be pretty easy: call new Date().getTime() to get the time in milliseconds when the request is made, and compare it to the time after the result is parsed. Then send this timing info to the server in the background.

For a non-AJAX application, when the user clicks on a request, use JavaScript to send the current timestamp (from the client's point of view) to the server along with the query, and pass that same timestamp back through to the client when the resulting page is reloaded. In that page's onLoad handler, measure the total elapsed time, and then send it back to the server - either using an XmlHttpRequest or tacking on an extra argument to the next request made to the server.
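To give a feel for the server half of this approach, here is a hedged sketch (the filter name and the "clientStart" parameter are my own, not anything from the answer): a filter can pass the client's timestamp straight through to the rendered page, so the onLoad handler has the original start time available without touching the application's controllers.

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    // Republishes the hypothetical "clientStart" request parameter (appended by
    // the page's JavaScript when the user clicks a link) as a request attribute,
    // so the page template can echo it back into the onLoad handler that does
    // the final subtraction and reports the elapsed time.
    public class ClientTimestampPassThroughFilter implements Filter {
        @Override
        public void init(FilterConfig filterConfig) { }

        @Override
        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            String clientStart = request.getParameter("clientStart");
            if (clientStart != null) {
                request.setAttribute("clientStart", clientStart);  // e.g. a JSP emits: var start = ${clientStart};
            }
            chain.doFilter(request, response);
        }

        @Override
        public void destroy() { }
    }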

dmazzoni
The problem with this approach is that it cannot be added transparently to an existing non-AJAX implementation. It will be tricky to add the extra date parameter to every request made, but this method sounds like it would really measure the _whole_ time it takes to travel from client to server and back to the client.
Andreas Petersson
+3  A: 

You could wrap the HttpServletResponse object and the OutputStream returned by the HttpServletResponse. When output starts writing you could set a startDate, and when it stops (or when it's flushed, etc.) you can set a stopDate.

This can be used to calculate the length of time it took to stream all the data back to the client.

We're using it in our application and the numbers look reasonable.

Edit: you can set the start date in a ServletFilter to get the length of time the client waited. What I described above gives you the length of time it took to write the output to the client.
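One possible shape of that wrapper, written against the Servlet 2.5 API that Tomcat 6 uses (class and method names are mine, and a complete version would also wrap getWriter() for character output):

    import java.io.IOException;
    import javax.servlet.ServletOutputStream;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpServletResponseWrapper;

    // The first write records a start time; every write/flush/close updates the
    // stop time. Note this measures writes into the servlet and socket buffers,
    // not arrival at the client (see the comments below this answer).
    public class TimedResponseWrapper extends HttpServletResponseWrapper {
        private long firstWriteMillis = -1;
        private long lastWriteMillis = -1;

        public TimedResponseWrapper(HttpServletResponse response) {
            super(response);
        }

        @Override
        public ServletOutputStream getOutputStream() throws IOException {
            final ServletOutputStream delegate = super.getOutputStream();
            return new ServletOutputStream() {
                @Override
                public void write(int b) throws IOException {
                    if (firstWriteMillis < 0) {
                        firstWriteMillis = System.currentTimeMillis();
                    }
                    delegate.write(b);
                    lastWriteMillis = System.currentTimeMillis();
                }

                @Override
                public void flush() throws IOException {
                    delegate.flush();
                    lastWriteMillis = System.currentTimeMillis();
                }

                @Override
                public void close() throws IOException {
                    delegate.close();
                    lastWriteMillis = System.currentTimeMillis();
                }
            };
        }

        /** Milliseconds spent writing the body, or -1 if nothing was written. */
        public long getWriteTimeMillis() {
            return firstWriteMillis < 0 ? -1 : lastWriteMillis - firstWriteMillis;
        }
    }

A ServletFilter could then wrap the response, call chain.doFilter(request, wrapper), and log wrapper.getWriteTimeMillis() alongside its own overall start time.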

ScArcher2
But this does not take into account the socket buffer or the latency that the OP mentions :(
Stu Thompson
This looks promising, I'll give it a shot.
Andreas Petersson
Just tried this out and it did not work: bytes seem to be written to the output stream long before they leave the physical machine.
Andreas Petersson
A: 

Check out Jiffy-web, developed by Netflix to give them a more accurate view of total page-to-page rendering time.

Dave Cheney
A: 

I had the same problem, but this JavaOne paper really helped me to solve it. I would suggest going through it; it basically uses JavaScript to calculate the time.

Shamik