Let's be honest, most software that developers produce has quite modest performance requirements, e.g. systems serving perhaps hundreds of requests per second, if that.

But let's assume for a moment (or even dream) that you were involved in the "next big thing" (whatever that means) and you wanted to put some sort of performance statistics logging in place to help you out when all those users come flying in.

Performance statistics logging: how would you approach this requirement? Would you use some sort of generic framework for this, or roll your own solution? What would you log, and how granular would you make it?

Or would you not even bother putting anything in place, and instead deal with this issue when it actually becomes an issue?

It would be really interesting to hear your thoughts on this topic.

A: 

I'm just throwing this out there: log the time of each request event, and also log the size of the packet sent. Stack up all the event logs, then replay them. Use counters, then take a differential over time to get the rate; you can graph the result and observe patterns. Basically, delta bytes per second is a good alarm.
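A rough sketch of that idea in Java (the class name, the one-second sampling interval, and the alarm threshold are all placeholders I picked for illustration): counters accumulate request counts and bytes, and a scheduled task takes the difference between samples to derive a rate that can be graphed or used as an alarm.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical sketch: accumulate request counts and bytes, then sample
    // the delta periodically to derive a rate (e.g. bytes per second).
    public class ThroughputMonitor {
        private final AtomicLong totalBytes = new AtomicLong();
        private final AtomicLong totalRequests = new AtomicLong();

        // Call this from the request-handling path.
        public void recordRequest(long packetSizeBytes) {
            totalRequests.incrementAndGet();
            totalBytes.addAndGet(packetSizeBytes);
        }

        // Sample the counters every second and log the difference.
        public void start() {
            ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(new Runnable() {
                private long lastBytes;
                private long lastRequests;

                public void run() {
                    long bytes = totalBytes.get();
                    long requests = totalRequests.get();
                    long deltaBytesPerSec = bytes - lastBytes;     // rate over the 1 s window
                    long deltaRequestsPerSec = requests - lastRequests;
                    lastBytes = bytes;
                    lastRequests = requests;
                    System.out.println(System.currentTimeMillis()
                            + " bytes/s=" + deltaBytesPerSec
                            + " req/s=" + deltaRequestsPerSec);
                    // Simple alarm on delta bytes per second; the threshold is arbitrary here.
                    if (deltaBytesPerSec > 10000000L) {
                        System.err.println("ALERT: throughput above threshold");
                    }
                }
            }, 1, 1, TimeUnit.SECONDS);
        }
    }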

yan bellavance
+1  A: 

I asked myself the same question recently. I developed my own stats counter code, but I'm not completely happy with the results (too much heap consumed for stats; I made a bad choice when deciding on a memory model for the storage).

The question you have to answer is: how often will I have a look at these stats?

In my case, not very often (that's why the memory-only storage is a bad choice for me).

I was wondering if I should move the storage model to something like jrobin (a Java implementation of the Round Robin Database model).
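To illustrate the appeal of the round-robin model (this is only a toy sketch of the idea, not jrobin's actual API): a fixed-size ring of samples keeps memory usage constant no matter how long the process runs, because the newest sample overwrites the oldest.

    // Hypothetical sketch of the idea behind RRD/jrobin storage: a fixed-size
    // ring of samples, so heap usage is bounded regardless of uptime.
    public class RoundRobinStore {
        private final long[] timestamps;
        private final long[] values;
        private int next = 0;
        private int count = 0;

        public RoundRobinStore(int capacity) {
            this.timestamps = new long[capacity];
            this.values = new long[capacity];
        }

        public synchronized void add(long timestamp, long value) {
            timestamps[next] = timestamp;
            values[next] = value;
            next = (next + 1) % values.length;   // newest overwrites oldest
            if (count < values.length) {
                count++;
            }
        }

        // Average over the retained window, e.g. for a periodic stats report.
        public synchronized double average() {
            if (count == 0) {
                return 0.0;
            }
            long sum = 0;
            for (int i = 0; i < count; i++) {
                sum += values[i];
            }
            return (double) sum / count;
        }
    }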

I also recently discovered the perf4j project (http://perf4j.codehaus.org/).
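Basic perf4j usage, as I remember its StopWatch API (treat the details as an assumption and check the project docs), looks roughly like this: wrap the code you want to time in a LoggingStopWatch and stop it with a tag, so the timings for that tag can be grouped later.

    import org.perf4j.LoggingStopWatch;
    import org.perf4j.StopWatch;

    public class Perf4jExample {
        public void handleRequest() {
            // Timing starts when the stop watch is constructed; LoggingStopWatch
            // writes the timing record when stop() is called.
            StopWatch stopWatch = new LoggingStopWatch();
            try {
                doWork(); // placeholder for the code being measured
            } finally {
                // The tag groups timings so they can be aggregated by name later.
                stopWatch.stop("handleRequest");
            }
        }

        private void doWork() {
            // ... actual work ...
        }
    }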

Christophe Furmaniak
perf4j looks interesting. The logs are going to get pretty massive, so I wonder if there are tools available that can help you aggregate this information into something useful?
tinny