In the old (single-threaded) days we instructed our testing team to always report the CPU time of an application and not the real time (wall-clock time). That way, if they said that an action took 5 CPU seconds in version 1 and 10 CPU seconds in version 2, we knew we had a problem.

Now, with more and more multi-threading, this doesn't seem to make sense anymore. It could be that version 1 of an application takes 5 CPU seconds and version 2 takes 10 CPU seconds, yet version 2 is still faster, because version 1 is single-threaded while version 2 uses 4 threads (each consuming 2.5 CPU seconds).

On the other hand, using real time to compare performance isn't reliable either, since it can be influenced by lots of other factors (other applications running, network congestion, a very busy database server, a fragmented disk, ...).
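To illustrate the divergence between the two metrics, here is a minimal sketch (POSIX C, compile with `-pthread`) that measures both wall-clock time and total process CPU time around a multi-threaded workload. The `work()` function is just a made-up CPU-bound placeholder, and the behavior of `clock()` (summing CPU time across all threads of the process) is assumed as on Linux; neither comes from the question itself.

    /* Sketch: wall-clock time vs. total CPU time for a multi-threaded job. */
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    #define NTHREADS 4

    static void *work(void *arg)
    {
        (void)arg;
        volatile double x = 0.0;
        for (long i = 0; i < 100000000L; i++)
            x += i * 0.5;                      /* burn CPU */
        return NULL;
    }

    int main(void)
    {
        struct timespec t0, t1;
        pthread_t threads[NTHREADS];

        clock_gettime(CLOCK_MONOTONIC, &t0);   /* wall-clock start */
        clock_t cpu0 = clock();                /* process CPU time start */

        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&threads[i], NULL, work, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);

        clock_gettime(CLOCK_MONOTONIC, &t1);
        clock_t cpu1 = clock();

        double wall = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double cpu  = (double)(cpu1 - cpu0) / CLOCKS_PER_SEC;

        /* With 4 busy threads, cpu is roughly 4x wall: more CPU seconds,
         * but the task finishes sooner than a single-threaded run would. */
        printf("wall: %.2f s, cpu: %.2f s\n", wall, cpu);
        return 0;
    }

On a 4-core machine this typically prints a CPU time around four times the wall-clock time, which is exactly the situation where "report CPU seconds" makes version 2 look worse even though it finishes sooner.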

What is, in your opinion, the best way to quantify performance? Hopefully it's not intuition, since that is not an objective value and will probably lead to conflicts between the development team and the testing team.

+1  A: 

Performance needs to be defined before it is measured.

Is it:

  • memory consumption?
  • task completion times?
  • disk space allocation?

Once defined, you can decide on metrics.

Oded
Good point. The problem is that testing people tend to report anything that increases: CPU time, resource usage, memory, ... Time to discuss this with the testers. Thanks.
Patrick