I've started writing many more unit tests for my code (something I should have been doing for much longer), as well as using code profilers such as EQATEC's to identify bottlenecks. I'm wondering if there's a correct method for monitoring performance in unit tests.

Obviously, a code profiler would be best for optimization, but what I'm really looking for is a way to make sure my most recent changes didn't accidentally kill my performance (by calling redundant functions, etc.), even if they didn't break any of the logic.

My first thought is to run my method (an insert of some sort, for example) many times to come up with the number of ticks it usually takes, then write a unit test that repeats the method calls and does Assert(elapsedTicks < magicNumberOfTicks). This seems too arbitrary, though, and I'm wondering what other developers have used in this situation.
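
Roughly what I have in mind is below (a minimal sketch assuming NUnit; the Repository class, its Insert method, and the tick budget are all made up):

    using System.Diagnostics;
    using NUnit.Framework;

    [TestFixture]
    public class InsertPerformanceTests
    {
        // Arbitrary tick budget; in practice it would be calibrated by
        // timing the operation repeatedly on a known machine.
        private const long MagicNumberOfTicks = 500000;

        [Test]
        public void Insert_StaysUnderTickBudget()
        {
            var repository = new Repository();    // hypothetical class under test
            var stopwatch = Stopwatch.StartNew();

            // Repeat the call to smooth out timer noise.
            for (int i = 0; i < 1000; i++)
            {
                repository.Insert(i);
            }

            stopwatch.Stop();
            Assert.That(stopwatch.ElapsedTicks, Is.LessThan(MagicNumberOfTicks),
                $"Insert took {stopwatch.ElapsedTicks} ticks; budget is {MagicNumberOfTicks}.");
        }
    }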

A: 

Here you might find a useful tool of your choice.

Regards.

StudiousJoseph
The question is about .NET code.
tster
+1  A: 

I would really suggest having separate regression tests and unit tests. The purpose of unit tests is code confidence: will it work? They should be quick and simple. To do real performance testing, you need a large enough sample size to make the results worthwhile. Doing this in unit tests would make them take too long, and then you may be tempted not to run them all the time.

For regression testing, though, a simple Excel macro should be sufficient (I know there are probably better tools), and you get the added bonus of already having your data in a spreadsheet, so you can easily compare runs across many tests.
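
For instance, appending each run's elapsed time to a CSV is enough to get the data into a spreadsheet. Here is a rough sketch (the TimingLog helper and the file name are placeholders I'm assuming, not part of any library):

    using System;
    using System.Diagnostics;
    using System.IO;

    public static class TimingLog
    {
        public static void Record(string testName, Action operation,
                                  string csvPath = "perf-history.csv")
        {
            var stopwatch = Stopwatch.StartNew();
            operation();
            stopwatch.Stop();

            // One row per run: timestamp, test name, elapsed milliseconds.
            File.AppendAllText(csvPath,
                $"{DateTime.Now:O},{testName},{stopwatch.ElapsedMilliseconds}{Environment.NewLine}");
        }
    }

Then a regression run is just something like TimingLog.Record("BulkInsert", () => repository.BulkInsert(rows)), and the CSV can be charted run over run.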

But if you still want to build performance testing into your unit tests, the magic-number idea might be your best bet. Assuming the assert error message includes both the expected and actual times, it will also give you a quick indication of how far off the run times are.
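
For example, something like this (a rough sketch assuming NUnit; the 200 ms baseline and the RunBulkInsert call are placeholders):

    using System.Diagnostics;
    using NUnit.Framework;

    [TestFixture]
    public class RegressionTimingTests
    {
        [Test]
        public void BulkInsert_DoesNotRegress()
        {
            const long expectedMs = 200;        // calibrated baseline (assumed figure)
            var stopwatch = Stopwatch.StartNew();

            RunBulkInsert();                    // hypothetical operation under test

            stopwatch.Stop();
            long actualMs = stopwatch.ElapsedMilliseconds;

            // The failure message reports both numbers so you can see how far off you are.
            Assert.That(actualMs, Is.LessThanOrEqualTo(expectedMs),
                $"Expected <= {expectedMs} ms but measured {actualMs} ms.");
        }

        private static void RunBulkInsert()
        {
            // Placeholder for the real operation being timed.
        }
    }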

Jack