I've recently started writing many more unit tests for my code (something I should have been doing long ago), as well as using code profilers such as EQATEC's to identify bottlenecks. I'm wondering: is there a correct method for monitoring performance in unit tests?
Obviously, a code profiler is best for optimization, but what I'm really looking for is a way to make sure my most recent changes didn't accidentally kill performance (by calling redundant functions, etc.), even if they didn't break any of the logic.
My first thought is to run a method (an insert of some sort, for example) many times to establish the number of ticks it usually takes, then write a unit test that repeats those method calls and does `Assert(elapsedTicks < magicNumberOfTicks)`. That magic number seems too arbitrary, though, and I'm wondering what other developers have used in this situation?
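To make the idea concrete, here is a minimal sketch of that tick-threshold approach (written in Java rather than C# for illustration; `insertRecord` is a hypothetical stand-in for the method under test, and the threshold is an assumption you would calibrate on your own machine):

```java
import java.util.ArrayList;
import java.util.List;

public class InsertPerfGuard {

    // Hypothetical stand-in for the method whose performance is being guarded.
    static void insertRecord(List<Integer> target, int value) {
        target.add(value);
    }

    // Time `iterations` repeated calls and return elapsed nanoseconds.
    static long timeInserts(int iterations) {
        List<Integer> target = new ArrayList<>();
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            insertRecord(target, i);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long elapsed = timeInserts(100_000);

        // The "magic number" budget -- an assumed value; in practice you'd
        // derive it from baseline runs on the machine that executes the tests.
        long budgetNanos = 500_000_000L; // 500 ms

        if (elapsed >= budgetNanos) {
            throw new AssertionError(
                "100k inserts took " + elapsed + " ns, over budget of " + budgetNanos);
        }
        System.out.println("within budget: " + elapsed + " ns");
    }
}
```

Note the weakness the question points out: the budget is tied to the hardware and load of whatever machine runs the test, which is exactly why the threshold feels arbitrary.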