views: 102
answers: 1

Is there any way to do automated profiling of unit tests when we run them via TeamCity?

The reason I'm asking is that while we should, and most of the time do, focus on not writing performance-wise bad code, sometimes code slips through that seems OK, and indeed works correctly, but the routine is used in multiple places, and in some cases the elapsed run-time of a method now takes 10x the time it did before.

This is not necessarily a bug, but it would be nice to be told: "Hey, did you know? One of your unit tests now takes 10x the time it did before you checked in this code."

So I'm wondering, is there any way to do this?

Note that I say TeamCity because that's what will ultimately run the code/tools (if something is found), but of course it could be a wholly standalone tool that we integrate ourselves.

I also see that TeamCity is gathering elapsed-time statistics for our unit tests, so my thought was that perhaps there is a tool that could analyze that set of data, comparing the latest elapsed time against the statistical trend, etc.
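
The kind of trend check described above can be sketched in a few lines. This is a hypothetical illustration, not an existing tool: the test names, timings, and threshold are invented, and real data would have to come from TeamCity's per-test statistics (e.g. an export of elapsed times per build).

```python
# Hypothetical sketch: flag unit tests whose latest elapsed time
# deviates sharply from their historical trend. All test names and
# timings below are made up for illustration.

from statistics import mean

def find_regressions(history, threshold=2.0):
    """Return tests whose latest run is `threshold` times slower
    than the mean of their earlier runs.

    history: dict mapping test name -> list of elapsed times (ms),
             oldest first, latest last.
    """
    regressions = {}
    for test, times in history.items():
        if len(times) < 2:
            continue  # not enough data to establish a trend
        baseline = mean(times[:-1])
        latest = times[-1]
        if baseline > 0 and latest / baseline >= threshold:
            regressions[test] = (baseline, latest)
    return regressions

if __name__ == "__main__":
    sample = {
        "CustomerTests.CanSave": [12, 11, 13, 12, 125],  # ~10x slower
        "OrderTests.Totals":     [40, 42, 41, 43, 44],   # stable
    }
    for name, (base, last) in find_regressions(sample).items():
        print(f"{name}: {base:.0f} ms -> {last} ms")
```

A script like this could run as a final build step and fail (or just notify) when the returned dict is non-empty, which would give exactly the "warn me when a test gets 10x slower" behaviour without anyone watching graphs.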

Perhaps it's as "easy" as making our own test-runner program?

Has anyone done this, or seen/know of a potential solution for this?

+1  A: 

I'm running TeamCity Professional Version 4.5.5 (build 9103). Does the "test" tab under each individual build do what you need? I'm seeing statistical trends for each test as a function of either each build or averaged over time.

Dave Sims
Well, no, it doesn't. I know it gathers the statistics I need, but I don't get any kind of warning about bad trends without going and looking at all those graphs myself. I'd like an email or a report that contains only the possible issues. In other words, I want something automated that looks at those graphs/numbers and warns me when certain patterns emerge, so that I don't have to check them manually.
Lasse V. Karlsen
When any test fails, the whole build fails! Isn't that a warning? Also, you can set up build-failure notifications via the settings tab.
Sergey Mirvoda
Also, performance should be part of the test itself; each testing framework has its own solution for this.
Sergey Mirvoda
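
Sergey's suggestion of making performance part of the test can be sketched like this. It's a minimal illustration, not a recommendation of a specific framework: the routine and the 50 ms budget are invented, and in practice the budget would be set per test.

```python
# Hypothetical sketch: bake a time budget into the unit test itself,
# so a big slowdown fails the build like any other test failure.
# The routine under test and its budget are invented for illustration.

import time
import unittest

def routine_under_test():
    # stand-in for the real production code being exercised
    return sum(range(1000))

class PerformanceTest(unittest.TestCase):
    def test_routine_stays_within_budget(self):
        start = time.perf_counter()
        result = routine_under_test()
        elapsed = time.perf_counter() - start
        self.assertEqual(result, 499500)  # still correct
        # Fail if the routine exceeds its time budget (here 50 ms).
        self.assertLess(elapsed, 0.05)
```

Run with `python -m unittest`. The trade-off is that wall-clock budgets are sensitive to build-agent load, so they catch only gross regressions (like the 10x case in the question) rather than subtle trends.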