views: 126

answers: 5

I'm reading Osherove's "The Art of Unit Testing," and though I've not yet seen him say anything about performance testing, two thoughts still cross my mind:

  • Performance tests generally can't be unit tests, because performance tests generally need to run for long periods of time.
  • Performance tests generally can't be unit tests, because performance issues too often manifest at an integration or system level (or at least the logic a single test would need in order to re-create the performance characteristics of the integration environment would be too involved for it to remain a unit test).

Particularly for the first reason stated above, I doubt it makes sense for performance tests to be handled by a unit testing framework (such as NUnit).

My question is: do my findings / leanings correspond with the thoughts of the community?

A: 

Performance tests might very well be made up of unit tests.

For example, a unit test might throw several different parameters into a method and verify the method returns an expected output. A performance test might execute that unit test 1000 times (or whatever value makes sense for you) while recording everything from CPU and memory counters right down to how long each test took.
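To make that concrete, here is a minimal sketch in C# with NUnit; the Calculator type, the iteration count, and the time budget are all invented for illustration:

using System.Diagnostics;
using NUnit.Framework;

// Hypothetical system under test, defined inline so the sketch is self-contained.
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

[TestFixture]
public class CalculatorPerformanceTests
{
    private readonly Calculator _calculator = new Calculator();

    [Test]
    public void Add_ReturnsExpectedSum()
    {
        // The plain unit test: known inputs, expected output.
        Assert.AreEqual(5, _calculator.Add(2, 3));
    }

    [Test]
    public void Add_CompletesManyIterationsWithinBudget()
    {
        // The performance variant: repeat the same operation many times
        // and record how long the whole batch took.
        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < 1000; i++)
        {
            _calculator.Add(i, i + 1);
        }
        stopwatch.Stop();

        // The 500 ms budget is arbitrary; tune it (or simply log the value,
        // along with any CPU/memory counters you care about) for your environment.
        Assert.Less(stopwatch.ElapsedMilliseconds, 500);
    }
}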

Chris Lively
A: 

I agree with your findings/leanings. True unit tests only test a portion of the system, ignoring, mocking or faking the rest as necessary. Integration tests (or regression tests) test most or all of the units working together, and that is the true measure of performance.

Kaleb Brasee
I've often called the system-wide tests "functional tests." I don't know how that jibes with "the industry," but in my company the vernacular stuck. It was a bit tricky to get into place, though, since we needed some "external" system to be able to drive the application. The application ran on dedicated hardware, and inputs came from custom ports that were not feasible to drive directly from the host PC. It was definitely worth it in the end, though.
dash-tom-bang
+1  A: 

In some situations you can use unit tests to make sure that an operation finishes within a certain time period. If you want to add more features to your operation but don't want to sacrifice performance, you can use unit tests to assert that. Of course, these kinds of unit tests are machine dependent, but you can throw some additional variables or configuration into the equation.
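For example (a sketch only; the operation, the environment variable, and the default budget are made up), NUnit can express the time limit either declaratively with MaxTime or explicitly with a stopwatch and a configurable threshold:

using System;
using System.Diagnostics;
using System.Threading;
using NUnit.Framework;

[TestFixture]
public class OperationTimingTests
{
    // NUnit fails the test if it takes longer than 2000 ms to run.
    [Test, MaxTime(2000)]
    public void ProcessBatch_FinishesWithinTwoSeconds()
    {
        RunTheOperation();
    }

    [Test]
    public void ProcessBatch_FinishesWithinConfiguredBudget()
    {
        // Pull the budget from the environment so slower machines can relax it.
        int budgetMs;
        if (!int.TryParse(Environment.GetEnvironmentVariable("PERF_BUDGET_MS"), out budgetMs))
        {
            budgetMs = 2000;
        }

        var stopwatch = Stopwatch.StartNew();
        RunTheOperation();
        stopwatch.Stop();

        Assert.Less(stopwatch.ElapsedMilliseconds, budgetMs);
    }

    private static void RunTheOperation()
    {
        // Stand-in for the real operation being guarded.
        Thread.Sleep(10);
    }
}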

serega
+1  A: 

It all depends on what you call performance testing. When micro-optimizing specific code I usually use something very similar to unit testing (should I call it unit performance testing?). That's basically what I do in this question (though I didn't bother to use a unit test framework there). But I also do this kind of thing to optimize my C++ production code within the Boost unit testing framework.

Really, there are many kinds of performance testing, at different levels and with different purposes (heavy-load stress testing, profiling, micro-optimization). The performance testing you are speaking of in your question seems to be at the functional-testing level, a level for which you probably won't use a unit testing framework anyway.
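kriss is describing C++ and Boost, but the same "unit performance test" idea, transplanted into the .NET context of the question, might look like the following sketch (the two string-building approaches and the comparison are purely illustrative):

using System.Diagnostics;
using System.Text;
using NUnit.Framework;

[TestFixture]
public class MicroOptimizationTests
{
    // A classic micro-optimization check: naive string concatenation vs StringBuilder.
    [Test]
    public void StringBuilder_IsNotSlowerThanConcatenation()
    {
        const int iterations = 5000;

        var concatWatch = Stopwatch.StartNew();
        var concatenated = string.Empty;
        for (int i = 0; i < iterations; i++)
        {
            concatenated += "x";
        }
        concatWatch.Stop();

        var builderWatch = Stopwatch.StartNew();
        var builder = new StringBuilder();
        for (int i = 0; i < iterations; i++)
        {
            builder.Append("x");
        }
        var built = builder.ToString();
        builderWatch.Stop();

        // Sanity check that both paths produced the same result.
        Assert.AreEqual(concatenated, built);

        // Crude timing comparison; for anything serious, a dedicated
        // benchmarking tool is a better fit than a unit test assertion.
        Assert.LessOrEqual(builderWatch.ElapsedTicks, concatWatch.ElapsedTicks);
    }
}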

kriss
A: 

Unit tests should take almost no time to execute, because you are only testing one very specific unit of the system. For example, if your system under test is ClassA : IClassA, you do your mocking / stubbing and test only the behaviour of ClassA; you should not be testing behaviour outside ClassA, such as what happens when ClassA uses ClassB. To achieve this, you inject a mock of ClassB instead of the concrete implementation.
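A sketch of that isolation, using the thread's ClassA / IClassA / ClassB names with minimal invented members and a hand-rolled stub (a mocking library such as Moq would work just as well):

using NUnit.Framework;

// The thread's example types, given minimal invented members so the sketch compiles.
public interface IClassB
{
    int GetValue();
}

public class ClassB : IClassB
{
    public int GetValue() { return 42; } // imagine a slow or external call here
}

public interface IClassA
{
    int DoubledValue();
}

public class ClassA : IClassA
{
    private readonly IClassB _b;

    public ClassA(IClassB b) { _b = b; } // dependency injected, not newed up internally

    public int DoubledValue() { return _b.GetValue() * 2; }
}

[TestFixture]
public class ClassATests
{
    // Hand-rolled stub standing in for ClassB, so only ClassA's behaviour is exercised.
    private class StubClassB : IClassB
    {
        public int GetValue() { return 10; }
    }

    [Test]
    public void DoubledValue_DoublesWhateverClassBReturns()
    {
        var sut = new ClassA(new StubClassB());
        Assert.AreEqual(20, sut.DoubledValue());
    }
}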

In terms of performance tests, it still makes sense to use a testing framework like NUnit / MBUnit / MavenThought; just keep these tests in a separate assembly and don't invoke them as part of your unit test run.
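One way to make that separation visible to the framework itself (a sketch; the category name and test are invented) is to tag the slow tests so a runner can include or exclude them:

using NUnit.Framework;

[TestFixture]
[Category("Performance")] // skipped by the fast unit-test run, selected by the nightly job
public class CheckoutPerformanceTests
{
    [Test]
    public void Checkout_HandlesLargeBasketWithinBudget()
    {
        // ...long-running measurement goes here...
    }
}

With NUnit 3's console runner, for instance, the nightly task can select just these tests with --where "cat == Performance" while the regular build leaves them out; separate assemblies achieve the same split at a coarser grain.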

So if you use Rake to invoke your tests, some of your tasks might look like:

Rake Test:All         #Run all unit tests
Rake Test:Acceptance  #Run all acceptance tests
Rake Test:Performance #Run all performance tests
Rake Test:Integration #Run all integration tests

Then, with your continuous integration, Test:All is always invoked after a successful build, whereas Test:Performance is invoked once a day at 12am.

Sean B
I like what you are saying, but again, the performance tests I need to run are actually integration tests - such as communication speed between a client and a server. Correct me, but I don't think a unit testing framework is prepared to deploy the client and server code, launch them separately, and report what the client observed while hammering the server.
Brent Arias
You could have the server implementation built and deployed to a specified location (virtual box, or remote server), and then all the performance/integration tests would be executed against that location. There are packages for deploying installations.
Sean B
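In that kind of setup the test body is still ordinary test code; it simply targets the deployed endpoint. A rough sketch (the environment variable, URL, endpoint, request count, and budget are all placeholders):

using System;
using System.Diagnostics;
using System.Net.Http;
using NUnit.Framework;

[TestFixture]
[Category("Performance")]
public class ClientServerLatencyTests
{
    // Wherever the server build was deployed (virtual machine, remote box, etc.).
    private static readonly Uri ServerUrl =
        new Uri(Environment.GetEnvironmentVariable("PERF_SERVER_URL") ?? "http://localhost:8080/");

    [Test]
    public void Server_AnswersOneHundredRequestsWithinBudget()
    {
        using (var client = new HttpClient { BaseAddress = ServerUrl })
        {
            var stopwatch = Stopwatch.StartNew();
            for (int i = 0; i < 100; i++)
            {
                // Hypothetical endpoint; replace with whatever the client actually calls.
                var response = client.GetAsync("ping").Result;
                response.EnsureSuccessStatusCode();
            }
            stopwatch.Stop();

            // Arbitrary budget, purely for illustration.
            Assert.Less(stopwatch.ElapsedMilliseconds, 5000);
        }
    }
}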