At times I find a very brittle test to be a good thing, because when I change the intent of the code under test I want my unit test to break so that I'm forced to refactor. Is this approach not recommended when building a large suite of regression tests?

+4  A: 

IMO, as long as your tests make sure that your app code does what it should do, and fail when that behaviour changes, then your tests are fine. Could you define what exactly you mean by "brittle"?

Just make sure that your tests really cover every aspect of your app code. (Within reason).

Mark
+6  A: 

The general statement admonishing brittle unit tests applies mostly to shops that haven't fully embraced unit testing: for instance, when trying to convert from having no tests to having a full suite of unit tests, or when your project is the unit-testing pilot project. In these cases developers get used to false positives from unit tests and begin to ignore them. Then the unit tests fall behind the production code and either get left behind or require a major effort to update.

I would say you should always aim for the least brittle tests that still fully test your function or module, but if you have one or two that are brittle you should be okay in most cases.

Bryan Anderson
+9  A: 

Unit tests must be brittle -- it must be easy to break them. If they don't break, then they're not unit tests at all; they're code comments.

...

or am I missing the point of the question?


Edit: I should clarify my earlier answer.

I was being a bit pedantic about the language. "Brittle" just means "easy to break", and a unit test should be easy to break. The term "brittle tests" should really be "overly-brittle tests": tests that break when they shouldn't. Even so, it's much, much easier to fix an overly-brittle test than to fix a bug that slipped through an under-brittle one, so go ahead and write your brittle tests!
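
To make that distinction concrete, here is a minimal Python sketch (the format_price function is invented for illustration, not from this thread): the first test pins down incidental formatting details and breaks when it shouldn't, while the second still breaks if the rounding behaviour, which is the real intent, ever changes.

    import unittest


    def format_price(amount):
        """Hypothetical function under test: renders a price for display."""
        return "$ {:.2f}".format(amount)


    class PriceFormattingTest(unittest.TestCase):
        def test_overly_brittle(self):
            # Breaks if the spacing or currency symbol ever changes,
            # even though callers only care about the rounded value.
            self.assertEqual(format_price(3.456), "$ 3.46")

        def test_appropriately_brittle(self):
            # Still breaks if the rounding behaviour changes,
            # but tolerates cosmetic changes to the formatting.
            self.assertIn("3.46", format_price(3.456))


    if __name__ == "__main__":
        unittest.main()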

dysfunctor
+4  A: 

As dysfunctor points out, unit tests should be brittle in that they are easy to break. However, I would add that they should not be brittle in that they pass or fail randomly.

This happens a lot in tests that involve threads and sockets. Tests should make use of mutexes and other "wait" devices to avoid the tests failing under uncontrollable circumstances, such as high processor load.

A definite "smell" of a randomly-brittle test is the use of a sleep() function in a test.

metao
I would argue that your tests should be simulating the socket or thread work through the use of mocks. Testing actual socket communication is an integration test; a unit test should test that the implementation worked as expected, with the assumption that the socket communication worked. You say that use of "sleep" is a sign of a smell, but before that you say to use other "wait" devices, so which is it? I think any use of "wait" devices is a sign that this should really be an integration test, and the behavior should be split out into a unit test with mocks. My 2c.
Foovanadil
By wait devices, I mean any non-polling mechanism - callbacks, or a wait mutex, ideally. Or select() ;) I agree in general that mocks should definitely be used wherever possible.
metao
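
A sketch of the split Foovanadil suggests, assuming a hypothetical Greeter class that writes to an injected socket-like object: the unit test hands in a mock instead of a real socket, so the class's logic is exercised with no waiting at all, and real socket communication is left to an integration test.

    import unittest
    from unittest import mock


    class Greeter:
        """Hypothetical class under test: sends a greeting over an injected socket."""

        def __init__(self, sock):
            self.sock = sock

        def greet(self, name):
            self.sock.sendall(("Hello, %s\n" % name).encode("utf-8"))


    class GreeterUnitTest(unittest.TestCase):
        def test_greet_sends_expected_bytes(self):
            # The mock stands in for the socket, so the test exercises only
            # Greeter's logic; actual network behaviour belongs in an
            # integration test.
            fake_socket = mock.Mock()
            Greeter(fake_socket).greet("world")
            fake_socket.sendall.assert_called_once_with(b"Hello, world\n")


    if __name__ == "__main__":
        unittest.main()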