
I've been a practitioner of test-driven development for several years, and overall I'm happy with it. The one part that I don't yet understand is the idea that you should always be unit testing the 'smallest possible unit'.

Part of the idea of unit testing seems to be to allow you to refactor with confidence that you won't break anything. However, I find that tests which cover very small pieces of code almost never survive these refactorings; the code changes significantly enough that the small unit tests just get thrown away and new tests are written. It is the tests that cover larger pieces of functionality that seem to give the most value here, since the higher-level interfaces don't change as often.
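
To make that concrete, here's a minimal Java/JUnit 5 sketch of what I mean (CsvParser and its methods are invented names, just for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    class CsvParserTest {

        // Pinned to a tiny internal helper: the first time that helper is
        // renamed, split, or inlined during a refactoring, this test dies
        // with it and gets rewritten from scratch.
        @Test
        void splitLine_handlesQuotedFields() {
            assertEquals(List.of("a", "b,c"), CsvParser.splitLine("a,\"b,c\""));
        }

        // Pinned to the stable higher-level interface: the parsing
        // internals can be reorganized freely and this test keeps its value.
        @Test
        void parse_returnsOneRecordPerLine() {
            assertEquals(2, new CsvParser().parse("a,\"b,c\"\nd,e").size());
        }
    }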

And trivial refactorings, like moving methods around, are just done via the IDE, and since I'm using a statically typed language, I've never run into a situation where the IDE couldn't perform the refactoring perfectly.

Anyone else have similar or opposite experiences?

+2  A: 

It's a granularity issue, like Goldilocks and the three bears. You want something that is not too small, not too large, but just right.

If the granularity is too small, you may find the tests are a waste of time. If it is too large, the tests may miss important constraints that should hold across a refactoring or reconfiguration.

Like any "best practice", these ideas are often developed in theory and require some common sense and tailoring to your particular situation to be useful for you.

Larry Watanabe
+9  A: 

I've found the same thing - but one thing I think is important to differentiate is between private units of code and publicly accessible units of code. I do think that it is important to always unit test the smallest possible usable unit of code exposed in the public API.

The public API should not change during refactorings (since changing it breaks binary compatibility and versioning), so this issue shouldn't arise there.

As for the private API, there's a balance here. The smaller the unit you test, the more strongly you can rely on your tests. The higher-level your tests become, the more flexible they are, and the more likely they are to survive a refactoring.

That being said, I believe both are important. A large-scale refactoring will always require reworking tests - that's just part of testing in general.
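
For example (a hypothetical Java sketch; PriceFormatter and its rounding rule are invented), testing the smallest public unit looks like this:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class PriceFormatterTest {

        // Only the public contract is pinned down. The private rounding
        // helper inside PriceFormatter can be renamed, inlined, or
        // replaced without touching this test.
        @Test
        void format_roundsToTwoDecimalPlaces() {
            assertEquals("19.99", new PriceFormatter().format(19.994));
        }
    }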

Reed Copsey
+2  A: 

Seems to me that the smaller the unit of code under test, the more information you get from a test failure. If you have a higher-level test that covers a larger piece of code, a failure tells you less about where the problem is.
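
For instance (a hypothetical Java sketch; Money and Invoice are invented names):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class GranularityTest {

        // Fine-grained: a red bar here points straight at the rounding rule.
        @Test
        void round_halfUp() {
            assertEquals(2, Money.round(1.5));
        }

        // Coarse-grained: a red bar here could mean a bug in parsing, tax
        // lookup, rounding, or formatting - much more code to dig through.
        @Test
        void total_endToEnd() {
            assertEquals("$3.00", new Invoice("2 x $1.50").total());
        }
    }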

jbourque
So what? Test failures are a relatively rare event - I can't see the time invested in maintaining microtests really paying off.
Michael Borgwardt
Fair enough. I mean, there's certainly a point of diminishing returns.
jbourque
+1  A: 

Most of the time I only unit test public classes and methods, because I think that, as you said, private members are too volatile and subject to change.

A modification to private or internal members indicates a change to the inner algorithm, whereas a modification to public members indicates a semantic change. If I think that changing a private member changes the semantics of my class, then maybe that member shouldn't be private after all.

A bug introduced while refactoring the inner algorithm of your class will, 90% of the time, break tests at the semantic level, and most of the time, if you test early and often, the bug is found quickly.
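
A minimal sketch of what I mean (hypothetical Java example; Sorter is an invented name): the test pins the semantics, not the algorithm.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    class SorterTest {

        // Pins the semantics ("the output is ordered"), not the algorithm.
        // Swapping the private implementation from insertion sort to merge
        // sort is invisible here, but a bug introduced in the rewrite
        // still turns the test red.
        @Test
        void sort_ordersElementsAscending() {
            assertEquals(List.of(1, 2, 3), Sorter.sort(List.of(3, 1, 2)));
        }
    }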

Nicolas Dorier
+1  A: 

It doesn't sound like you are doing true test-driven development, which requires an iterative cycle: write a test for a small piece of functionality, create the functionality to satisfy the test, then refactor to remove any duplication the test/code may have added. It sounds like you are testing after the fact ("the code always changes significantly enough that small unit tests just get thrown away"). If a test is a specification of functionality (as it is in TDD), refactoring would never cause a test to "not survive".
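
A minimal sketch of that cycle in Java (FizzBuzz is just a stand-in example):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class FizzBuzzTest {

        // Red: written first, this test is the specification of behavior.
        @Test
        void say_returnsFizzForMultiplesOfThree() {
            assertEquals("Fizz", FizzBuzz.say(9));
        }

        // Green: write just enough production code to make it pass.
        // Refactor: remove duplication. Because the test specifies
        // behavior rather than structure, refactoring never invalidates it.
    }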

So, assuming you are not really doing TDD, you are struggling with the trade-off of how much test code to write versus how much time to spend developing production code. I would say, write enough test code so you know your code does what it is supposed to do. If you can do that with more coarse-grained tests, that's fine, though as others have said, that makes it more difficult to know what causes a failure.

Testing is not just for refactoring. It's how you know when you are done. It's so you can add new functionality with confidence that you won't break the old. It's so that, after you are long gone, someone else can come in, understand your code, change it, and be confident it works.

I do recommend you follow the TDD practice as described by Kent Beck. Writing tests after the fact is better than no tests, but I find it a much less productive practice than TDD.

SingleShot
+1  A: 

I've been following more of a BDD approach, where I end up testing outcomes rather than functionality as such. You are still testing functionality, but as measured by the expected outcomes. I find this makes the tests more meaningful, less brittle, and more applicable, and I end up writing fewer of them.
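
Something like this (a hypothetical Java sketch in given/when/then style; Cart, Item, and Order are invented names):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class CheckoutTest {

        // Asserts the observable outcome, not the internal calls that
        // produced it, so the test survives reshuffling of the internals.
        @Test
        void ordersOverFiftyDollarsShipFree() {
            Cart cart = new Cart();                // given an empty cart
            cart.add(new Item("book", 60.00));     // when it holds $60 of goods
            Order order = cart.checkout();
            assertEquals(0.00, order.shippingCost(), 0.001); // then shipping is free
        }
    }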

Josh