So I'm getting used to TDD, but I've come across an unexpected problem: I'm getting really tired of 100% code coverage. The tests are getting more tedious to write than the code itself, and I'm not sure if I'm doing it right. My question is: What sort of things are you supposed to test, and what sort of things are overkill?

For example, I have a test as follows, and I'm not sure if it's useful at all. What am I supposed to do so that I still follow TDD but don't get tired of writing tests?

describe 'PluginClass'

    describe '.init(id, type, channels, version, additionalInfo, functionSource, isStub)'

        it 'should return a Plugin object with correct fields'

            // Create test sets
            var testSets = new TestSets()
            var pluginData = {
                'id'                : null,
                'type'              : null,
                'channels'          : null,
                'version'           : null,
                'additionalInfo'    : null,
                'functionSource'    : null,
                'isStub'            : true
            }
            testSets.addSet({ 'pluginData' : pluginData })
            var pluginData = {
                'id'                : "testPlugin1",
                'type'              : "scanner",
                'channels'          : ['channelA', 'channelB'],
                'version'           : "1.0",
                'additionalInfo'    : {'test' : "testing"},
                'functionSource'    : "function () {alert('hi')}",
                'isStub'            : false
            }
            testSets.addSet({ 'pluginData' : pluginData })

            for (var t = 0; t < testSets.getSets().length; t ++) {
                var aTestSet = testSets.getSet(t)

                var plugin = new Plugin().init( aTestSet.pluginData.id,
                                                aTestSet.pluginData.type,
                                                aTestSet.pluginData.channels,
                                                aTestSet.pluginData.version,
                                                aTestSet.pluginData.additionalInfo,
                                                aTestSet.pluginData.functionSource,
                                                aTestSet.pluginData.isStub  )

                plugin.getID().should.eql aTestSet.pluginData.id
                plugin.getType().should.eql aTestSet.pluginData.type
                plugin.getChannels().should.eql aTestSet.pluginData.channels
                plugin.getVersion().should.eql aTestSet.pluginData.version
                plugin.getAdditionalInfo().should.eql aTestSet.pluginData.additionalInfo
                eval("fn = " + aTestSet.pluginData.functionSource)
                JSON.stringify(plugin.getFunction()).should.eql JSON.stringify(fn)
                plugin.getIsStub().should.eql aTestSet.pluginData.isStub
            }

        end

    end

end
+5  A: 

Certainly the above 'test' is overkill in many respects. It is much too long and complicated, hardly readable, and asserts way too many things. I can hardly imagine how this could have emerged from a TDD process. It is not surprising that you get tired of stuff like this...

Test-driven development means: You should go in baby steps, where every step is a separate test, asserts only one thing, and contains absolutely no logic (i.e. no for, if/else or similar...). So the above code would result in about 4-6 separate test methods, which you would then implement one by one. First assert correct property initialization (with different values as required), then make sure that the methods work as expected, and so on...
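For illustration only, here is a rough sketch of what such a split might look like, reusing the JSpec-style syntax, the init() signature and the field values from your own example (the exact grouping is of course up to you):

describe '.init(id, type, channels, version, additionalInfo, functionSource, isStub)'

    it 'should set the id'
        var plugin = new Plugin().init("testPlugin1", null, null, null, null, null, false)
        plugin.getID().should.eql "testPlugin1"
    end

    it 'should set the type'
        var plugin = new Plugin().init(null, "scanner", null, null, null, null, false)
        plugin.getType().should.eql "scanner"
    end

    it 'should set the channels'
        var plugin = new Plugin().init(null, null, ['channelA', 'channelB'], null, null, null, false)
        plugin.getChannels().should.eql ['channelA', 'channelB']
    end

    // ...one small, loop-free test per remaining field
end

Each of these asserts exactly one thing, so when one fails you know immediately which field is broken.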

The code coverage metric does not tell you anything about your tests except that it can show you the production code that is not touched by any tests at all. In particular, it doesn't tell you whether the touched code is really tested (and not merely touched...). That depends only on the quality of your tests. So don't take code coverage too seriously; there are many cases where lower coverage with better tests is much preferable...

In sum: It is not overkill to have tests for just about everything (100% coverage), but it certainly is a problem to have tests like in your example.

I recommend reviewing your TDD/unit-testing practice; the book The Art of Unit Testing might be a good resource...

HTH!
Thomas

Thomas Weller
+1 for recommending *The Art of Unit Testing* - it's a great book that gives really good advice.
Bevan
A: 

The goal of unit testing should be to test the parts of the code that are likely to contain bugs. Achieving 100% test coverage should not be a goal, and AFAIK, TDD does not call it out as a goal.

Creating exhaustive tests for code that is unlikely to contain significant bugs is a tedious waste of time, both now and as your system evolves. (In the future, duplicative unit tests are likely to produce meaningless regressions in the tests themselves that just waste someone's time to find and fix.)

Finally, you and your management should always use your common sense when applying some development methodology to a project. No methodology ever invented will be a perfect fit to all problems / projects. Part of your job is to spot the situations where the methodology is not working optimally ... and if necessary adapt, or even abandon it. And in this case, the fact that the way that you/your project is using TDD is driving you nuts is a clear sign that something is not right.

Stephen C
+2  A: 

One thing that I think people forget is that automated unit testing is still a coding practice. It's perfectly acceptable to set up templated/generic classes, base classes, helper classes, or any other regular software development patterns that you're familiar with to do your unit testing. If you feel like you're doing the same thing over and over, you probably are! It's a sign that your brain is telling you: "There's a better way".
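For example, a small factory helper along these lines (createTestPlugin is a made-up name, and the default values are simply lifted from your test data) removes most of the repetition from tests that only care about one or two fields:

// Hypothetical helper: builds a Plugin with sensible defaults,
// overriding only the fields a particular test cares about.
function createTestPlugin(overrides) {
    var defaults = {
        'id'             : "testPlugin1",
        'type'           : "scanner",
        'channels'       : ['channelA', 'channelB'],
        'version'        : "1.0",
        'additionalInfo' : {'test' : "testing"},
        'functionSource' : "function () {alert('hi')}",
        'isStub'         : false
    }
    for (var key in overrides) { defaults[key] = overrides[key] }
    return new Plugin().init(defaults.id, defaults.type, defaults.channels,
                             defaults.version, defaults.additionalInfo,
                             defaults.functionSource, defaults.isStub)
}

// A test then only spells out what it is actually about:
// var plugin = createTestPlugin({ 'isStub' : true })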

So go for it.

Robert P
Very true. Usually this is a sign that there's something wrong somewhere.
obelix
A: 

It seems that you're

  • testing a constructor that does simple member assignment. This is overkill unless there is some non-trivial logic in the constructor. Instead, rely on this getting tested as part of other tests in which the members are used.
  • comparing equality between two value objects. Define the equality check on the plugin type/class itself by overriding the equals method, so that plugin.should.eql expected_plugin can be the solitary assert (see the sketch after this list).
  • also consider using helper objects to construct complex test data.
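A rough, hypothetical sketch of the second point (whether should.eql will pick up a custom equals depends on your assertion library, so a fallback assert is shown as well):

// Hypothetical sketch: equality defined on the Plugin class itself,
// comparing the fields exposed by the getters in the question.
Plugin.prototype.equals = function (other) {
    return this.getID()      === other.getID()
        && this.getType()    === other.getType()
        && this.getVersion() === other.getVersion()
        && this.getIsStub()  === other.getIsStub()
        && JSON.stringify(this.getChannels())       === JSON.stringify(other.getChannels())
        && JSON.stringify(this.getAdditionalInfo()) === JSON.stringify(other.getAdditionalInfo())
}

// The test body then shrinks to a single assert:
// plugin.should.eql expectedPlugin                  // if the library does deep equality
// plugin.equals(expectedPlugin).should.eql true     // otherwise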
Gishu
+1  A: 

Your test is trying to verify too many things at once. Split it up into several tests and refactor your test code too (yes, helper methods are allowed).

I also get the impression that the code under test is doing too much. Decouple your code (refactor using Extract Class, Extract Method, etc.) and test each piece of production code in isolation. You'll find that those tests become fewer, simpler, and easier both to read and to write.
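As one purely hypothetical example based on your test: the functionSource handling (the eval in your assert block) could be extracted into its own small unit and tested in isolation:

// Hypothetical extracted unit: turning functionSource into a callable
// function becomes its own, separately testable piece.
function compileFunctionSource(source) {
    return eval("(" + source + ")")
}

describe 'compileFunctionSource(source)'

    it 'should return a callable function'
        var fn = compileFunctionSource("function () { return 42 }")
        fn().should.eql 42
    end

end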

Mahol25