I have noticed over the years that I tend to write maybe a screenful of code, then test to make sure it does what it should.

Some of the benefits of this technique are:

Syntax errors are a result of the new code, so you don't have to look far to find the cause.

It is cheap to set up a temporary condition that lets you test the else clause of an if statement, so you can be sure to get error messages and the like correct while they are cheap to test.
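
For example (a hypothetical Python sketch; the function and message are made up), flipping the condition for one run lets me see the error path immediately:

    import os

    def load_config(path):
        # Temporarily change this test to "if False:" for one run to
        # exercise the else branch and eyeball the error message.
        if os.path.exists(path):
            with open(path) as f:
                return f.read()
        else:
            raise FileNotFoundError(
                "config file not found at %r; check the --config option" % path)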

How do you tend to code?
What benefits do you get by doing it that way?

EDIT: Like most of my questions, I really haven't set the context well enough. I am not really talking about unit-test-level granularity. I am referring to making sure the local bit of code does exactly what I intend it to at the time of implementation.

+13  A: 

I'd like to say I always write a unit test before I write the corresponding code to pass it, but I'd be lying.

Kent Boogaart
Why? Pure semantics. The key is to decide what the code should do first. Whether you write the test to verify the desired behavior or the code to produce the desired behavior first really makes no difference.
Tall Jeff
Writing the test first aids the design because it forces you to write code against the APIs you're building. It can totally change the way you implement the public-facing APIs.
Kent Boogaart
+2  A: 

I tend to code until I have something that should produce a well-defined observable behavior. Usually, this is a single public API function, sometimes a full class. This also encourages me to break down the problem into small functions with well-defined observable behavior. Most of my functions are smaller than a full screen. If a function is too complex to test, then it's probably badly designed from other perspectives anyhow.
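
To illustrate the scale I mean, a hypothetical Python sketch (the function itself is invented): one small, well-defined observable behavior that can be checked in isolation:

    def running_totals(numbers):
        """Return the cumulative sums of numbers: one well-defined,
        observable behavior that can be verified on its own."""
        totals, acc = [], 0
        for n in numbers:
            acc += n
            totals.append(acc)
        return totals

    assert running_totals([1, 2, 3]) == [1, 3, 6]
    assert running_totals([]) == []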

dsimcha
A: 

Depends on the size/scale of the project. If it's a short program (trivial to compile and run), I will test it early and often, any time I add new functionality. This lets me catch most errors quickly.

In a large project (company-sized), I'll test my piece in isolation like this, if I can. Otherwise, I pay attention to the tests on the daily builds.

In short, test as often as possible, so long as the compile/run time doesn't take so long you consider taking up office swordfighting!

bigwoody
A: 

I tend to test each feature of a program. Not each function, but a series of functions that form a feature. The benefit of this approach is that I don't have the overhead of testing each individual function, but can test them together, one feature after another.
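
For example (hypothetical Python; the feature is invented), one feature-level check can exercise the whole series of functions at once:

    def parse_order(line):
        item, qty = line.split(",")
        return item.strip(), int(qty)

    def total_quantity(lines):
        return sum(qty for _item, qty in map(parse_order, lines))

    # One check for the feature exercises both functions together.
    assert total_quantity(["apples, 3", "pears, 4"]) == 7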

PoweRoy
A: 

You have it backwards.

Steven A. Lowe
A: 

The project I am on now is supposed to be unit-test-first, then development, and for the most part it is, but the person writing the test and the person implementing are not always on the same page.

So I like having a unit test that checks the main functionality of the method, and then having the person implementing the code write several unit tests checking the various edge cases.
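
Concretely, the split might look like this (a hypothetical Python/unittest sketch; clamp stands in for the method under test):

    import unittest

    def clamp(value, low, high):
        return max(low, min(value, high))

    class ClampMainBehaviour(unittest.TestCase):
        # The up-front test for the main functionality.
        def test_value_inside_range_is_unchanged(self):
            self.assertEqual(clamp(5, 0, 10), 5)

    class ClampEdges(unittest.TestCase):
        # Added by the implementer, who knows where the edges are.
        def test_values_at_the_boundaries(self):
            self.assertEqual(clamp(0, 0, 10), 0)
            self.assertEqual(clamp(10, 0, 10), 10)

    if __name__ == "__main__":
        unittest.main()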

"sometimes the person writing the test and the person implementing" there's your problem. in TDD the same person writes the tests and the code - they have to, because doing the tests first forces the interface design decisions; can't split that among two people and call it TDD!
Steven A. Lowe
+1  A: 

This may sound silly, but I usually test the code I write after each "processing task". Meaning, if I open a file, I test the routine. If I connect to a Database and pull out a single record, I test that routine. Or sometimes I write a test that just exercises all the methods of a class just to see if they work.

I don't think I follow a hard-and-fast rule, but mostly when I write code to perform a task, I test to "verify" it does what it's supposed to do.
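
For the file-opening case, say, the check can be as small as this (hypothetical Python; read_records is made up):

    import tempfile

    def read_records(path):
        # One "processing task": open a file, return its non-blank lines.
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]

    # Verify the routine right after writing it.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
        tmp.write("alpha\n\nbeta\n")
    assert read_records(tmp.name) == ["alpha", "beta"]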

Chris
+1  A: 

Exactly as much as I have to. Sometimes that means a few hundred lines, especially if I'm adding a large system to an existing framework, when the application wouldn't even run without some part of it.

I suppose I follow the principle of testing whenever I can. Obviously that doesn't mean halfway through writing a loop, but when I'm done with the loop I'll try it out before moving on. The less you've changed since the last test, the easier it is to figure out which change caused your error condition. :)

Sean Edwards
+1  A: 

I usually do what you describe, but I don't get a full page written before I test. I've found that if I write some code then write a test, I usually have to refactor the code to make it more testable. This seems a little bit wasteful, so I'm down to just a few lines of code before I write a unit test. I find that I'm moving closer and closer to strictly adhering to TDD.
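
The refactoring I keep running into looks roughly like this (a hypothetical Python sketch): a baked-in dependency gets pulled out into a parameter so the few lines become testable at all:

    # Hard to test: the time source is baked in.
    # def greeting():
    #     from datetime import datetime
    #     return "Good morning" if datetime.now().hour < 12 else "Good afternoon"

    # Testable: the dependency is now a parameter the test can supply.
    def greeting(hour):
        return "Good morning" if hour < 12 else "Good afternoon"

    assert greeting(9) == "Good morning"
    assert greeting(15) == "Good afternoon"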

Bill the Lizard
+1  A: 

As you did not mention in which language environment you code...

As I work in Smalltalk, syntax is checked in the editor while I type, and whenever I accept a method, so that's not an issue. (For those who don't know Smalltalk: it is not file-based, but object-oriented; that means you add method objects one at a time to a class object, and the system compiles each as it is "accepted" in the editor.)

For small methods which are algorithmic or which do not need a big framework/setup, I add a little comment which tests that method and which can be executed by a click. There is also a test-runner to extract all these and run them as a unit test. For bigger stuff, a TestCase class is updated for every few methods and the test-runner button clicked from time to time, stopping me on a red light.

So I would say, a test is done for every 10 lines or so. I admit, doing so requires a highly reactive and incremental IDE; otherwise, it cannot be done so easily and I would revert to, say, roughly a letter-size page of code before testing. I do not consider compilability a "test", so syntactic correctness does not count.

EDIT: For your amusement, here is a concrete example from the Collection class:
For those who don't know Smalltalk:
 quoted strings are comments;
 +/- is an operator that creates a measurement value;
 / creates fractions;
 {...} is array creation;
 the test cases at the end are directly executable (a so-called doIt) from within the editor.

sum
    "sum up all elements.
     This is implemented using a variant of the normal inject:into: pattern. 
     The reason for this is that it is not known whether we are dealing with numbers
     (i.e. whether 0 is a good initial value for the sum).
     Consider a collection of measurement or physical objects: 0 would be a unitless
     value and would not be appropriate to add to the unit-ed objects."

    | sum sample |

    sample := self anElement.
    sum := self inject: sample into: [:accum :each | accum + each].
    ^ sum - sample.

    "
     TestCase should: [ { } sum ] raise:Error.
     TestCase should: [ ''  sum ] raise:Error.

     TestCase assert: ( { 1 } sum = 1 ).
     TestCase assert: ( { 1. 2. 3. 4. } sum = 10 ).
     TestCase assert: ( (1 to:10) sum = 55 ).
     TestCase assert: ( 'abc' asByteArray sum = 294 ).

     TestCase assert: ( { 10 +/- 2.
                          20 +/- 4.
                         100 +/- 10 } sum = (130 +/- 16) ).

     TestCase assert: ( { (1 / 9).
                          (1 / 7).
                        } sum = (16 / 63) ).
    "
blabla999
+1  A: 

I don't use TDD, but build what are effectively test stubs first, that become the actual application.

For instance, in a WinForms app, I build the buttons first, and test them. Then when I build the class, I test that the class's methods are being called by the UI.

Then, if for instance I'm going to put the actual work into a background worker, I build that with nothing inside it, and test that the Start/Progress/Complete handlers all fire, and are handled by the class that creates the BGW.

Then I start putting the functionality into the methods, and thus already have a tested test harness. It's very rare that I have to build a separate harness for this, since every increment is small, and tested before the next level of complexity is added.
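
Outside WinForms, the same pattern might be sketched like this (hypothetical Python; the event names just echo Start/Progress/Complete):

    class Worker:
        # Built as an empty shell first; the handlers only record that they fired.
        def __init__(self):
            self.events = []

        def start(self):
            self.events.append("start")
            self.events.append("progress")   # the real work goes here later
            self.events.append("complete")

    # Test the wiring before any functionality exists.
    w = Worker()
    w.start()
    assert w.events == ["start", "progress", "complete"]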

The benefit is that I don't have to hold too much complexity in mind at a time, and very little is added without the foundations it relies on already being well tested.

I've never found unit testing to be any kind of issue - what I really want is automated testing at a higher level than that.

ChrisA
A: 

The older I get, the less code I write before running/testing.

In part, that's a consequence of technical advances: I started out writing code on COBOL coding sheets to be transformed into punched cards twice a week when the punch girl came in. I generally wouldn't even attempt a compile of a new program until it was largely complete and desk-checked, which was usually a couple of thousand lines and a few weeks.

These days, when I'm on my game, I don't write any code before testing, I write a test before coding. I'm weak and not always sure how to write the test, though, so sometimes I tell myself I'm being pragmatic by just doing it. It's surprising how often that turns out to have been a bad idea, though: code that I wrote as a consequence of TDD tends to be easier to test, easier to modify and mostly just better than code that got tests later.

But that's just me, YMMV.

Mike Woodhouse
A: 

Usually, as soon as I complete a function, I compile it, switch to the REPL, and test it with some ad hoc made-up data (including edge cases). Sometimes (more often than I'd like) a few debug cycles (edit-compile-test) are necessary to get the desired behaviour. Of course, this kind of development style is only viable if you can individually compile functions into a running runtime that provides a REPL; otherwise you would spend too much time waiting for a complete compile. I use SBCL with SLIME.
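
For comparison, the same rhythm at a Python prompt (the function is made up):

    >>> def median(xs):
    ...     s = sorted(xs)
    ...     mid = len(s) // 2
    ...     return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
    ...
    >>> median([3, 1, 2])        # ad hoc data
    2
    >>> median([4, 1, 2, 3])     # even-length edge case
    2.5
    >>> median([])               # an edge case that sends me back to the editor
    Traceback (most recent call last):
      ...
    IndexError: list index out of range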

Svante
+1  A: 

Personally, I find I tend to write the obvious interfaces and drag in the utility resources (be they C# libraries, CSS, whatever) before I actually write tests.

I think there's a balance between zealotry and experience to be struck.

annakata
A: 

I try to make the first time my code runs be via a unit test.

Sometimes I write the test first, sometimes I write the method/class first.

I like to feel good about myself,
Therefore I like to give myself positive feedback often,
Therefore I try to “prove” a new method works soon after I write it.

Ian Ringrose