views: 128 | answers: 9

I was browsing StackOverflow when I came across this question, in which the author describes their style of debugging:

I am wondering how to do debugging. At present, the steps I follow are:

  • I complete a large script,
  • Comment everything but the portion I want to check
  • Execute the script

and in one of the answers another user says that the asker is debugging the wrong way:

Your sequence seems entirely backwards to me. Here's how I do it:

  1. I write a test for the functionality I want.
  2. I start writing the script, executing bits and verifying test results.
  3. I review what I'd done to document and publish.

I'm fairly new to programming, and I follow the first way of doing things. It seems that the second way is called Test-driven development, and it strikes me as a very inefficient way of doing things.

Can you explain TDD and its merits in a simpler way?

+1  A: 

I can't explain it in the proper terms, but I can tell you that, in general, TDD does allow the developer to focus on the ACTUAL requirements of the task. It amounts to another method for forcing the requirements to be very specific and detailed. Once the test is fully fleshed out, any code that passes it is done. Period. End of development for that task.

Is that necessarily more or less efficient than doing it the other way? I don't think so, in a perfect world. But most of us do not live in a perfect world. So in my opinion, if the only person who can represent the requirement in some fashion with 100% accuracy is the test-creator, then it works fine. Programmers do not spend any time guessing what to do, they don't write for conditions that cannot occur, etc.

I am a proponent of TDD, but I have to admit that I have never been in an organization that did it.

MJB
But what happens when the requirements change? We all know that requirements are not set in stone, so do we have to write the tests all over again? (I'm not sure whether my comment is meaningful, since I know very little about TDD.)
instantsetsuna
You're right -- they change. But TDD also mandates that you have automated testing to a certain extent, and that means that when the requirements change, you change only the tests related to that change. All the others remain (and, as many have pointed out, having a suite of unit tests is very beneficial).
MJB
"I don't think so, in a perfect world." What is this "perfect world"? In a perfect world Programming is hard, Testing is hard. Writing code first -- without any tests -- is a waste of time in any world. Can you clarify what this mysterious "perfect world" is? I don't get what "perfect world" you're talking about.
S.Lott
@S.Lott: What I was trying to say was that TDD is not necessarily more efficient in a world where requirements are 100% complete, accurate, and clear, and programmers understand them correctly the first time around.
MJB
@MJB: "requirements are 100% complete"? Not a "perfect" world. A "Fantasy" world. I'm not even sure what "100%" complete requirements would even mean. If the requirements were so complete you could simply write code, then they'd be a detailed design -- to the coding level -- in the target language. Isomorphic to code; hence not a "requirement". I think you mean "Fantasy" world or something.
S.Lott
@S.Lott: OK, fantasy world if you prefer. I don't understand the amount of anger you seem to have about that one phrase, especially because I am in favor of TDD, and that seems to be your position as well. Did I not word it the way you demanded? Shall I apologize for not reading your mind? What is your goal with the comments, because I don't get it yet.
MJB
@MJB: Anger? It's a question. I was asking a question, trying to understand precisely what you meant. You can perceive a question as anger if you'd like to, but I'm trying to understand your use of the phrase "perfect world". Many people use this phrase. I am trying to understand it by asking and positing a synonym. I'm sorry you feel this is "anger". I can't figure that out either. Please explain.
S.Lott
@S.Lott: Well, if I misinterpreted your comments then bad on me. But now we are off topic and this seems to be getting us nowhere.
MJB
@MJB: The point of comments is to clarify. I was trying to understand your use of "perfect world". For some reason you brought up "anger", which I still don't understand. While "anger" is way off topic, "perfect" is not.
S.Lott
A: 

TDD also helps you get excellent test coverage and more or less forces you to build well-structured, loosely coupled code, which means good testability.

If you build a system 100% with TDD, you will have tests for everything, meaning your system will be much safer to change and the likelihood of introducing bugs in the future will be much smaller.

It takes longer to develop, but in the long run it provides a better result.

Einarsson
+1  A: 

There are myriad articles on TDD on the web. I've been using TDD for only a year now, so I am no expert, but here's my short summary of why I like it:

  • Focus on your goals first instead of the implementation
  • It makes you implement the smallest piece of code required to fulfill the task and nothing more. You can refactor this into something more beautiful later, but in general you tend to follow the YAGNI principle more closely.
  • The most important: you can refactor your code fearlessly, knowing that you have not broken anything during the last refactoring (this assumes good test coverage, but that's part of using TDD the way it's meant to be used). This makes you much more agile.
  • Frequent deployment to a production environment also becomes less dangerous if you have good test coverage.

While TDD takes some time to get used to, you will see that it is actually no less efficient than traditional coding. Quite the contrary: you tend to lose much less time debugging your code in the long run.

That being said, I am no TDD hardliner. There are some situations where I prefer writing unit tests after writing the implementation. It's easier than TDD when I am working with a new technology and do not know what the desired result looks like until I have implemented it (e.g. a ViewResult object in ASP.NET MVC).

Adrian Grigore
+1 "the smallest piece of code required to fulfill the task and nothing more" That's the efficiency.
S.Lott
A: 

You should write unit tests:

  • Unit tests pay for themselves, as they reduce the number of bugs.
  • The code should be tested as-is, and the tests should be very easy to run, now and in the future, so the comment-everything approach to testing is out.
  • You should test little bits of functionality, to prevent things from becoming too complicated to test.

Whether you do this before or after writing the code is not that important. Some people say that if you write the test after the implementation, you change your test to fit the implementation, which reduces the effectiveness of the unit test.
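
As a minimal sketch of what "easy to run now and in the future" can look like (the module and function names below are invented for illustration, not from the question): a small test file like this tests one little bit of functionality and can be re-run at any time with a single command, with nothing commented out.

    # test_tax.py -- one small bit of functionality per test, runnable at any time
    # with a single command:  python -m unittest discover
    import unittest

    from tax import vat  # hypothetical function under test


    class TestVat(unittest.TestCase):
        def test_standard_rate(self):
            # A small, focused check: 20% VAT on 100.00 should be 20.00.
            self.assertAlmostEqual(vat(100.0, rate=0.20), 20.0)


    if __name__ == "__main__":
        unittest.main()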

Sjoerd
+1  A: 

TDD means letting your test drive your development process. It might seem inefficient at first, but I'll get to that in a second.

Basically the process is the same as what the person above described:

  1. Write a test - test the bit of functionality you want (but doesn't exist yet)
  2. Write the littlest bit of code - fill in enough to make the test you just wrote pass
  3. Repeat - go back to step 1 until you have all the functionality you want

Often this means writing a test that refers to an object, then writing the definition for that object; then writing a test that uses a method on that object, and then writing the method itself. You work in very short intervals, going back and forth between testing and coding (short = seconds up to about a minute on either side).
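
A minimal sketch of one such cycle in Python (the slugify function and its behavior are invented purely to illustrate the rhythm; they are not from the question):

    import unittest

    # Step 1: write the test first. It fails at this point, because slugify() does not exist yet.
    class TestSlugify(unittest.TestCase):
        def test_replaces_spaces_with_hyphens(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

    # Step 2: write the littlest bit of code that makes the test pass -- and nothing more.
    def slugify(text):
        return text.lower().replace(" ", "-")

    # Step 3: repeat -- add the next test (punctuation, numbers, ...) and grow slugify() to match.

    if __name__ == "__main__":
        unittest.main()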

The philosophy of TDD is that you want 100% (ideally) test coverage. Logically, if you want to write tests, you have the option to write them before or after you actually code the functionality. The advantages of doing it before you code are:

  • You actually test your tests, since they should fail (when you write them) then pass (when you write your code)
  • You guarantee that you have a LOT of test coverage at all times
  • When you go to refactor, you know you have lots of test coverage to ensure you don't do something stupid (this is the biggest advantage I see, since a lot of the time you're working on old code whose subtleties you've forgotten)

Some people say that it helps them clearly define what they're about to do, so they can actually fill in the code faster. Even if it takes longer, it does seem advantageous to build in the test coverage when you go back to that code and refactor/change/optimize it later. And, whether you decide to test before or after coding, I think we all can agree that testing to some extent (please no unit testing religious debates here) is important.

Jon Smock
+2  A: 

In three words, save your tests.

A lot of good programmers test as they go, but they don't save their tests. They just write little bits of code to test their code as they write it. Go from "test" as a verb (e.g. I tested the code) to "test" as a noun (e.g. Here is a test for the code.)
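
For example (a sketch with invented names, not code from the answer), the throwaway check becomes a saved test:

    # "Test" as a verb: a throwaway check, typed into a console once and then lost.
    #
    #     >>> parse_price("$1,234.50")
    #     1234.5    # looks right, move on
    #
    # "Test" as a noun: the same check, saved in the suite so it runs on every build.
    import unittest

    from pricing import parse_price  # hypothetical module and function under test


    class TestParsePrice(unittest.TestCase):
        def test_strips_currency_symbol_and_commas(self):
            self.assertEqual(parse_price("$1,234.50"), 1234.5)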

There's more to TDD than this, but it's a good start.

John D. Cook
+1  A: 

You have two things intermixed here. Test Driven Development and Unit Testing. You don't have to do them both - writing integration tests first is called ATDD or BDD and writing unit tests after the code is simply writing unit tests - but they work very well together.

Unit testing is all about testing a small section of code (normally a single method, but a unit is a flexible beast) in isolation. Applied to the first approach above, this would change it to:

  • I complete a [section of a] large script,
  • I write a test that only hits the area I think is problematic
  • Execute the tests

This means that you are not hacking away at your script, and thus don't risk forgetting to uncomment something. You also have a test that you can run again, so you can be reasonably sure that a new change does not re-enable this bug. And that is a big deal; a test that is only run once is useless. Tests should be repeatable, and that is something hacking out sections of your code does not give you.
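
As an illustrative sketch (the names are invented): the suspect area gets its own test, exercised in isolation and kept in the suite so it can be re-run after every change.

    import unittest

    from report import summarise_totals  # hypothetical: the one area I think is problematic


    class TestSummariseTotals(unittest.TestCase):
        def test_handles_empty_input(self):
            # Exercises only the suspect function, with nothing else commented out or hacked away.
            self.assertEqual(summarise_totals([]), 0)

        def test_sums_mixed_values(self):
            self.assertEqual(summarise_totals([1.5, 2.5, 6]), 10)

    # Repeatable: re-run with `python -m unittest` whenever a change might re-enable the bug.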

Test-driven development is all about writing the tests before you write the code. This does add a large overhead, but when do you think it is cheapest to fix a bug: as you are writing that section of code, or months later? This is the main reason I use TDD: I get feedback on my code as I write it. Much better than sending it to the testing team, waiting a week for them to run a set of tests against it, and then being pulled off a different user story to try to work out what the hell I was thinking a week ago.

The tests also mean that you can refactor and be reasonably sure you haven't just stuffed everything up; they help you think about the design of your code, act as documentation, and provide various other bits of loveliness. But for me the killer feature of TDD is immediate feedback.

mlk
+1  A: 

I follow the first way of doing things.

So do many others.

... the second way ... seems to be a very inefficient way of doing things.

All new and different things seem inefficient. Get used to the feeling that everyone else is wrong and you're right.

The rest of your career will be filled with this feeling. Every new thing seems inefficient. And will always seem inefficient.

Can you explain TDD and its merits in a simpler way?

Yes.

TDD is more efficient. Here's why.

You must do the testing. You can write tests first or last. Either way, you have to write them.

You can "Comment everything but the portion I want to check" and try to locate bugs in that in a slow, ineffective way. It's often ineffective because -- without tests to drive your development -- you may write code that's useless or a waste of time.

Or you can write a test and write the least code that passes the test in an efficient way.

S.Lott
+2  A: 

This is what seems efficient to you:

  • I complete a large script,
  • Comment everything but the portion I want to check
  • Execute the script

It's not. That "portion you want to check" might not work properly if you haven't set up things beforehand. Figuring out what to comment, and what to leave uncommented, before running your test (just once!) is hard to get right and takes a lot of your time - and can leave you with false positive results (you think your code is working, but it ain't). Correctly de-commenting the code for production introduces another risk of error. And you have to do this every time you think you want to check your code - and you may well fail to check code that has an error in it, thinking it's too simple to go wrong.

With TDD - with test-driven design and unit tests - we avoid all of these pitfalls. We're writing tiny tests, so they don't take much time to write, and they can test only tiny functionality. That drives most of us to write tiny functions and small classes; that's (part of) good design; it promotes reuse and reduces coupling. And, as a side effect - and I can't emphasize enough that it is a side effect, not the primary benefit - we get comprehensive test coverage. Our tests are small and fast and can be run hundreds of times a day, so they let us know the moment we've screwed something up. And because they're tiny and focused, they don't just tell us that we've screwed up; they tell us what we've screwed up.

Good design: tiny methods, tiny classes, minimal coupling; only the code we needed to meet our actual needs; comprehensive automated unit test coverage. That's efficient.

Carl Manaster