Two questions about unit tests.

  1. I've been writing unit tests for a while, but they're usually written to test classes I've already written. Recently I read an article (mind you, an old one) that says you should write unit tests before you begin writing your code.

    Does anyone actually follow this methodology? It seems like a good idea on paper, but in practice is it?

  2. Should you write unit tests to see how your method handles bad/malicious input? Obviously you would want to write tests against functions which are specifically meant to handle "user" input to see how it handles bad/malicious input, but what about functions which should never have this type of input passed to them? At what point do you draw the line?
+8  A: 

The methodology of writing unit tests before the classes is called Test-Driven Development (TDD) and was popularized by Kent Beck in the early 2000s. The idea is that you write one test that describes a piece of functionality you need. Initially, this test fails. You write just enough code to make it pass. Then you write another test (or extend an existing one) for the next piece of desired functionality, watch it fail, and make it pass, refactoring along the way. Your class has met its goals as soon as all the tests pass. Of course, this scales up beyond classes as well.
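
To make the cycle concrete, here is a minimal sketch using Python's unittest module. The ShoppingCart class and its methods are made up purely for illustration; the point is that each test exists (and fails) before the code that satisfies it:

    import unittest

    # Hypothetical class under test; written only after the tests below failed.
    class ShoppingCart:
        def __init__(self):
            self._items = []

        def add(self, name, price):
            self._items.append((name, price))

        def total(self):
            return sum(price for _, price in self._items)

    class ShoppingCartTest(unittest.TestCase):
        def test_new_cart_total_is_zero(self):
            # Red: written first, fails until ShoppingCart exists and returns 0.
            self.assertEqual(ShoppingCart().total(), 0)

        def test_total_sums_item_prices(self):
            # Green: added next, drives the implementation of add() and total().
            cart = ShoppingCart()
            cart.add("apple", 2)
            cart.add("bread", 3)
            self.assertEqual(cart.total(), 5)

    if __name__ == "__main__":
        unittest.main()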

As to what types of tests to write, it depends on whether you are testing a public API or a private API. Public APIs should have more extensive tests written to ensure that input is well formed, especially if you don't fully trust the users of your API. Private APIs (methods that are only called by your own code) can probably get away without these tests - I would suspect that you can trust your own development team not to pass bad data into them.
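
For the public-API case, a rough sketch of what "tests that input is well formed" might look like; the parse_age function is invented here for illustration, not part of any real library:

    import unittest

    # Hypothetical public API function; name and behaviour are assumptions.
    def parse_age(value):
        """Parse a user-supplied age string, rejecting malformed input."""
        age = int(value)  # raises ValueError for non-numeric input
        if not 0 <= age <= 150:
            raise ValueError("age out of range")
        return age

    class ParseAgeTest(unittest.TestCase):
        def test_valid_input(self):
            self.assertEqual(parse_age("42"), 42)

        def test_non_numeric_input_is_rejected(self):
            # Malicious-looking junk should raise, not be silently accepted.
            with self.assertRaises(ValueError):
                parse_age("forty-two; DROP TABLE users")

        def test_out_of_range_input_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_age("-1")

    if __name__ == "__main__":
        unittest.main()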

Thomas Owens
Actually, to clarify, you don't write ALL your tests ahead of time. You write ONE test. It fails. You write code to make it pass. You then modify your test, or write another. Again, it fails. Write code. It's a process dubbed "Red, Green, Refactor".
Chad
Thanks for that. I'll throw that into my answer, just for the sake of clarity.
Thomas Owens
+2  A: 

Writing the unit tests first is quite a common practice. A big benefit is that you do not just write tests that your code will pass, but rather tests that define what is important, what you are trying to achieve, and what you want to make sure will not happen. It can help you flesh out your design. Also, you can vet the spec with external stakeholders before you code.

As for what tests to write, that is a bit subjective based on the time you have. I would not go crazy vetting code for scenarios it will never face. That said, it is amazing what input makes it to code that "will never see it". So, more tests are better but there are definitely diminishing returns at some point.

The language you are coding in matters. Dynamic languages require more tests, because there is less compile-time checking to catch issues, and bugs may be harder to track down (since they can propagate further from the initial input problem). At least, this is my opinion.

It also makes a difference where the input is coming from. The general public (i.e., the web) should be considered positively malicious, employees should be assumed incompetent, and even fellow coders (and yourself!) should be assumed to be at least careless. But the danger falls as you get closer to your inner circle.

Justin
As Chad said, you do not have to write every conceivable test up front. Rather, you can start with a reasonable set of tests that define the design and the spec and then add tests if you find failures or vulnerabilities that your current tests do not catch. Basically, the test suite is in development as long as the application is in service.
Justin
+2  A: 

Test Driven Development is a pretty widespread concept. The basic idea is that you are trying to only write code that is necessary to satisfy some requirement for the software. So, you write a test for the requirement, and then the code to make the test pass.

I personally don't use TDD, but I know people who do. My personal thoughts are that it is very useful if you're working on something that is more application-driven, like a database or user interface. But, for something that is more algorithm-heavy (like a physics model), I find that it breaks my train of thought and gets in the way.

dublev
+2  A: 

Does anyone actually follow this methodology?

Yes.

It seems like a good idea on paper, but in practice is it?

Yes.

Should you write unit tests to see how your method handles bad/malicious input?

Yes.

What about functions which should never have this type of input passed to them? At what point do you draw the line?

When it moves from software to psychosis.

You can -- if you want -- write tests for impossible situations. However, you're wasting your time and your employer's in an obvious way.

You write tests for the defined use cases. And that's it.

You do not make up random test cases based on your imagination.

What if the defined use cases are incomplete? Bummer. You write tests for the official, contractual, public interface -- and nothing more.

What if the design is inadequate and you realize that the given interface is riddled with incomplete specifications, contradictions and security holes? This has nothing to do with testing. This is just programming. Bad design is bad design.

What if some malicious sociopath takes your code and uses it in a way that exceeds (or otherwise fails to meet) the defined specifications? Bummer. The sociopath wins. They were able to put your code in the impossible situation you didn't test for. Buy them a beer for being so clever.

S.Lott
And after one of those "impossible" situations happens, you add a test to recreate it; it will fail. Then you fix your code. And in the future, you have tested the impossible and have a test so the impossible won't happen again.
Chad
I disagree with your "only what the specification says" approach. That may be appropriate for contract programming, but I think product development requires a more proactive approach, as frequently there is no formal delineation between design and development. In most of my work, if the design is bad I have no one to blame but myself and my colleagues.
ChrisH
@ChrisH: "no formal delineation between design and development". False. Your test cases **are** your design. That's why TDD is so effective.
S.Lott
@S.Lott: "Your test cases are your design." I agree with that. That's not how it came across in your answer above, however. Your answer suggests that you shouldn't write additional tests except as driven by use cases, design, and specifications, all of which I took to be external to the unit tests themselves.
ChrisH
@ChrisH: In **most** organizations, they are external. In your case, they're not. I'm a contractor -- I've worked at almost a hundred different places -- each is unique, so it's impossible to make a blanket statement. Test cases, with or without a *formal* design, are often externally imposed.
S.Lott
A: 

There are already quite a few good answers, but I'd like to add one more thing about question number 2.

If you make your unit test code "data driven", it shouldn't matter whether the code is testing "bad" or "good" input. What matters is that you have a large enough data set to cover both.
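
For example, in Python's unittest a data-driven test is just a table of cases iterated with subTest; the is_valid_username function below is hypothetical, standing in for whatever code you are exercising:

    import unittest

    # Hypothetical function under test; the rules are assumptions.
    def is_valid_username(name):
        return name.isalnum() and 3 <= len(name) <= 20

    class UsernameTest(unittest.TestCase):
        # One table covering both "good" and "bad" input.
        CASES = [
            ("alice", True),
            ("bob99", True),
            ("", False),           # empty
            ("ab", False),         # too short
            ("x" * 21, False),     # too long
            ("rm -rf /", False),   # malicious-looking junk
        ]

        def test_usernames(self):
            for name, expected in self.CASES:
                with self.subTest(name=name):
                    self.assertEqual(is_valid_username(name), expected)

    if __name__ == "__main__":
        unittest.main()

Growing the table is then cheap: when a new failure or attack shows up, you add one row rather than a whole new test method.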

rasjani
+1  A: 

There is a difference in mindset between test-before and test-after. Writing tests before is a form of design, in that you are designing your code's interface and defining expected behaviour. When you then write code that passes the tests, that validates your design. And at the end of development, you happen to have a suite of tests already in place!

With test-after, you need to be careful to avoid the trap of writing tests that your existing code will pass. It's a different focus, and you don't get as much out of it as you do from the test-first approach.

Grant Palin
The trap you mention in the second paragraph is exactly why it's such a bad idea to write your tests afterwards. I'm not anal about full coverage in my unit tests, but if I'm going to write a test, I'll *always* write it before the code. Otherwise it's far too easy to fall into the trap of just reverse-engineering your methods to guarantee that the tests always pass.
kubi
@kubi Indeed. That is something that I realized once I started doing TDD.
Grant Palin