views: 1070
answers: 19

What kind of practices do you use to make your code more unit testing friendly?

+10  A: 

Write the tests first - that way, the tests drive your design.
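For example, a minimal red-green sketch of "tests first" (JUnit shown here; `PasswordPolicy` and its rule are invented purely for illustration). The test is written first and fails; only then is the minimal code underneath added to make it pass:

```
import org.junit.Test;
import static org.junit.Assert.*;

// Step 1: write the test first. It fails (PasswordPolicy doesn't exist yet).
public class PasswordPolicyTest {
    @Test
    public void rejectsPasswordsShorterThanEightCharacters() {
        PasswordPolicy policy = new PasswordPolicy();
        assertFalse(policy.isValid("short"));
    }

    @Test
    public void acceptsPasswordsOfEightOrMoreCharacters() {
        PasswordPolicy policy = new PasswordPolicy();
        assertTrue(policy.isValid("longenough"));
    }
}

// Step 2: write only enough code to make the failing tests pass.
class PasswordPolicy {
    boolean isValid(String password) {
        return password.length() >= 8;
    }
}
```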

Shane Fulmer
+3  A: 

I use Test-Driven Development whenever possible, so I don't have any code that cannot be unit tested. It wouldn't exist unless the unit test existed first.

John Saunders
What about UI code? And DB access layer?
Vilx-
Absolutely for the DB access layer. I don't believe that the concept of unit tests applies to UI.
John Saunders
+12  A: 

Dependency injection seems to help.
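As a sketch of how this plays out (plain constructor injection, no container; the `OrderRepository`/`DiscountService` names are made up for illustration), the test can hand in a tiny fake instead of the real database-backed dependency:

```
import org.junit.Test;
import static org.junit.Assert.*;

// The collaborator is expressed as an interface...
interface OrderRepository {
    double totalFor(String customerId);
}

// ...and handed to the class under test through its constructor.
class DiscountService {
    private final OrderRepository orders;

    DiscountService(OrderRepository orders) {
        this.orders = orders;
    }

    boolean qualifiesForDiscount(String customerId) {
        return orders.totalFor(customerId) > 1000.0;
    }
}

public class DiscountServiceTest {
    @Test
    public void bigSpendersGetTheDiscount() {
        // A tiny in-test fake stands in for the real database-backed repository.
        OrderRepository fake = customerId -> 1500.0;
        assertTrue(new DiscountService(fake).qualifiesForDiscount("alice"));
    }
}
```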

dss539
Yep, that's my favorite recommendation too -- this way, your unit tests can easily substitute mock objects for whatever dependencies you're injecting.
Alex Martelli
I think dependency injection is critical (manual is fine, you don't need to use a DI framework). Without it most of your tests are forced to be integration tests instead of unit tests.
Jamie Ide
@Jamie - yes I have found that 'manual is fine' for my uses, as well
dss539
+3  A: 

The easiest way is don't check in your code unless you check in tests with it.

I'm not a huge fan of writing the tests first. But one thing I believe in very strongly is that code must be checked in with tests - not even an hour or so apart, but together. I think the order in which they are written is less important, as long as they come in together.

JaredPar
@JaredPar: the reason to write the tests before the code is so that, when you check in tests, you'll know that they test the code you're checking in. Otherwise, you're just dependent on brilliant developers to make sure the tests actually test something that matters.
John Saunders
@John - You're depending on brilliant developers in any case. There is no perfect process that will turn a complete moron into a genius. Stupidity ALWAYS finds a way to thrive within a process.
dss539
@John, I disagree. Writing the tests first only allows you to guarantee what methods are called, not what code is executed. Those are two very different issues, and the latter is the much more important one. My group does not rely on brilliant developers to ensure code is properly tested; we rely on profilers to make that call.
JaredPar
@JaredPar: no, the tests are meant to fail first. The code is only written to make the tests pass. That way the tests prove that they test something that matters. @dss539: I meant that if you don't write the tests first, you depend on your brilliant developers doing code reviews or looking at their own code to ensure that the tests they check in actually test the right thing. Without the failing test first, the code doesn't prove it, so you have to depend on humans.
John Saunders
@John - You have a strong point about test-first giving good coverage, but a dumb/evil developer can still write bad tests. Failing first then passing a test isn't necessarily a good indicator that the test works in more than one case. However, it may help intelligent devs think about the best way to test. So I'm not saying you have a completely bad idea. Rather, I just think it isn't a panacea.
dss539
@John, all that it proves is that specific methods are called and that there is a set of observable side effects. It says nothing about what code was actually tested, though. There can be a sea of code between a method call and a side effect. Only a profiler can guarantee what code was actually tested. In that respect TDD is no better than checking in tests at the same time.
JaredPar
@JaredPar: code coverage is a separate and interesting thing. The purpose of a test isn't to test code: it's to test that the code called by the test causes the right things to happen. I don't necessarily care which code gets called to accomplish the right thing; since I write minimal code to pass the tests, at the start, there's little "extra" that would not be required for a test to pass.
John Saunders
@dss539: Also, it's obviously necessary to review test lists and tests, just like one reviews code: to a greater or lesser extent depending on schedules, staffing, level of expertise, etc. The idea would be to catch the evil developer sooner rather than later.
John Saunders
@John I fail to see how TDD ensures this any better than writing the tests after code but checking in at the same time. You're still dependent upon the developer to do the right thing.
JaredPar
@dss539: with a developer I didn't trust to be professional, I'd review the list of proposed tests, and the actual tests before accepting them. This would catch tests that don't test anything, or tests that test themselves, etc. Note you don't just write tests first, you write tests that fail. Only by creating code to make them pass do they ever pass. The process, if followed, produces the minimal code that passes the tests that were written (and which failed first, but pass now that the minimal code has been written). Maybe this is the reason for pair programming?
John Saunders
@JaredPar: first write a list of proposed tests; that list will change. Write the first test and watch it fail. Then, write the minimal code to make the test pass. You now know that the test tests the code you wrote. Refactor test and code, rerunning test, then on to the next test. No code will have been written that is not proven to be correct by the measure of "makes a failing test pass". There will be at least this measure of the quality of the tests. Without that, you need a different measure of whether the tests matter. Maybe just trust in your team. Maybe prayer.
John Saunders
@John, You're contradicting your original point. From the point of the person who didn't write the code, you are still depending on a "brilliant developer" to do the right thing. How do I as a code reviewer know you did it the right way? I think the order in which you write code / tests is largely irrelevant if they are both checked in together. Code coverage + sanity checking during code review is the only way to determine how good your tests truly are. Even then you have to put faith in the code reviewers.
JaredPar
@JaredPar: I suppose I'm depending on the developer understanding and following the process correctly. That doesn't take brilliance: with a Junior developer, it may require pair programming or supervision, but not brilliance. I depend on some process to ensure the correct procedure continues to be followed; still, no brilliance. It is the procedure that ensures that the tests being written only pass when code is written to make them pass. Thus, the tests checked in are meaningful, not what a brilliant dev decides is meaningful. He'll be right. Junior will not.
John Saunders
@John, It also doesn't require a brilliant dev to look at the public API space of their class and author tests accordingly. A Junior dev can do that just as well. I don't see how this process provides any stronger guarantees than unit tests + code coverage. I'd prefer to rely on hard numbers.
JaredPar
@JaredPar: I do not want tests authored to the public API. I want the implementation of the API to be created by making failing unit tests pass. That definitively ties the tests to the code being tested, through a practical connection (if the code didn't work, the test wouldn't pass - if the test hadn't been written, the code wouldn't have been written). By "brilliance", I meant to describe a substitute for the practical connection. Some organizations employ devs good enough to get the same or better results. Others wish they did, and had better use TDD instead.
John Saunders
+6  A: 
  1. Use TDD
  2. When writing your code, utilise dependency injection wherever possible
  3. Program to interfaces, not concrete classes, so you can substitute mock implementations (as sketched below).
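A rough sketch of how the interface pays off at test time (Mockito is used here purely as one example of a mocking library; `Mailer` and `Signup` are invented names):

```
import org.junit.Test;
import static org.mockito.Mockito.*;

// Depend on an interface, not on a concrete SMTP client.
interface Mailer {
    void send(String to, String body);
}

class Signup {
    private final Mailer mailer;

    Signup(Mailer mailer) {
        this.mailer = mailer;
    }

    void register(String email) {
        // ... persist the user, then:
        mailer.send(email, "Welcome!");
    }
}

public class SignupTest {
    @Test
    public void sendsAWelcomeMail() {
        Mailer mailer = mock(Mailer.class);          // substitute implementation
        new Signup(mailer).register("a@example.com");
        verify(mailer).send("a@example.com", "Welcome!");
    }
}
```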
Visage
@Visage: out of curiosity, what if you could use TDD, but didn't use dependency injection everywhere, and used concrete classes in many places, and didn't use mocks all the time; did all these bad things, but finished with 90% or more code coverage. Would that make you a bad person, or would it mean that not all of that other stuff was really necessary for quality?
John Saunders
Interesting question. IME unit testing without those things becomes very difficult, and I suppose I'd wonder if the 90% coverage was the 90% one was able to test, rather than the 90% that needed testing ;)
Visage
It's pretty much what I'm doing, and I'd say I still get most of the benefit in regard to quality - the problem is that writing the tests is harder, and maintaining them is more work.
Michael Borgwardt
@Visage: I write failing tests first. There is no code that gets written unless it was written to make failing tests pass. Therefore, the 90% coverage is, by definition, the 90% that needed testing - it would not have existed except for the tests.
John Saunders
@Michael: there's nothing wrong with implementing DI, mocks, etc. as a practical matter to get the tests done and keep them maintainable. I follow the red-green-refactor rule, so I'm constantly refactoring both tests and code; if DI will improve some tests, then DI happens.
John Saunders
@John - how big is a "unit" in your unit tests? 1 method? 1 method that calls 12 other methods? Do you have a single unit test that covers more than 100 lines of code? DI allows very small units to be tested, and DI is fairly easy to bolt on to legacy code. If you're able to easily test small units without DI, then I admire your design skills and could learn a few things from you. If, however, you're testing huge chunks of code with every test, then shame on you. :P
dss539
@dss539: I'm an old dog, and only learned this trick five years ago, so I don't do it that way (based on tests per method, etc). As in Beck, I choose a piece of functionality to be implemented, and start writing a failing test. If it's too much in one gulp, I'll start writing tests for smaller chunks. After refactoring of successful tests, they may wind up being combined into larger tests, but not until they've passed individually.
John Saunders
+6  A: 

Make sure all of your classes follow the Single Responsibility Principle. Single responsibility means that each class should have one and only one responsibility. That makes unit testing much easier.
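For illustration (hypothetical names), splitting a class that both calculates and persists yields two small units, and the pure calculation can then be tested with no I/O setup at all:

```
import org.junit.Test;
import static org.junit.Assert.*;

// After splitting: the calculation has no I/O and needs no setup to test.
class InvoiceCalculator {
    double totalWithTax(double net, double taxRate) {
        return net * (1 + taxRate);
    }
}

// Persistence is someone else's single responsibility.
class InvoiceWriter {
    void write(String path, double total) { /* file I/O lives here */ }
}

public class InvoiceCalculatorTest {
    @Test
    public void addsTax() {
        assertEquals(110.0, new InvoiceCalculator().totalWithTax(100.0, 0.10), 0.001);
    }
}
```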

Robert Cartaino
+3  A: 

Small, highly cohesive methods. I learned this the hard way. Imagine you have a public method that handles authentication. Maybe you did TDD, but if the method is big, it will be hard to debug. Instead, if that #authenticate method reads more like pseudo-code, calling other small methods (maybe protected), then when a bug shows up it's easy to write new tests for those small methods and find the faulty one.
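A sketch of the idea (names invented): the public #authenticate method reads like pseudo-code, and each small step can be exercised, and blamed, on its own:

```
class Authenticator {
    // Reads like pseudo-code; the real work happens in small, individually testable steps.
    public boolean authenticate(String username, String password) {
        if (!isKnownUser(username)) return false;
        if (isLockedOut(username)) return false;
        return passwordMatches(username, password);
    }

    protected boolean isKnownUser(String username)  { /* lookup */ return true; }
    protected boolean isLockedOut(String username)  { /* policy */ return false; }
    protected boolean passwordMatches(String username, String password) {
        /* hash + compare */ return true;
    }
}
```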

Maximiliano Guzman
+22  A: 
  • TDD -- write the tests first, forces you to think about testability and helps write the code that is actually needed, not what you think you may need

  • Refactoring to interfaces -- makes mocking easier

  • Public methods virtual if not using interfaces -- makes mocking easier

  • Dependency injection -- makes mocking easier

  • Smaller, more targeted methods -- tests are more focused, easier to write

  • Avoidance of static classes (see the clock sketch after this list)

  • Avoid singletons, except where necessary

  • Avoid sealed classes
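As one concrete illustration of the static/singleton points above (a sketch with invented names, assuming `java.time.Clock` is available): inject the clock instead of calling the system time statically, and the test can pin "now" wherever it likes:

```
import org.junit.Test;
import static org.junit.Assert.*;
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

// Instead of calling System.currentTimeMillis() (a static you cannot fake),
// the dependency on "now" is injected.
class TrialPeriod {
    private final Clock clock;
    private final Instant expiresAt;

    TrialPeriod(Clock clock, Instant expiresAt) {
        this.clock = clock;
        this.expiresAt = expiresAt;
    }

    boolean isExpired() {
        return clock.instant().isAfter(expiresAt);
    }
}

public class TrialPeriodTest {
    @Test
    public void expiresAfterTheDeadline() {
        Clock fixed = Clock.fixed(Instant.parse("2020-01-02T00:00:00Z"), ZoneOffset.UTC);
        TrialPeriod trial = new TrialPeriod(fixed, Instant.parse("2020-01-01T00:00:00Z"));
        assertTrue(trial.isExpired());
    }
}
```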

tvanfosson
yeah that about covers it
Epaga
+3  A: 

And something that you learn as one of the first things in OOP, but that so many seem to forget: Code Against Interfaces, Not Implementations.

alexn
I guess it depends on when you learned. That's nowhere near the top of my list of OO practices.
John Saunders
It depends on what he means by "Interface". If he means "use the C# `interface` keyword a lot" then no, that's not so useful. If he means "treat other objects as black boxes that accept a predefined set of messages" then yes that **is** useful
dss539
@dss539: I'd agree, but the "interface vs. implementation" dichotomy suggests ISomethingOrOther.
John Saunders
+2  A: 
1. Using a framework/pattern like MVC to separate your UI from your business logic will help a lot (see the presenter sketch below).
2. Use dependency injection so you can create mock test objects.
3. Use interfaces.
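A rough sketch of point 1 (hypothetical names): the presenter owns the logic and talks to the view only through an interface, so the test needs no real UI:

```
import org.junit.Test;
import static org.junit.Assert.*;

// The view is just an interface; a real form/page implements it in production.
interface LoginView {
    void showError(String message);
}

class LoginPresenter {
    private final LoginView view;

    LoginPresenter(LoginView view) {
        this.view = view;
    }

    void submit(String username, String password) {
        if (password == null || password.isEmpty()) {
            view.showError("Password is required");
        }
        // ... otherwise delegate to the authentication service
    }
}

public class LoginPresenterTest {
    @Test
    public void emptyPasswordIsReportedToTheView() {
        final StringBuilder shown = new StringBuilder();
        LoginPresenter presenter = new LoginPresenter(shown::append); // fake view
        presenter.submit("bob", "");
        assertEquals("Password is required", shown.toString());
    }
}
```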
Jeffrey Hines
I agree on separating UI from logic.
CiscoIPPhone
+3  A: 

Spend some time refactoring untestable code to make it testable. Write the tests and get 95% coverage. Doing that taught me all I need to know about writing testable code. I'm not opposed to TDD, but learning the specifics of what makes code testable or untestable helps you to think about testability at design time.
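One refactoring that comes up constantly in that kind of work is the "extract and override" seam; a hedged sketch with invented names:

```
import org.junit.Test;
import static org.junit.Assert.*;

// Legacy class that used to construct its own mail gateway deep inside sendReminder().
class ReminderJob {
    public String sendReminder(String user) {
        String gateway = lookupGateway();       // extracted seam
        return "sent to " + user + " via " + gateway;
    }

    // Extracted into an overridable method instead of a hard-coded lookup.
    protected String lookupGateway() {
        return "smtp://production-mail";        // the real, slow, external thing
    }
}

public class ReminderJobTest {
    @Test
    public void remindersGoThroughTheGateway() {
        ReminderJob job = new ReminderJob() {
            @Override protected String lookupGateway() { return "fake-gateway"; }
        };
        assertEquals("sent to ann via fake-gateway", job.sendReminder("ann"));
    }
}
```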

Robert
A: 

To prepare your code to be testable:

  • Document your assumptions and exclusions.
  • Avoid large complex classes that do more than one thing - keep the single responsibility principle in mind.
  • When possible, use interfaces to decouple interactions and allow mock objects to be injected.
  • When possible, make public methods virtual to allow mock objects to emulate them.
  • When possible, use composition rather than inheritance in your designs - this also encourages (and supports) encapsulation of behaviors into interfaces.
  • When possible, use dependency injection libraries (or DI practices) to provide instances with their external dependencies.

To get the most out of your unit tests, consider the following:

  • Educate yourself and your development team about the capabilities of the unit testing framework, mocking libraries, and testing tools you intend to use. Understanding what they can and cannot do will be essential when you actually begin writing your tests.
  • Plan out your tests before you begin writing them. Identify the edge cases, constraints, preconditions, postconditions, and exclusions that you want to include in your tests (see the edge-case sketch after this list).
  • Fix broken tests as near to when you discover them as possible. Tests help you uncover defects and potential problems in your code. If your tests are broken, you open the door to having to fix more things later.
  • If you follow a code review process in your team, code review your unit tests as well. Unit tests are as much a part of your system as any other code - reviews help to identify weaknesses in the tests just as they would for system code.
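For the "plan out your tests" point, this might look like the sketch below (a hypothetical `Slugifier`): list the edge cases up front and give each one its own small, named test:

```
import org.junit.Test;
import static org.junit.Assert.*;

// Planned cases: normal input, empty string, surrounding whitespace, null (exclusion).
public class SlugifierTest {
    private final Slugifier slugifier = new Slugifier();

    @Test public void lowercasesAndHyphenates()      { assertEquals("hello-world", slugifier.slugify("Hello World")); }
    @Test public void emptyInputYieldsEmptySlug()    { assertEquals("", slugifier.slugify("")); }
    @Test public void surroundingWhitespaceIgnored() { assertEquals("hi", slugifier.slugify("  hi  ")); }
    @Test(expected = IllegalArgumentException.class)
    public void nullInputIsExplicitlyRejected()      { slugifier.slugify(null); }
}

class Slugifier {
    String slugify(String text) {
        if (text == null) throw new IllegalArgumentException("text");
        return text.trim().toLowerCase().replace(' ', '-');
    }
}
```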
LBushkin
+2  A: 

Don't write untestable code

cwash
+1 for this. The content is also available in text form: http://misko.hevery.com/code-reviewers-guide/
jens
+4  A: 

When writing tests (as with any other software task), Don't Repeat Yourself (the DRY principle). If you have test data that is useful for more than one test, then put it someplace where both tests can use it. Don't copy the code into both tests. I know this seems obvious, but I see it happen all the time.
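For instance (hypothetical names), shared test data can live in one factory method that every test reuses, tweaking only what matters to each case:

```
import org.junit.Test;
import static org.junit.Assert.*;

public class ShippingCostTest {
    // One place for the shared test data; tests tweak only what matters to them.
    private Order standardOrder() {
        return new Order("widget", 2, "US");
    }

    @Test
    public void domesticOrdersShipFlatRate() {
        assertEquals(5.0, new ShippingCalculator().costFor(standardOrder()), 0.001);
    }

    @Test
    public void internationalOrdersCostMore() {
        Order intl = new Order("widget", 2, "DE");
        assertTrue(new ShippingCalculator().costFor(intl) > 5.0);
    }
}

class Order {
    final String sku; final int quantity; final String country;
    Order(String sku, int quantity, String country) {
        this.sku = sku; this.quantity = quantity; this.country = country;
    }
}

class ShippingCalculator {
    double costFor(Order order) {
        return "US".equals(order.country) ? 5.0 : 15.0;
    }
}
```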

ssteidl
+1 most developers think "Oh it's just test code, who cares if it's shitty?" which is sad because the code they normally write is already shitty enough!
dss539
+5  A: 

I'm sure I'll be down voted for this, but I'm going to voice the opinion anyway :)

While many of the suggestions here have been good, I think it needs to be tempered a bit. The goal is to write more robust software that is changeable and maintainable.

The goal is not to have code that is unit testable. There's a lot of effort put into making code more "testable" despite the fact that testable code is not the goal. It sounds really nice and I'm sure it gives people the warm fuzzies, but the truth is all of those techniques, frameworks, tests, etc, come at a cost.

They cost time in training, maintenance, productivity overhead, etc. Sometimes it's worth it, sometimes it isn't, but you should never put the blinders on and charge ahead with making your code more "testable".

Fred
+1 for common sense, -1 because by the very nature of being testable, code becomes more changeable and maintainable
dss539
+1 for some sanity! Unit tests are great, but TDD is a big sap on getting things done quickly if tests are used to cover *everything*. If you have tests for trivial or boilerplate code like getters/setters, it's more likely that you'll get failing tests due to interface changes rather than actual bugs. Smart unit testing is good. Zealous unit testing leads to brittle code and slow development.
Jacob
If you are writing a ton of boilerplate code, maybe it's time to find a new language.
dss539
+1 for common sense. Always use your brain. But I don't write unit tests so that I get warm fuzzies and can write complicated code to make me feel smart, I do it because it makes my code more changeable and maintainable and I know that it works. If I didn't have to make my code work, I could get things done a lot faster!
Jon Kruger
A: 

No Statics - you can't mock out statics.

Also, Google has a tool that will measure the testability of your code...

Michael Wiles
You can inject dependencies into your statics, though. ;)
dss539
You can, using something like Groovy or JMockit (http://groovy.codehaus.org/Mocking+Static+Methods+using+Groovy), if you have to. Sometimes you don't have control over the design or legacy code.
cwash
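For completeness, newer JVM mocking libraries can also fake statics directly; a sketch using Mockito's static mocking (this assumes Mockito 3.4+ with the inline mock maker, a different tool than the Groovy/JMockit options mentioned above, and `LicenseStore` is an invented legacy class):

```
import org.junit.Test;
import org.mockito.MockedStatic;
import static org.junit.Assert.*;
import static org.mockito.Mockito.mockStatic;

// Invented legacy class with a static call we cannot redesign right now.
class LicenseStore {
    static String activeLicense() {
        throw new IllegalStateException("talks to the registry in production");
    }
}

public class LicenseCheckTest {
    @Test
    public void trialLicenseIsDetected() {
        try (MockedStatic<LicenseStore> store = mockStatic(LicenseStore.class)) {
            store.when(LicenseStore::activeLicense).thenReturn("TRIAL");
            assertEquals("TRIAL", LicenseStore.activeLicense());
        }
    }
}
```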
+1  A: 

I'm continually trying to find a process where unit testing is less of a chore and something that I actually WANT to do. In my experience, a pretty big factor is your tools. I do a lot of ActionScript work and sadly, the tools are somewhat limited, such as no IDE integration and a lack of more advanced mocking frameworks (but good things are a-coming, so no complaints here!). I've done test-driven development before with more mature testing frameworks and it was definitely a more pleasurable experience, but it still felt like somewhat of a chore.

Recently, however, I started writing code in a different manner. I used to start with writing the tests, watching them fail, writing code to make the tests succeed, rinse and repeat and all that.

Now, however, I start with writing interfaces, almost no matter what I'm going to do. At first I of course try to identify the problem and think of a solution. Then I start writing the interfaces to get a sort of abstract feel for the code and the communication. At that point, I usually realize that I haven't really figured out a proper solution to the problem at all, as a result of not fully understanding the problem. So I go back, revise the solution and revise my interfaces. When I feel that the interfaces reflect my solution, I actually start with writing the implementation, not the tests. When I have something implemented (draft implementations, usually baby steps), I start testing it. I keep going back and forth between testing and implementing, a few steps forward at a time. Since I have interfaces for everything, it's incredibly easy to inject mocks.

I find that working like this, with classes having very little knowledge of other implementations and only talking to interfaces, is extremely liberating. It frees me from thinking about the implementation of another class, and I can focus on the current unit. All I need to know is the contract that the interface provides.

But yeah, I'm still trying to work out a process that works super-fantastically-awesomely-well every time.

Oh, I also wanted to add that I don't write tests for everything. Vanilla properties that don't do much but get/set variables are useless to test. They are guaranteed by the language contract to work. If they don't, I have way worse problems than my units not being testable.

macke
+1  A: 

Check out this talk: Automated Testing Patterns and Smells. One of the main takeaways for me was to make sure that the unit test code is of high quality. If the code is well documented and well written, everyone will be motivated to keep this up.

Nir Ofry
A: 

You don't necessarily need to "make your code more unit testing friendly".

Instead, a mocking toolkit can be used to make testability concerns go away. One such toolkit is JMockit.

Rogerio