What I mean by this is that architects sometimes look to simplify testing and improve testability at the expense of other important forces.

For example, I'm reviewing a very complicated application, made so by extensive use of design patterns that overly favor testability: IoC, DI, AOP, and so on.
Now, typically I like these things, but this system should have been much simpler. While it's not just a simple web frontend for CRUD on a db, it's still not MUCH more complicated than that (even considering some internal workflows, processes, etc.). On the other hand, just reviewing the code becomes a major pain in the heinie - it's barely readable (even though it's well written) - and coding it must have been a pain too.

The implemented complexity is a clear violation of KISS (the principle, NOT the band)... and the "only" benefit is improved testability, via testing frameworks and mocks and so on.

Now, before you TDD fans jump me, I'm not belittling the importance of testability; I'm questioning whether this one specific force should trump all the others.
Or did I miss something?


I'd like to add another point - it does seem to me that all this talk of "testability" refers specifically to unit testing, which differs from overall system testing, and can result in missed tests when the individual units are integrated. At least, that seems to be the point of the IoC/DI for testing...
Also, I'd point out that this system (and others I've seen preached) has only a single concrete object per interface, and the IoC/DI is intended for one thing only - you guessed it - replacing the concrete objects with mockups during testing.
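
To illustrate the shape of it, here's a minimal, hypothetical Java sketch (the names are invented, not taken from the actual system): an interface with exactly one production implementation, where the injection exists solely so a test can substitute a fake:

    // Hypothetical sketch: one interface, one production implementation,
    // and constructor injection whose only purpose is to let a test
    // swap in a fake.
    import java.util.ArrayList;
    import java.util.List;

    class Order { }

    interface OrderRepository {
        void save(Order order);
    }

    class SqlOrderRepository implements OrderRepository {   // the ONLY concrete class used in production
        public void save(Order order) { /* JDBC code here */ }
    }

    class OrderService {
        private final OrderRepository repository;

        OrderService(OrderRepository repository) {          // injected, but production always wires SqlOrderRepository
            this.repository = repository;
        }

        void placeOrder(Order order) {
            repository.save(order);
        }
    }

    // The second "implementation" only ever appears in a unit test:
    class FakeOrderRepository implements OrderRepository {
        final List<Order> saved = new ArrayList<>();
        public void save(Order order) { saved.add(order); }
    }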


I felt the need to add this quote from Wikipedia on IoC:

Whereas the danger in procedural programming was to end with spaghetti code, the danger when using Inversion of Control is ending with macaroni code

Yup, that expresses my feeling exactly :D

+5  A: 

To answer your general question, I'd say "everything in moderation". An emphasis on testability is of course a great thing. But not when it comes at the cost of excluding, say, readable code or a logical API.

John Feminella
I believe they are doing so-called Test-After Development, not Test-Driven Development. Stephen Walther cleared this up nicely: http://stephenwalther.com/blog/archive/2009/04/11/tdd-tests-are-not-unit-tests.aspx Tests should IMPROVE readability, not vice versa.
Arnis L.
This is my point of view, too... I like @Arnis's point about testing AFTER development (though unit tests shouldn't wait till completion...)
AviD
The problem with testing after development is that it's really easy to get into the habit of proving what the method does, and not necessarily what it's supposed to do. TDD endeavours to define a spec for what a method is supposed to do, and writing the method follows logically from making the tests pass (ideally).
SnOrfus
+1  A: 

The benefit of this approach pays off IF the app grows large enough. Otherwise it's just a waste of time. Sometimes even drag&drop 'coding' and following the SmartUI pattern is satisfying enough.

Arnis L.
+6  A: 

"did I miss something?"

Yes.

The thing works, does it not?

And, more importantly, you can demonstrate that it works.

The relative degree of complexity added for testability isn't very interesting when compared with the fact that it actually works and you can demonstrate that it actually works. Further, you can make changes and demonstrate that you didn't break it.

The alternatives (may or may not work, no possibility of demonstrating whether it works, can't make a change without breaking it) reduce the value of the software to zero.


Edit

"Complexity" is a slippery concept. There are objective measures of complexity. What's more important is the value created by an increase in complexity. Increasing complexity gives you testability, configurability, late binding, flexibility, and adaptability.

Also, the objective measures of complexity are usually focused on coding within a method, not on the larger complexity of the relationships among classes and objects. Complexity seems objective, but it isn't defined at all layers of the software architecture.

"Testability" is also slippery. There may be objective measures of testability. Mostly, however, these devolve to test coverage. And test coverage isn't a very meaningful metric. How does the possibility of a production crash vary with test coverage? It doesn't.

You can blame complexity on a focus on testability. You can blame complexity on a lot of things. If you look closely at highly testable code, you'll find that it's also highly flexible, configurable and adaptable.

Singling out "testability" as the root cause of "complexity" misses the point.

The point is that there are numerous interrelated quality factors. "It Works" is a way of summarizing the most important ones. Other, less important ones, include adaptability, flexibility, maintainability. These additional factors usually correlate with testability, and they can also be described negatively as "complexity".

S.Lott
Funny... Usually the "the thing works" argument comes when people argue *against* testing ;-)
Treb
@Treb, exactly what bothers me about this answer - "It works" is not the only judge of value of software (though it's obviously the most important one). But "It Works" can be demonstrated without the complexity of "Testability"... It just won't be as simple to *unit test* it. Remember, the main audience of your code is NOT just the computer, but the person who reads your code next (even if it's you in 3 months).
AviD
@S.Lott, while in general, I agree with what you clarified in your edit, in this case (and in common TDD propaganda) the main (if not only) driver behind the design complexity (e.g. IoC/DI etc) was in fact testability. The architects (and documentation) were quite clear on that... So while what you say is true in principle, I'm not "blaming complexity on testability", the testability IS the direct source of this complexity. Again, I'm not arguing against complexity per se, but rather the blind devotion to testability at all costs.
AviD
@AviD: I doubt the documentation reflects the whole truth. I suspect that some manager demanded that someone "justify the investment" in TDD. I suspect that the documentation reflects a specific management bias toward justification.
S.Lott
+3  A: 

In my view, given a sufficiently large or important piece of software, adding some complexity to improve testability is worth it. Also, in my experience, the places where the complexity is difficult to understand are where abstraction layers are added to wrap a piece of code that is inherently untestable on its own (like sealed framework classes). When code is written from the perspective of testability as a first principle, I've found that the code is, in fact, easy to read and no more complex than necessary.

I'm actually pretty resistant to adding complexity where I can avoid it. I've yet to move to a DI/IoC framework, for example, preferring to hand-inject dependencies only where needed for testing. On the other hand, where I have finally adopted a practice that "increases" complexity -- like mocking frameworks -- I've found that the amount of complexity is actually less than I feared and the benefit more than I imagined. Perhaps, I'll eventually find this to be true for DI/IoC frameworks as well, but I probably won't go there until I have a small enough project to experiment on without delaying it unreasonably by learning new stuff.
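
To make "hand-inject dependencies" concrete, here's a minimal Java sketch (the class names are invented for illustration): the default constructor wires the production dependency, and only the tests use the overloaded one:

    // Hand-rolled dependency injection: no container, just an extra constructor.
    interface MailSender {
        void send(String address, String message);
    }

    class SmtpMailSender implements MailSender {
        public void send(String address, String message) { /* real SMTP call */ }
    }

    class Notifier {
        private final MailSender sender;

        Notifier() {                       // production path: hard-wired default
            this(new SmtpMailSender());
        }

        Notifier(MailSender sender) {      // used only by tests to pass a fake or mock
            this.sender = sender;
        }

        void notifyUser(String address, String message) {
            sender.send(address, message);
        }
    }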

tvanfosson
It seems to me that mocking frameworks remove the complexity from the code, and shift it towards the tests. Which is as it should be. And while I'm all for judiciously adding complexity to receive sufficient benefits, it just seems to me that this was too much complexity, for not enough benefit.
AviD
+1  A: 

From the description it sounds like the project lost track of YAGNI, developing large structures so that testing could be done if needed.

In TDD everything is justified by a test, so the fact that you have all of this IoC, DI, and AOP means it was either required as the simplest solution to make the existing tests pass or (much more likely) it is an over-engineered solution for keeping the code testable.

One mistake I have seen that leads to this kind of complexity is the desire to have the testing follow the design, rather than the other way around. What can happen is that the desire to keep to a certain hard-to-test design leads to the introduction of all kinds of workarounds to open the API rather than developing a simpler, easier to test API.

Yishai
YAGNI, indeed... But the IoC etc was not to pass tests, but rather to make the whole system (or rather, individual units thereof) testable. Over-engineered, indeed.
AviD
@AviD, you would think that the fact that the test was written and passed showed that it was testable enough ...
Yishai
+12  A: 

TDD done well can improve readability. TDD done poorly, that is without consideration of other important principles, can reduce readability.

A guy I worked with in the mid-90s would say "You can always make a system more flexible by adding a layer of indirection. You can always make a system simpler by removing a layer of indirection." Both flexibility and simplicity are important qualities of a system. The two principles can often live together in harmony, but often they work against each other. If you go too far towards one extreme or the other, you move away from the ideal that exists where these two principles are balanced.

TDD is partly about testing, partly about design. TDD done poorly can tend too much towards either flexibility or simplicity. It can push towards too much flexibility. The objects become more testable, and often simpler, but the inherent complexity of the domain problem then is pushed out of the objects into the interaction of the objects. We gained flexibility, and to the naive eye, it can look as though we've gained simplicity because our objects are simpler. The complexity, however, is still there. It's moved out of the objects, and into the object interaction, where it's harder to control. There are code smells that can act as red flags here - a system with hundreds of small objects and no larger objects is one, lots of objects with only one-line methods is another.

TDD done poorly can move in the other direction as well, that is, towards too much simplicity. So, we do TDD by writing the test first, but it has little impact on our design. We still have long methods and huge objects, and those are code smells that can red-flag this problem.

Now TDD will not by its nature knock you off-balance in either direction, provided it's well-applied. Use other practices to keep you on track. For example, draw pictures of what you're doing before you do it. Obviously, not all the time. Some things are far too simple for that. Some pictures are worth saving, some are just sketches that help us to visualize the problem, and we are, by varying degrees, mostly visual learners. If you can't draw a picture of the problem, you don't understand it.

How will this help with TDD? It will help to keep a system from going too far on the flexibility side, away from the simplicity side. If you draw a picture and it's ugly, that's a red flag. Sometimes it's necessary, but often when you draw the picture, your mind will quickly see things that can be simplified. The solution becomes more elegant and simplified, easier to maintain, and more enjoyable to work on. If you can't or won't draw pictures of your system, you're losing this opportunity to make your software more solid, more elegant, more beautiful to see and easier to maintain.

Applying this comes with experience, and some coders will never understand the value that a good balance provides. There's no metric that you can run that tells you you're in the right place. If someone gives you a prescribed method to arrive at that harmonious point, he's lying to you. More importantly, he's probably lying to himself without realizing it.

So, my answer to your question is 'yes': test everything without forgetting the other good principles.

Any good practice will throw you off-course if it's not balanced with other good practices.

Don Branson
Balance is of course important... but my issue with overly focusing on testing is LACK of balance. While I am plusoneing your answer, I don't think that the design SHOULD be driven by testing considerations, that should just be one of the factors to be weighed against the other considerations.
AviD
AviD, I agree completely. I think it's important to know where you're going before you head out. In that sense, a design is like a map. When I'm on a trip with a specific goal in mind, do I want the map before I start out, or do I want to draw it as I go? I think you and I would agree that I should at least have a rough map - I can fill in details as I go, perhaps, but I'll get better results in both cases if I know where I'm going first.
Don Branson
+1 For everything but the last bit: I have a problem with many many statements I've been seeing on SO lately that say something to the effect of 'applying [x] properly comes with experience.' As much as it might be true, it also (at times, not necessarily here) seems like a no-answer.
SnOrfus
@SnOrfus - interesting point. Yes, I think I've seen it as a non-answer. What I'm trying to express here is more along the lines of "practice makes perfect," that is, don't give up on the idea of seeking both flexibility and simplicity after one try. Keep at it. It seems important to point this out since I've seen coders (and their managers) latch onto simple answers and then not think about going any deeper.
Don Branson
+1  A: 

For better or worse, TDD has helped me break down my applications into more manageable components, where my ability to test items in isolation has forced me to keep things concise. The tests have also served as a good source of documentation when I introduce others to my code. Going through the tests can be a good way to review the workings of an application where things are isolated sufficiently that you can wrap your head around the functional parts. Another nice by-product is that when you have employed a design pattern in an application, the tests have a similarity to those of other applications where you have used that pattern.

All that said, it would be really silly to implement, let's say, the Command pattern and have only two commands when you know that the app will only ever execute two functions. Now you have saddled yourself with writing a bunch of tests. What was gained? You can always test public methods, but with a pattern in place you have extra complexity to deal with, and you have incurred technical debt with all the additional tests you have to maintain.
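
To illustrate the trade-off (a hypothetical Java sketch, not taken from any real app): the pattern version adds an interface and two command classes, each needing its own tests, where two plain methods would have done:

    // Over-engineered: a Command hierarchy for an app that will only ever do two things.
    interface Command {
        void execute();
    }

    class ImportCommand implements Command {
        public void execute() { /* import the data */ }
    }

    class ExportCommand implements Command {
        public void execute() { /* export the data */ }
    }

    // Sufficient: two plain methods, each directly testable.
    class DataJobs {
        void importData() { /* import the data */ }
        void exportData() { /* export the data */ }
    }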

Another factor to take into consideration is what level of architecture your team can support. Are all team members at the same level of understanding of TDD, or will there be a minority of people who can understand the tests? Will seeing a mock object make someone's eyes glaze over, and does that alone become the factor that prevents maintenance from being completed in a timely manner?

In the end, the scope of application needs to drive the design as well. Complexity for the sake of being "pure" is not good judgment. TDD does not cause this; rather, a lack of experience can.

David Robbins
Re Command pattern with 2 commands - that's what I'm seeing with IoC with a single strategy class to be instantiated. Indeed, the only place the "default" class is replaced is for testing.
AviD
It sounds like the developer of the code had a love affair with complexity. Many times "extensibility" becomes the main goal of the application and not the implementation of the processes themselves.
David Robbins
A: 

I have no idea what you mean by it being barely readable, as, even when using AOP and DI, each part should be easy to understand. Understanding the whole may be more complicated because of these technologies, but that is more a matter of being able to explain, either with models or text, how the application works.

I am currently working on an application where there is not a single unit test, so now I am starting to introduce DI to help make testing simpler. But it will make it harder for the other developers to understand the system, since different concrete classes can be plugged in, and you won't know which one is used until you look at the app.config file.

This could lead to them thinking the code is unreadable because they can't just flow from one function level to another easily, but have to make a side trip to see which concrete class to use.
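
A rough Java-flavoured analogue of that config-driven wiring (the answer refers to .NET's app.config; the file name, property key, and class names below are invented for illustration):

    import java.io.FileInputStream;
    import java.util.Properties;

    interface PaymentGateway {
        void charge(String account, long cents);
    }

    class GatewayLoader {
        // The concrete class is only known at runtime, so readers of the calling
        // code have to make a side trip to the config file to see what actually runs.
        static PaymentGateway loadGateway() throws Exception {
            Properties config = new Properties();
            config.load(new FileInputStream("app.properties"));
            String className = config.getProperty("payment.gateway.class");
            return (PaymentGateway) Class.forName(className)
                                         .getDeclaredConstructor()
                                         .newInstance();
        }
    }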

But in the long run this will be a more flexible and stable system, so I think it is worth the training that will be involved. :)

You may just need to see about getting a better system model for the application, to see how everything is tied together.

James Black
More flexible != more stable; on the contrary, this makes the whole system more brittle (more breakable parts). The issue arises when there is only a single concrete class (!), changed only during testing, and it still makes it harder to trace the code flow (or impossible, for automated tools). Overall, as you said, the whole system becomes more complicated, even if the single parts are easier... But shouldn't a good design do the opposite?
AviD
It is a matter of how testable you want it to be. For example, if you have a production configuration file that is largely stable and then another file for doing unit testing, then it becomes easier to know what is going on, as you can then model the production version. Automated tools do suffer due to using DI, and unless you are using Java on Eclipse, AOP seems hard for tools, but these techniques can make it easier to test changes to a system, as well as to test it. For me the flexibility for testing new code and comparing it to the current version is worth the problems.
James Black
+3  A: 

"Or did I miss something?"

There's an implied direct relationship in the question between how testable code is and how complex code is. If that's been your experience I'd say you're doing it wrong.

Code doesn't have to be more complicated to be more testable. Refactoring code to be more testable does tend towards code being more flexible and in smaller pieces. This doesn't necessarily mean more complex (which is already a loaded term) or that there needs to be action-at-a-distance.

Not knowing the details, I can only give generic advice. Check that you're not just using pattern-of-the-week. If you have a method which requires a lot of setup or complicated ways to override its behavior often there's a series of simpler, deterministic methods inside. Extract those methods and then you can more easily unit test them.
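
For example, a minimal Java sketch of that extraction (the names are invented): the interesting logic starts out buried in a method that needs a data source, and pulling the pure part into its own method makes it trivially unit-testable:

    // Before extraction, the formatting logic could only be reached through
    // buildReport(), which needs a real or mocked data source.
    interface InvoiceFetcher {
        long[] fetchAmountsInCents();
    }

    class InvoiceReport {
        // Harder to test: requires setting up an InvoiceFetcher.
        String buildReport(InvoiceFetcher fetcher) {
            return formatTotal(fetcher.fetchAmountsInCents());
        }

        // Easy to test: pure and deterministic, no setup at all.
        String formatTotal(long[] amountsInCents) {
            long total = 0;
            for (long amount : amountsInCents) {
                total += amount;
            }
            return String.format("Total: %d.%02d", total / 100, total % 100);
        }
    }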

Tests don't have to be as clean and well designed as the code they're testing. Often it's better to do what would normally be a nasty hack in a test rather than do a whole lot of redesign on the code. This is particularly nice for failure testing. Need to simulate a database connection failure? Briefly replace the connect() method with one that always fails. Need to know what happens when the disk fills up? Replace the file open method with one that fails. Some languages support this technique well (Ruby, Perl), others not so much. What is normally horrible style becomes a powerful testing technique which is transparent to your production code.
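
In a language without easy monkey-patching, a rough analogue is an anonymous subclass in the test that overrides only the method you want to fail. A hypothetical Java sketch (DatabaseClient and ConnectionException are invented names):

    class ConnectionException extends RuntimeException {
        ConnectionException(String message) { super(message); }
    }

    class DatabaseClient {
        void connect() { /* real connection code */ }
    }

    class DatabaseFailureTest {
        void connectionFailureIsHandled() {
            // Test double: connect() always fails, everything else is unchanged.
            DatabaseClient alwaysFails = new DatabaseClient() {
                @Override
                void connect() {
                    throw new ConnectionException("simulated outage");
                }
            };
            // ...exercise the code under test with alwaysFails and assert
            // that it reports the failure instead of crashing.
        }
    }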

One thing I will definitely say is to never put code in production which is only useful for testing. Anything like if( TESTING ) { .... } is right out. It just clutters up the code.

Schwern
Other than the second-person form (it's not MY system ;-) ), I agree with the principle - it does seem like they're doing it wrong (loved that site, btw), "they" being anyone who complexifies the system for testing's sake.
AviD
A: 

I'm reviewing a very complicated application, made so by extensive use of design patterns that overly favor testing, e.g. IoC, DI, AOP, etc...

In this case testing is not the problem; it's the design patterns and the overall architecture that are at fault, something commonly criticised by Joel and Jeff in discussions against Architecture Astronauts. Here, we have something that has been decided based on 'wow, cool architecture', and if 1 design pattern is good, 2 must be great and 3 must be fantastic - let's see how many patterns we can create this app out of.

Testing may well be essential to make those patterns work reliably (hmm, that says something about them, really), but you shouldn't confuse testing being good with some architectural designs being poor.

So, no, feel free to focus on testing without worry - e.g. Extreme Programming is a very simple development methodology that focuses on testing. If you'd written your app in such a freeform way you might not have gotten into this mess; the mess you have is not the fault of test-driven development, but of the design choices that were made.

If you can start scrapping it, do so - maintainability is the most important factor in software; if it isn't easy to modify, then you could seal it and start over, as it will probably cost you more to maintain it.

gbjbaanb
As I've noted elsewhere, the system architects didn't choose to do TDD because of the complex architecture; they chose the complex architecture *in order to do TDD*. Of course, I'm not blaming testing for the "mess" - I'm blaming the overriding focus on TDD above all other considerations.
AviD
As I said, you can do TDD with any architecture - even the simplest design can be tested to destruction. They chose the architecture for the wrong reasons. In the old days we used to have test scripts for everything, and automated tests as much as we could. You don't need fancy patterns to do TDD. XP promoted it, and XP has been around since 1996!
gbjbaanb
Right, they chose this architecture because the overriding concern was *ease* of testing. (And not to nitpick, but IoC has been around since 1988, according to http://en.wikipedia.org/wiki/Inversion_of_Control#Background ;-) )
AviD
+3  A: 

I've seen first-hand web sites that passed all unit tests, passed all automated interface tests, passed load tests, passed just about every test, but clearly and obviously had issues when viewed by a human.

That led to code analysis, which discovered memory leaks, caching issues, bad code, and design flaws. How did this happen when more than one testing methodology was followed and all tests passed? None of the "units" had memory leaks or caching issues; only the system as a whole did.

Personally I believe it's because everything was written and designed to pass tests, not to be elegant, simple and flexible in design. There is a lot of value in testing. But just because code passes a test, doesn't mean it's good code. It means it's "book smart" code, not "street smart" code.

Brent Baisley
Agree, though I wouldn't say that it's even book smart (which implies internal correctness, i.e. best practices). You do raise a good point, that this is actually similar to what's wrong with (some) educational systems - schools teach kids to pass tests, even if the kids come away without knowing anything.
AviD
+1: I think this highlights another part of testing which may have been deficient: Hallway Testing. If there's only 1 thing I have ever believed from Joel is Hallway Testing.
SnOrfus
+1  A: 

If the app you are writing is <10 lines of code, then yes, adding tests increases the complexity massively. You can LOOK AT IT and test it manually and you'll probably be fine. At 100 lines, not so much; at 1,000 lines, not so much; at 10,000 lines, 100,000 lines ... etc.

A second axis is change. Will this code /ever/ change? By how much? The more the code will change, the more valuable tests will be.

So, yes, for a 150-LOC app that is an edi-format-to-edi-format conversion script that runs in batch mode and is never going to change, heavy unit testing might be overkill.

Generally, for large apps, I've found that changing the code to be testable improves the quality of the design and the API. So if you are writing something much larger or that will be developed iteratively and think (automated) unit testing has high cost/low value, I'd take a serious look at why you believe that to be the case.

One explanation is that your boss has pattern addiction. Another might be that you see patterns and testing as a yes/no all-or-nothing discussion. A third is that the code is already written and it's the re-write to-be-testable that you are dreading. If any of those are the case, I would suggest surgical approach - focus on a few high bang-for-the-buck tests that add value very quickly. Grow your test suite slowly, as the code progresses. Refactor to patterns when you see value and simplicity - not complexity.

Matthew Heusser
+1  A: 

A testable product is one that affords the opportunity to answer questions about it. Testability, like quality, is multidimensional and subjective. When we evaluate a product as (not) testable, it's important to recognize that what is testability to one person may be added or unnecessary complexity to someone else.

A product that has lots of unit tests may be wonderfully testable to the programmers, but if there are no hooks for automation, the product may be hard to test for a testing toolsmith. Yet the very same product, if it has a clean workflow, an elegant user interface, and logging, may be wonderfully testable by an interactive black-box tester. A product with no unit tests whatsoever may be so cleanly and clearly written that it's highly amenable to inspection and review, which is another form of testing.

I talk about testability here. James Bach talks about it here.

---Michael B.

Michael Bolton