Give a short, concrete answer: why did unit testing NOT work out for you (on your project)? In particular, will you try it again on a different project?
Because my project was big and already started, and it was hard to build up the base of unit tests to make the effort really worthwhile.
Because the architecture of ASP.NET WebForms/[your platform] makes applications difficult to test.
Because not everyone on the team understood the testing strategy/knew how unit tests are supposed to work.
It didn't work out for me on my project because I didn't use it. I didn't use it because I didn't understand it.
I picked up a book on it, and won't make the same mistake twice.
Undisciplined developers who stubbornly refuse to change working habits can make this not work.
Missing separation of concerns. GUI intermingled with business logic code.
All on a project that was already started
Unit testing has failed for me in the past (or rather, I failed at unit testing) because I have not thought far enough ahead and not tied down the design early enough. By the time the system was well enough specified for unit testing to be beneficial, it was already too much work for the nature of the project to implement it.
THINK AHEAD! - that was my mistake.
Reluctant colleagues - and the lack of persuasive ability to talk them round.
A particularly annoying area of development that is really badly suited to unit testing is the growing trend for frameworks. Unless the framework - e.g. an MVC solution - comes with mocks, it is ludicrously difficult to unit test code developed to run on top of that framework. I generally just give up on unit testing in such circumstances.
Started writing tests after the code was written and got frustrated because I felt like I was wasting my time. The solution was to switch to test-driven development where my unit tests drive my design. Now each test is progress instead of waste. I'll never go back to developing tests afterwards.
EDIT Now, even on projects that are fixes or upgrades, I still do TDD. I won't go back and put in tests for existing code, necessarily, but for new code or bug fixes I'll write at least enough tests to ensure that my changes don't break anything. Tests are written first to verify/protect the existing functionality, then to introduce new (fixed) behavior.
I employed unit testing in a couple of projects I worked on (web applications, using business objects + stored procedures to perform CRUD operations on SQL databases).
Heavy use of SQL prevented us from creating really fine-grained tests. To properly test a database-interaction method you need data in the database (that is, if you're intending to automate tests). And data can only be added via another method... so you're testing both.
I ended up either writing not-so-fine-grained tests that verified all the inter-related methods together - or living with the fact that a single error brought down multiple tests (for those of you thinking about mocking: some of the bugs found during unit testing were in the SQL itself, so we ruled out mocking on purpose, and I'm glad we did).
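To make the trade-off concrete, here is a minimal sketch of the "seed the data directly" approach, assuming Python's unittest with an in-memory SQLite database standing in for the real SQL back end; the UserRepository class and its schema are invented for illustration, not the original code:

```python
import sqlite3
import unittest


class UserRepository:
    """Hypothetical data-access class under test."""

    def __init__(self, conn):
        self.conn = conn

    def get_user_name(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None


class UserRepositoryTest(unittest.TestCase):
    def setUp(self):
        # An in-memory database keeps the test self-contained and fast.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        # Seed the data with plain SQL instead of going through another
        # method of the class, so this test covers only the read path.
        self.conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")
        self.conn.commit()

    def tearDown(self):
        self.conn.close()

    def test_get_user_name(self):
        repo = UserRepository(self.conn)
        self.assertEqual(repo.get_user_name(1), "alice")
        self.assertIsNone(repo.get_user_name(42))


if __name__ == "__main__":
    unittest.main()
```

Seeding with raw SQL in setUp keeps the read method under test on its own, at the cost of duplicating a little schema knowledge in the test - which is exactly the kind of compromise described above.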
Unreasonable deadlines that barely allow enough time to write the production code. I have used unit testing successfully on previous projects and will continue to do so whenever there is enough time. I guess it's a matter of helping management and the customers to see the green bars at the end of the tunnel.
Ego: most team members felt their code was good enough not to require unit tests. At the end of the day it's a cultural change.
Tightly coupled code made it impossible to test one unit without testing the entire system - which in turn depended on other systems.
Unit testing did work for me and made my life easier on my last project.
But because my object model has CRUD functionality built right into the objects, testing against a database was an unbelievable PITA! Next time, the CRUD is going to be separate, and I've got to look into mocks.
It is very hard to write unit tests for low-level hardware development.
How do you unit test your interrupt handler without writing the full DMA logic that will trigger that interrupt?
Actually, we are doing unit testing - the units are just quite big :)
It didn't work for me in a project where we had shared code ownership (usually a good thing!) and I was the only one writing and executing unit tests.
I tried it in an existing project with highly coupled classes and not much cohesion. Unit testing only worked for me here in new pieces of code.
On the first project where I tried unit testing, things went sour because I didn't use TDD. I added all the unit tests later, and lots of times I didn't have time left to add the tests or had to rewrite lots of code to be able to test small portions of it. Unit testing cost more time here than it saved, although the few tests we had did add to the quality of the project.
In short, you have to have support from coworkers. You have to have a codebase that supports unit testing (or you can add it in by refactoring the old codebase piece by piece), and you have to work test-first, at least when figuring out APIs and design. It also helps to understand that unit testing is more of a design activity than a testing activity.
I've never used it. I've never felt I needed it, and it looks like far more of a PITA than just dealing with the problem it's supposed to solve. I refuse to take the time to learn and evaluate it in detail because of this instinct - I'm prioritizing things that I think will be useful.
Now, before you advocates explode, please feel free to explain to me why I'm wrong in the comments to this answer (as politely as you can manage, please - my position is not set in stone, and I'm just trying to be bluntly honest above). Give me a reason to love the object of your affections, and not an "and the advocates are all code-religious idiots" addition to my list :)
If you're adding unit tests to an old "working" system, you'll face these issues:
To make the system testable, you'll have to change things that already "work". Some of your changes will break the system and if the team hasn't bought the testing concept, yet, they'll blame it on the "testing hype".
Your tests will show existing issues in the system which were gleefully ignored in the past. Nobody likes it when you find bodies in their backyard and guess who will be blamed? The long standing member of the team (who wrote the bug) or the new guy?
In all projects, time is scarce. It's by design: If there was spare time, your boss would find you something productive to do. If developers don't get the feeling that the tests help them to meet their goals (by reducing unnecessary bug hunting and by greatly improving self-confidence and code quality), they will quickly stop doing this "futile crap".
Lots of unit tests were written, but they were not maintained vigorously enough.
All unit tests passed at the time when they were written and checked in.
The unit tests were run frequently, but when some of them started failing (often because the code changed without updating the corresponding unit test) we did not fix the unit tests quickly enough.
The situation was allowed to deteriorate and now we have so many unit tests failing that they have become almost meaningless.
Running a unit test suite before a check in to make sure I am not about to break anything is useless because lots of tests are failing with and without my changes, so it is impossible to see which failures were introduced by my changes.
We confused Unit tests and Functional tests. Half are functional, half are real unit tests. Thus, our unit tests take WAY too long to run (over an hour). This makes our tests much less effective.
One of the systems I work with is a COTS (Commercial Off The Shelf) package with an extensive DSL (Domain Specific Language) used for automation (think VBA, and you won't be far off).
The vendor provides no support for unit testing, and the language has no inheritance, no reflection and no variables with local scope. It does allow for file inclusion, but each program (at least it supports multiple programs) can only have one "start" method in it.
There are only really three ways to implement any kind of unit testing:
- Build a simulator emulating the core software.
- Manually construct scripts that test each important function, accessed through the file include mechanism, and build some way of reconciling results
- Build a code generation tool using comment-based annotation processing to automatically create scripts to run any annotated test methods and reconcile results.
In all cases, once we had created our framework to make testing possible, we'd then need to start creating tests - and figuring out a way to test anything, without trampling over live data or having dependencies on other tests, would be extremely, how shall we say, interesting.
All three methods require an enormous investment of effort, and we have so far been unable to justify making the large investment of time.
Besides, this system will get turned off 'real soon now'!
Unit testing did not work out for me on one project because the other developers started to act as if the test suite was the real application deserving of their attention and love. They spent most of their time on the tests, and the actual application slipped well past the deadlines. As a note of caution re: unit tests, when a customer is literally yelling at you because an application is not done, it is inadvisable for a junior developer to blurt out "but we've written 5 times as much test code as actual code". Whether you use unit tests or not, you have to admit that it can be a tough sell to the customer.
Also, the unit testing in this case was largely a failure because the tests were mostly aimed at aspects of the application that were never going to change, and ignored the weakest points. In this case, the weak points were the availability of network resources and the state of the data in the production environment. All of this was mocked in the test suite, and essentially useless. This incredibly heavily unit-tested application failed miserably in the field.
When I first tried it way back when, when NUnit was pretty new, it was difficult to handle testing of UI elements in ASP.NET and to mock up 3rd-party tools or applications (Outlook, etc.) that the app needed to talk to. I suspect that now, with better mocking tools and UI testing, it would be easier, but at the same time, going through the effort of setting up such mocking frameworks does take some additional time.
I'm not saying that it's not worth it, the last project I was on where we used them, we had them all hooked up to CruiseControl.NET and it was awesome to see all the builds and tests run on each checkin....
TDD has never made sense to me because:
- Who tests all this test code you write?
- Vast majority of defects are UI related and won't be found by unit tests
- The rest of the errors are due to missing functionality: if you forgot to write it, you would also forget to test it (e.g. oops, no validation, or oops, SQL injection)
- Every change now requires changing the real code and some test code - double work
- Impossible to unit test a complex system without leaving a certain amount of artifacts in the production code to support testing
- Provide a false sense of security to many people because they think the codebase can't be messed up because there are tests.
- I think TDD grew up with dynamic languages and is the replacement for not having compile time checking
- I've never had a unit test catch anything but the most simplistic error that would have been caught anyway if the developer had bothered to even run the code.
1) Legacy code. 2) Strong dependency on a framework (WebForms, for example) that doesn't enable easy testing - too much work and too many workarounds that clutter the code just to gain testability.
Unit testing did not work for me (a few years ago) because the code I wrote was not designed to be tested. Lesson learned: take the time to hone your code like a fine piece of woodwork.
Unit testing isn't a silver bullet.
Our product at work is a 40 kloc (Python) application with ~120 kloc of tests, and the full test suite (including functional tests that remote control the GUI) take hours to run (distributed on several integration machines).
We have some testing problems:
They take too long to run - we often just run the unit tests to check in, which occasionally leads to broken integration builds and backing out commits.
Some functional tests fail spuriously. Sometimes this is due to weird screenshot failures on Vista, sometimes tests are sensitive to timing differences. When we see these we add retries or try to make the interactions more deterministic, but chasing these false alarms can cost a bit.
Some unit tests are written in a very 'mock-heavy' style - they know too much about the implementation, so that when it changes you have to change the tests too, which can be a real pain. It's tricky to find a balance between mocking out too much on one hand and having tests that test too many layers on the other, but we're still learning (see the sketch below).
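As a rough illustration of the difference (in Python, with invented names - not our actual production code), compare a test that pins the exact calls made against one that only checks the observable result:

```python
import unittest
from unittest import mock


def total_price(items, tax_calculator):
    """Hypothetical code under test: sums item prices and adds tax."""
    subtotal = sum(item["price"] for item in items)
    return subtotal + tax_calculator.tax_for(subtotal)


class BrittleMockHeavyTest(unittest.TestCase):
    def test_total_price(self):
        items = [{"price": 10.0}, {"price": 5.0}]
        calc = mock.Mock()
        calc.tax_for.return_value = 1.5
        total_price(items, calc)
        # Pinning the exact call ties the test to the current
        # implementation: change how the subtotal is computed or batched
        # and this assertion breaks even though behaviour is unchanged.
        calc.tax_for.assert_called_once_with(15.0)


class BehaviourFocusedTest(unittest.TestCase):
    def test_total_price(self):
        # A trivial hand-written stub keeps the test about the result,
        # not about which internal calls were made.
        class FlatTax:
            def tax_for(self, subtotal):
                return subtotal * 0.1

        self.assertAlmostEqual(
            total_price([{"price": 10.0}, {"price": 5.0}], FlatTax()), 16.5
        )


if __name__ == "__main__":
    unittest.main()
```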
However, they're definitely a net win. When we fix defects, we add tests to avoid regressions (as well as the tests we create as we build features), so changes to a complex application tend not to break other things.
That gives us a huge safety net, of the kind you couldn't get from compiler checks (well, the Haskell/OCaml people might say otherwise, but you certainly couldn't get it from a Java compiler).
We can make big changes and be confident that we haven't broken things. I find it hard to imagine working on a largish application without the security a large, well-maintained test suite gives. Well, working effectively on one.
It doesn't save you from bugs - we (or our users!) still find things we didn't anticipate that break. But it helps.
I have a client whose codebase isn't well integrated into the IDE (eclipse) and so the code is essentially uncompilable on the desktop. I can't get the unit tests I write to work without major work, which I can't justify yet. So I work on fixing the project build path slowly, when I get the time.
I know you are looking for complete failures, but I have found that unit tests are not enough though they certainly help. I work in moderately large systems: 1 million lines of code, multiple services per server, around 100 servers per cluster. We have thousands of unit tests, but those cannot capture the interactions between all these components. So in addition, we have a few thousand integration tests that run the complete system against pre-defined scenarios. In addition, we run new versions of the system in parallel with production against carbon copies of the live data. And despite all that, we still have problems a few times each year.
In another really tiny project (2 kloc) I wrote 3 times as much test code as actual code. I even pushed the code coverage to 100% (at least according to gcov), yet we still found a few bugs. That might be considered a success (if the defect ratio is acceptable to you) or a failure (if you think that code coverage and unit testing are enough).
Because some programmers on projects I've been on didn't understand unit testing or how to implement it, and once shown, had the hand-slap-forehead moment. Also, the tooling didn't really provide the greatest support. Not everyone can afford VSTS.
Also, I used to write console applications to test my code, until someone pointed out what I was doing, then I realized I'd been doing it all along.
For many, TDD comes across as backwards, until you show them the "hello world" of TDD with Unit Testing, light bulbs go on, red ropes part, and life is good.
I think the two biggest reasons for the absolute failure of our unit test environment on my team were:
1) No permission to add the unit tests as a post-build step for each library, ergo nobody ever ran them after the initial implementation, ergo they no longer compiled after a few months.
2) People mistaking the unit tests for test apps with all kinds of database connections, file streaming, etc., leading to extremely long running times that encouraged everyone to exclude everyone else's tests except their own from the executables.
About 18 months ago, I tried to add unit tests to a large pre-existing project, using ruby's Test/Unit (it's basically exactly the same as JUnit and NUnit). I'd used NUnit quite a lot before, but had never seen much benefit out of it, and was pretty jaded about the whole unit test/TDD thing.
I ended up with a bunch of 'shallow' tests, which just checked that the software did what it was doing, not that it did what it was supposed to do, and were very tightly coupled to the code.
After about a month or 2, these became useless and unworkable, and I deleted them all. My checkin comment was "EPIC TEST FAIL" or something like that.
About 12 months ago, I built a large new section of a website, but this time I wrote the unit tests as I went along, and used rspec. It's worked out as a great success, having caught many subtle and annoying bugs, and it hasn't gotten in the way of refactoring the code. I attribute this to 2 things:
- Writing tests at the same time as the code, not trying to shoehorn them on later
- Learning to focus on specifying desired behaviour, rather than asserting what already happened. I will forever be in debt to rspec for having taught me this. Go rspec!
Because the project didn't use unit testing from the start. Now it has grown so huge, that it becomes painful to write unit tests for the existing codebase.
Yes, it works out for new stuff (I've tried it and loved it), but the lack of enthusiasm of my seniors to enforce unit testing has made my effort useless.
RWendi
In our case the development team was relatively new to the notion of refactoring as well. Hence, it was impossible to keep the tests clean and understandable. Instead of having duplicated code only in the application we had complex, duplicated and intertangled code in the tests as well.
If you don't keep your test code nice and DRY it may grow into a big ball of bad-smelling mud. This will probably only slow the project down instead of giving the team increased development velocity.
It didn't work out for me until I separated my project code into libraries. After that, it has been super-successful, because I've had time to maintain it. I currently have about 1100 tests that run in 4-6 seconds.
As it is now, if I add new functions to this library, I start by writing the tests -- because this allows me to decide how the functions should act, then I write the functions to suit these tests.
Also, for functions that, for example, convert a phone number to a valid MSISDN number, if I change them, I can run the tests and see if anything depends on the old behaviour - see whether the changes break existing dependencies and expectations of behaviour.
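A minimal sketch of what that looks like in practice, assuming Python's unittest; the to_msisdn function and its normalisation rules are invented for illustration, not the real project code:

```python
import unittest


def to_msisdn(number, default_country_code="47"):
    """Hypothetical normaliser: return a digits-only MSISDN with country code.

    The rules here are invented for illustration; the real function would
    encode whatever the project's numbering plan requires.
    """
    digits = "".join(ch for ch in number if ch.isdigit())
    if number.strip().startswith("+"):
        return digits
    if digits.startswith("00"):
        return digits[2:]
    return default_country_code + digits


class ToMsisdnTest(unittest.TestCase):
    # Written before the implementation: each case states how the
    # function should behave, so a later change that breaks an existing
    # expectation shows up immediately.
    def test_plus_prefix_is_stripped(self):
        self.assertEqual(to_msisdn("+47 912 34 567"), "4791234567")

    def test_double_zero_prefix_is_stripped(self):
        self.assertEqual(to_msisdn("004791234567"), "4791234567")

    def test_local_number_gets_default_country_code(self):
        self.assertEqual(to_msisdn("912 34 567"), "4791234567")


if __name__ == "__main__":
    unittest.main()
```

Because the expectations are written down first, a later change to the function that breaks one of them fails the suite immediately.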
The only part that could be better is the database interface. It does all kinds of tests, but some it can't run (like sending voice and SMS messages). For these to work we would need to set up a 100% up-to-date test system. But it would be possible.
Absolutely great for testing against code that has a long lifetime.
The most common reason it doesn't work is because the unit testing code is treated as a third class citizen.
That is, the code is absolutely not written to be maintainable, which quickly becomes a ball and chain. As more and more unit tests are added, older ones become obsolete and aren't fixed. So you end up with a mess of test code that's virtually useless and doesn't really test anything. But there's lots of it!
The key to success is to treat your unit testing code with the same respect as your production. Not any more, not any less, just the same.
As well, always decouple the testing code from the production code. As soon as you start adding testing code in your production code (rather than in your testing code), then you're asking for trouble...
On the one hand, the project succeeded - there were almost no code-related failures. On the other hand, the build took almost four hours to run and there were thousands of tests, Selenium and JWebUnit based, which were incredibly difficult to fix when they broke. Why? Because clever and talented developers had seen redundancy and replication, and had built clever abstraction layers over the top of the tests, which had the unnoticed effect (until years later) of completely enshrining framework and data dependencies. We eventually tried to move to SeleniumGrid and it was almost impossible to parallelize the tests, as the interdependencies went so deep. (We 'fixed' the problem by buying a much more powerful build server. Let's hope Moore's law keeps up with our test proliferation).
So I will try again on the next project. But next to DRY and YAGNI and all those good old commandments, I'm putting up TORT:
Tests Oughta Repeat Themselves.
Oh, and before anyone jumps in and says 'Selenium is FUNCTIONAL, not unit testing', we found that once the testing bug had spread we were testing UI from a browser with completely mocked out Controllers. Caught a lot of navigational and AJAX problems that way.
When I'm programming some algorithmic or mathematical stuff which is hard enough to design, I create my test first, which forces me to think "what do I want to do?" Then I create my solution, so I'm sure my job is done when my tests pass. I no longer need to create a console application, or step through my code in the debugger (or only a little).
Unless my code is published (i.e. it is a framework for public use), if the problem is trivial, I do not write unit tests (if you are a good developer, your code should always be trivial).
Sometimes, I write tests against my interfaces, so classes which implement the interface automatically get some tests to check that they are implemented correctly (for example, if you implement IList on your class, make sure that adding any non-null item increases Count by 1).
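Here is a rough Python analogue of that idea (a hypothetical sketch, not tied to any particular framework): a reusable contract-test mixin that every implementation's test case inherits, so each new implementation automatically gets the interface checks:

```python
import unittest


class ListContractMixin:
    """Reusable contract tests: any implementation's test case inherits
    this mixin and only has to say how to construct an empty instance."""

    def make_empty(self):
        raise NotImplementedError

    def test_add_non_null_increases_count_by_one(self):
        lst = self.make_empty()
        before = len(lst)
        lst.append("item")
        # the contract from the answer: Add raises Count by 1
        self.assertEqual(len(lst), before + 1)

    def test_added_item_is_contained(self):
        lst = self.make_empty()
        lst.append("item")
        self.assertIn("item", lst)


class BuiltinListContractTest(ListContractMixin, unittest.TestCase):
    def make_empty(self):
        return []


# A second (hypothetical) implementation would just add another subclass:
# class MyFancyListContractTest(ListContractMixin, unittest.TestCase):
#     def make_empty(self):
#         return MyFancyList()


if __name__ == "__main__":
    unittest.main()
```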
But I think unit tests are really good documentation (I've learned many things about the Unity framework by looking at its test suite).
I see unit testing as an extension of compiler errors.
Because the developers did not understand the requirements and the specification of the business logic well enough to design correct test cases. It was especially hard for the developers to determine corner cases, such as the expected behaviour that makes business sense around boundary conditions. So most of the effort was spent on passing the specified coverage threshold.
So far, unit testing did not work out for my projects on Microsoft SharePoint. The issue is that essentially I am doing integration testing (I have no experience with mocking frameworks, and mocking SPWeb and all its dependencies looks like a behemoth).
If you want a nice little bullet point:
- too many additional dependencies
I still want to do proper Testing, be it Unit or Integration testing, but I really have to schedule some time for this.
I love unit-testing. It has saved me a lot of trouble.
But there are some situations where it doesn't apply, and in my case it was when I wrote a relatively complex driver.
For a simpler driver, I could create a mock object which simulates the device's response (see the sketch at the end of this answer).
But there are some devices, e.g. a Network Processor (NP), where it doesn't work well.
Of course we can simulate the NP, but the higher the layer at which you simulate it, the less effective the unit test will be.
Some NPs have a pretty good simulation, but I would call that an integration test rather than a unit test, since there are so many layers of abstraction before I can test a specific object.
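For the simple-driver case mentioned above, a minimal sketch (in Python, with an invented register protocol; real driver code would of course live much closer to the hardware) of a hand-rolled fake device might look like this:

```python
import unittest


class FakeDevice:
    """Stand-in for the hardware: replays canned register reads and
    records register writes so the driver logic can be checked in isolation."""

    def __init__(self, canned_reads):
        self.canned_reads = dict(canned_reads)
        self.writes = []

    def read_reg(self, addr):
        return self.canned_reads[addr]

    def write_reg(self, addr, value):
        self.writes.append((addr, value))


class Driver:
    """Hypothetical driver under test: reads a status register and
    acknowledges the device when it reports 'ready'."""

    STATUS_REG = 0x00
    ACK_REG = 0x04
    READY_BIT = 0x01

    def __init__(self, device):
        self.device = device

    def poll_and_ack(self):
        status = self.device.read_reg(self.STATUS_REG)
        if status & self.READY_BIT:
            self.device.write_reg(self.ACK_REG, 0x1)
            return True
        return False


class DriverTest(unittest.TestCase):
    def test_ack_written_when_ready(self):
        dev = FakeDevice({Driver.STATUS_REG: 0x01})
        self.assertTrue(Driver(dev).poll_and_ack())
        self.assertEqual(dev.writes, [(Driver.ACK_REG, 0x1)])

    def test_no_ack_when_not_ready(self):
        dev = FakeDevice({Driver.STATUS_REG: 0x00})
        self.assertFalse(Driver(dev).poll_and_ack())
        self.assertEqual(dev.writes, [])


if __name__ == "__main__":
    unittest.main()
```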
Every programmer has to ask themselves the question: is there any bug that could be caught by unit testing that couldn't be caught just as easily by simply using the bloody software?!
Every hour you spend writing tests is an hour you could have spent fixing bugs.
I think unit testing can fail when people treat it as a first class citizen, which is (seemingly to me) opposite the thinking of many TDD folk.
By this I mean, that unit tests over time have their own maintenance costs, and bring people to fear re-factoring things because they would also have to refactor tests. Being able to intelligently refactor code with the least effort possible is to me of the highest importance.
If a unit test is starting to get in the way of needed changes to the real code, or taking up too much time maintaining at the expense of project development, get rid of it. You can bring something like it back later which will probably be better thought out anyway.
This is especially the case in the transition from a project undergoing heavy development, to maintenance - this is exactly when many tests start to become more a burden than a boon. In these cases you can think of the unit tests you have built as scaffolding around a building under construction, to be discarded as better ways of working on the building are brought in and day to day challenges appear that demand new tests.
- Team members didn't understand the purpose of unit tests, how to write them, or how to use the tools.
- Tests are written against concrete classes instead of mocked classes.
- Tests are written to ensure that code calling a web service works. When the web service breaks or isn't fully functional, the test breaks.
- Tests do unhappy things like pull every single user out of LDAP.
- Tests take too long to run and half of them fail, so we rarely run them.
- UI code is difficult to unit test.
- People fall into old ways of testing, such as testing the UI manually or creating their own command-line based test client.
- I have no way of enforcing that unit tests be written.
- Artificial deadlines leave us no time to write unit tests.
It does not work because:
- How do you unit test a GUI? Things like GUI responsiveness are hard to unit test.
- Incomplete unit tests: some scenarios are missed and are not tested. Although this is usually programmer oversight, not a problem with the unit testing practice itself.
I wrote a few tests for my C++ code recently, which were just short int main() functions that called some of my functions and checked the results with assert(). I compile these tests using a Makefile and call them on make test.
Would you call that unit testing, too? Should I switch to one of the test frameworks? What would the advantages be for me?
I have had 3 major shifts in the goal posts for the project. The original scope is very fluid due to customer business model changing.
Unit tests written and re written during the first two changes were dropped in the third iteration due to time constraints and code base being largely rewritten.
I will seriously re-consider using unit tests in my next project without a VERY FIRM specification
Unit testing, which I was in favour of because it was an automated form of the regression testing I'd been using for years, failed on one significant project because nobody could tell me how to do it for an embedded, soft real-time environment. I was developing stereoscopic imaging for QuickTime and had no idea how to use unit testing or how to fake the environment.
With lots more experience, particularly working for a CTO who had a strong background in unit testing image processing applications, I would now at least write unit tests for the core algorithms by refactoring the code to be able to invoke them in isolation.
The creative step, it embarrasses me to say, that I didn't think to take was saving image files of acceptable results and comparing against them.
It would still have required a lot of infrastructure being written, to populate data structures in the manner being filled by QuickTime and so I'm still not 100% convinced unit testing would have been suited to that project.
In summary - if you're embedded in a complex environment with rich data, mocking may be too hard.
Laziness: it's very hard and time-consuming to update all the tests in a big and recently changed project
Junior developers: no experience with separation of concerns, modularity, mocking, interfaces
GUI: never found a usable solution for this layer
Legacy code
When working with a team, the most difficult part of TDD is getting your other team mates to do TDD. Making a "cowboy coder" believe in TDD is a daunting task. They push back on TDD just because they are downright lazy and they just don't believe in writing loosely coupled code. TDD for the project crashed and burned because of the lazy cowboy coders!
Having to deal with bloated application servers and other "Enterprise" software with its claws in everything that makes it hard to write independent tests.
The main reasons why unit testing hasn't worked for me on the projects I have been on:
- Management has not bought into unit testing. They would rather developers spend time on writing the application than writing tests.
- Developers are not educated on writing unit tests and what it takes to write a good unit test, so that it can be run repeatedly without having to be maintained every time you run your test suite.
- The unit test build not being run nightly - or rather, no continuous integration build on the project.
- When unit tests broke no one took the time to fix them.
These are all of my reasons. I am a big believer in unit testing and I personally will not write a piece of application code without a unit test. There is a fine art to writing tests so that they don't take up most of your development time.
I've been on a number of projects that employed unit testing. To a number of these projects, it added tremendous value. To others, it added little appreciable value. In no case, however, would I say the fault was with the concept of unit testing.
The cases in which an attempt at adding unit tests was either unsuccessful (meaning they didn't end up being written) or that they added little appreciable value were due to two primary factors:
* The tests were either written after all the code had been written, or we were trying to write tests for new code that was based on legacy code originally developed without unit tests. Such code is almost invariably written in a fashion that makes writing unit tests extremely difficult. Typically it has a lot of stuff going on, with components that all heavily depend on each other and don't expose the things necessary to write tests. Overcoming this is a huge, huge undertaking, and not something that was easily supported given the time and cost constraints of these projects.
* Developers did not understand proper unit testing. Often, they would write "unit tests" that did nothing more than execute a swath of code without ANY asserts. Even after trying to teach these people proper unit testing, I'd still see unit tests that had very little real value.
In every project in which meaningful unit tests were written from the beginning, helping define and inform the codebase, they added tremendous value. I've also had limited success adding them to existing apps that previously lacked them, but this involved a great deal of buy-in from management for the extra time required to refactor the code to accommodate them. In those cases, we didn't try to test the old code, only to refactor as little as possible so that the new code could have tests written for it. There is an important distinction there.
My guess is that unit testing often didn't work because it wasn't introduced at the beginning of a project.
With unit tests, it's very important to refactor your code so that it is easy to test. For example, a single class that has database access, business logic, and some display logic in it is going to be very difficult to unit test. This is easy to get right when the code isn't written yet. It's much less trivial when there's a huge chunk of existing code, and tests are being added after the fact (see the sketch at the end of this answer).
Refactoring code without good unit test coverage is very difficult.
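A rough sketch of the kind of split that makes this testable, assuming Python and invented names (OrderService, and a repository with a find_unpaid method): the business logic takes its collaborator as a constructor argument, so it can be tested without a database or any display code:

```python
import unittest
from unittest import mock


# Hypothetical split: the repository owns data access, the service owns
# business logic, and rendering lives elsewhere. Only the service is
# shown; because its collaborator is injected, it can be unit tested
# without a real database.
class OrderService:
    def __init__(self, order_repository):
        self.order_repository = order_repository

    def outstanding_total(self, customer_id):
        orders = self.order_repository.find_unpaid(customer_id)
        return sum(order["amount"] for order in orders)


class OrderServiceTest(unittest.TestCase):
    def test_outstanding_total_sums_unpaid_orders(self):
        repo = mock.Mock()
        repo.find_unpaid.return_value = [{"amount": 10.0}, {"amount": 2.5}]
        self.assertAlmostEqual(OrderService(repo).outstanding_total(42), 12.5)


if __name__ == "__main__":
    unittest.main()
```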
Unit tests are not the only level of testing required to ensure confidence in the quality of your application. They are only the lowest level of testing; their primary value is to empower refactoring.
This is why adding unit tests to an existing body of code is often a very risky activity. If the code isn't written in a test friendly manner, refactoring it into a test friendly form is very difficult, because there are no unit tests. This chicken and egg problem can make progress very slow.
Solution: the first step is not to add unit tests.
My suggestion is that before adding unit tests to a legacy project, you should create a good functional test suite. This is easier to do, since functional tests are black-box in nature; test-friendliness of the production code is less of an issue.
Once you have good functional test coverage, you can tease separate concerns apart into domain layers (possibly rewriting a component here or there), while being confident that the application didn't break as a result of your changes. As you go, you write unit tests at the seams where you split components (or for rewritten components, TDD the whole thing).
Once this process is complete you will be left with a structure that is much easier to add unit tests to.
Characteristics of a good functional test:
Good functional tests are:
- Black box. They don't require internal knowledge of how the app works. For example, rather than checking a database to verify that some data was recorded, consider checking another screen of the application.
- Process oriented, in a narrative form. A good functional test doesn't look much like a good unit test. Rather than testing a specific class, you test a process. They have a longer, narrative form, with multiple steps. It's almost like explicitly testing a use case.
- Non-functional. Good unit tests have a lot of the characteristics of functional programming. They don't have side-effects, they can be executed in any order, etc. The same is not true for functional tests. It's perfectly OK to require the steps inside a functional test to execute in a specific order. You'll probably want a way to clean up after the entire suite is run, but individual functional tests can leave a little bit of mess behind.
- Less specific failures. A good unit test will fail for only one reason, and one bug should cause exactly one test to fail. Failing functional tests may happen for several reasons, and one bug can cause many tests to fail; an app that can't connect to a database may fail every test.
- Different life-cycle. You don't run functional tests TDD-style as you develop. You do run your functional test suite once as you finish a feature, to demonstrate that it is finished. You also don't run them on your workstation, to avoid "works on my machine" syndrome. (Although you can run them on your workstation first to predict that they'll work in the test environment.)
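To make the narrative, black-box style concrete, here is a minimal self-contained sketch in Python; the FakeOrderingApp client and its methods are invented stand-ins for whatever would really drive the application (a browser, HTTP calls, etc.):

```python
import unittest


class FakeOrderingApp:
    """In-memory stand-in for the application so the sketch runs on its own."""

    def __init__(self):
        self._orders = []

    def log_in(self, user):
        self.user = user

    def place_order(self, item):
        self._orders.append(item)
        return len(self._orders)

    def order_history_screen(self):
        return list(self._orders)


class PlaceOrderStoryTest(unittest.TestCase):
    def test_customer_can_place_an_order_and_see_it_in_history(self):
        app = FakeOrderingApp()

        # Step 1: a customer logs in.
        app.log_in("alice")

        # Step 2: she places an order.
        order_id = app.place_order("coffee")
        self.assertEqual(order_id, 1)

        # Step 3: instead of peeking into the database (white box),
        # verify the result on another screen of the application.
        self.assertIn("coffee", app.order_history_screen())


if __name__ == "__main__":
    unittest.main()
```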
I'm in the "it didn't work for me because we didn't use it" (all of our apps our intranet web apps - ASP.net) camp. I also don't know nearly enough about it. Enough to know I want it.
I've been learning MVC and I can appreciate that part of the sell is test coverage. I pushed a little for our next project to use MVC but didn't get much in the way of positive feedback. I'm hoping to make Unit Testing a bigger focus once I'm more comfortable with MVC.
It's helped on all the projects I've used it on, but the "challenges" have been:
- only some developers on the team wrote tests
- tests are too brittle, and so they get commented out
- developers don't fix tests when they break because no one makes them
- tests take a long time so developers can't use them as actively
In my opinion, it is very sensitive to the personalities and skills of the programmers. You should check the quality of the tests and check what your team really does.
For example, I have seen the following kinds of tests across a whole project, in a team pretending to develop with SCRUM & TDD:
- Tests did not have any assertions (only logs).
- Nearly every test method contained a try-catch block preventing it from failing automatically in case of an exception
- Test data existed only on the development machine of a particular developer
- Tests were not executed on regular basis
- Etc...
This is not a failure of TDD, but I wanted to show how "unit tests" can be just another way to throw money out the window.
I can't say that there's ever been a time that unit testing did not work out for a project. There have been times (currently ongoing!) that portions of code will not be unit tested because the authors of the code did not care and made things hard to test. But in cases like this I just find a way around it and I cover my own code.
This always works for me. At release time I go home on time knowing my code works and the developers with the flaky code are fiddling around with something here and there until 10pm trying to fix things they didn't know were broken but QA noticed.
I often have trouble making unit tests work in my scientific computing code, because some of the algorithms require such complex inputs that simulating an input non-trivial enough to be worth testing, or figuring out the correct answer by hand, would be nearly impossible. Usually I still unit test the lower-level stuff that requires relatively simple input, so that at least I know the higher-level algorithms are built on a solid foundation, and I rely on sanity checks to make sure the results of the higher-level code are reasonable.
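A minimal sketch of that split, in Python with invented functions (normalize as the simple low-level helper, run_simulation as a stand-in for the hard-to-verify high-level algorithm): exact assertions where exact answers are cheap, sanity checks where they are not:

```python
import math
import unittest


def normalize(values):
    """Low-level helper with simple inputs: scale so the values sum to 1."""
    total = sum(values)
    return [v / total for v in values]


def run_simulation(seed_values, steps=100):
    """Hypothetical higher-level algorithm whose exact output is hard to
    compute by hand; here just a stand-in that repeatedly renormalises."""
    state = normalize(seed_values)
    for _ in range(steps):
        state = normalize([v + 0.01 for v in state])
    return state


class NumericsTest(unittest.TestCase):
    def test_normalize_exactly(self):
        # The low-level piece is simple enough for exact expectations.
        self.assertEqual(normalize([1.0, 1.0, 2.0]), [0.25, 0.25, 0.5])

    def test_simulation_sanity_checks(self):
        # The high-level result is only checked for reasonableness:
        # finite, non-negative, and still summing to 1.
        result = run_simulation([3.0, 1.0, 1.0])
        self.assertTrue(all(math.isfinite(v) and v >= 0 for v in result))
        self.assertAlmostEqual(sum(result), 1.0, places=9)


if __name__ == "__main__":
    unittest.main()
```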
Our unit tests were written by the same developers as the production code.
The production code was crap. The unit tests were worse.
Because we didn't test anything that seemed a bit too hard to test, or anything that wasn't our own work... which in the end left us with something like 15% of the code really unit tested...
"A bit too hard" here means code whose input and output you can't test right away, because it creates a bunch of objects behind the scenes, accesses network resources, uses timers, etc.
I believe the code wasn't modular enough - not really suited to unit testing.
Unit testing can't work when you are building the application from scratch, because you'll be focused on making it work, not breaking it, at first.
However, the real-life situation where unit tests become invaluable is refactoring. When the application, or part of it, reaches a running state - and users not complaining is the real proof of that - then you can say that the behaviour of the system's components is as it should be.
And then you want to make some architectural changes, for various reasons: for example, lifting some parts of the code to a meta-level, due to performance issues, due to the obsolescence of some library or part thereof... You'll want to fix some part of the existing behaviour as "set in stone" and write unit tests for it, then refactor.
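A minimal sketch of such a "set in stone" (characterization) test, in Python with an invented legacy function: the expected values are whatever the current code produces today, recorded before the refactoring starts:

```python
import unittest


def legacy_price_formatter(amount):
    """Hypothetical legacy code about to be refactored. Whatever it does
    today is, by definition, the behaviour users rely on."""
    return "$" + ("%.2f" % amount).rstrip("0").rstrip(".")


class PriceFormatterCharacterizationTest(unittest.TestCase):
    # These expectations were captured by running the existing code and
    # recording its output, not by reading a spec: they pin the current
    # behaviour "in stone" so the refactoring can't silently change it.
    def test_current_behaviour_is_preserved(self):
        self.assertEqual(legacy_price_formatter(10), "$10")
        self.assertEqual(legacy_price_formatter(10.5), "$10.5")
        self.assertEqual(legacy_price_formatter(10.55), "$10.55")
        self.assertEqual(legacy_price_formatter(0), "$0")


if __name__ == "__main__":
    unittest.main()
```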
Unit testing has failed every time I have tried to start it because:
- I can't find a book, blog, or article that explains how to apply it to my situations
- it requires complexifying the code base
"Also, the unit testing in this case was largely a failure because the tests were mostly aimed at aspects of the application that were never going to change, and ignored the weakest points. In this case, the weak points were the availability of network resources and the state of the data in the production environment. All of this was mocked in the test suite, and essentially useless. This incredibly heavily unit-tested application failed miserably in the field."
Was testing of network resources really a unit test? Maybe an integration test? Maybe a system test? You do have those, don't you?
I am not saying that load & stress tests should never be at the unit test level, but generally they are not.
This (as you describe it) was not a failure of unit testing, but a failure of your overall test strategy.
It wasn't my project, but even after extensive testing the system unexpectedly failed 4 years into release because threads were used. Search for "Ptolemy" here.