I got the impression that some problems are just too hard to unit test, and even if you do test them, such tests often provide little value.
What code should not be unit tested, apart from getters and setters?
(might be similar to this question)
Automated unit tests can't be run on graphics code, since the computer cannot decide whether what's being drawn to the screen is actually correct or not! You can still write a manual test for that case, of course.
This is a question of cost and benefit: the closer you try to get to 100% coverage, the more expensive it will be.
There is also the UI layer: if it is built in a technology that is difficult to test, you could make this layer as thin as possible and then test it only manually.
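One common way to do that (sometimes called the Humble Object pattern) is to push every decision into a plain object and leave the UI class as a trivial pass-through. A rough sketch in Python, with invented names, just to illustrate the split:

    class LoginPresenter:
        """All decisions live here, so they can be unit tested
        without any UI technology involved."""
        def validate(self, username, password):
            if not username or not password:
                return "Please fill in both fields."
            return None  # no error

    class LoginScreen:
        """The 'humble' UI layer: too thin to be worth unit testing,
        it only forwards events and displays whatever it is told to."""
        def __init__(self, presenter, show_error):
            self.presenter = presenter
            self.show_error = show_error  # e.g. a toolkit callback

        def on_submit(self, username, password):
            error = self.presenter.validate(username, password)
            if error:
                self.show_error(error)

Everything in LoginPresenter is covered by ordinary unit tests; LoginScreen is checked manually.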
Depending upon your situation, you could also drop testing for pass-through layers and generated code.
Note that it is not just a question of code coverage but of how you test: it may be better to have many tests on a limited part of the code and accept a lower overall coverage.
My general approach is "if this code is not worth testing, why is it worth having in the first place"? If I'm using a language which forces me to have a lot of uselessly repetitive boilerplate, then maybe I don't need to test those parts if the language's compiler can just check them; but I normally use languages where code I write is actually meaningful;-).
Can you give an example of a problem that's too hard to unit-test? I've heard this used as an excuse to avoid testing error-recovery and diagnostic code that's only triggered by rare and very unlikely circumstances, but every time this has come up I've argued that, on the contrary, that code is the one most needing unit tests, because it's not going to get exercised in the integration tests and normal use (e.g. at QA stage).
Dependency Injection lets you use a fake or mock object to stand in for whatever "should never cause this error but we're covering for it anyway" dependency (network, database, power control interface, etc.), and your fake or mock easily can, and definitely should, cause fake errors of all kinds so you can thoroughly check that error-recovery and diagnostic code.
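For instance, here's a minimal sketch in Python; the ClusterMonitor and ping names are invented for illustration, only unittest.mock is a real library. The mock is told to raise, so the recovery branch that "should never run" gets exercised deterministically:

    import unittest
    from unittest.mock import Mock

    class ClusterMonitor:
        """Hypothetical class under test: wraps a network client and
        must survive (and report) network failures."""
        def __init__(self, network_client):
            self.network_client = network_client

        def node_status(self, node):
            try:
                return self.network_client.ping(node)
            except ConnectionError:
                # Error-recovery path we want the unit test to exercise.
                return "unreachable"

    class NodeStatusErrorTest(unittest.TestCase):
        def test_network_failure_is_reported_not_raised(self):
            # The mock stands in for the real network and is told to fail.
            fake_network = Mock()
            fake_network.ping.side_effect = ConnectionError("link down")
            monitor = ClusterMonitor(fake_network)
            self.assertEqual(monitor.node_status("node-7"), "unreachable")

    if __name__ == "__main__":
        unittest.main()

The fault is injected on demand, so you don't have to wait for the real network to misbehave to know the diagnostic code works.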
Maybe it depends on what kind of apps you write -- for the last few years I've been mostly in cluster-management software, where everything that can go wrong will, lots of things that can't possibly go wrong will anyway, and uptime and fast recovery are crucial. In that field nobody would ever dare argue against a belt-and-suspenders approach (if they did the reliability engineers would be after them with cudgels;-).
But I've recently switched to Business Intelligence and I've noticed the approach translates well, too: if the numbers my code is producing (maybe to show as a nice graph to business decision makers, etc) are worth producing at all, they'd better be accurate, which means (among other things) that the code producing them needs to be tested just as thoroughly and carefully as that which monitors a network or a power supply system!-)
You shouldn't write unit tests for other people's code (such as a framework you are using). You should only write tests for your code. Mock out dependencies on other people's code so that you only need to write tests for yours.
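A small sketch of what that looks like in Python with unittest.mock; the Notifier and email-sender names are invented for illustration, not from any particular framework. Your business rule gets tested, while the framework's client is replaced by a mock and never exercised:

    from unittest.mock import Mock

    class Notifier:
        """Your code: the only thing worth unit testing here."""
        def __init__(self, email_sender):
            # The framework's email client is injected, not imported directly.
            self.email_sender = email_sender

        def notify(self, user, message):
            if not user.get("opted_in"):
                return False  # your business rule
            self.email_sender.send(user["address"], message)
            return True

    def test_opted_out_users_get_no_email():
        sender = Mock()  # stands in for the framework's email client
        notifier = Notifier(sender)
        assert notifier.notify({"opted_in": False, "address": "a@b.c"}, "hi") is False
        sender.send.assert_not_called()  # framework code is never touched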
On my current project I'm doing automated testing of features and of system functionality, but no unit testing at all: Should one test internal implementation, or only test public behaviour?
Some people talk as if the only alternative to unit-testing is ad-hoc manual testing; but many of the benefits (e.g. regression testing) come from testing being automated, and not necessarily from its being at the unit level.
You don't need to test the language constructs, but outside of that, there's really not anything that "shouldn't" be unit tested.
If you've already got the design, there's a good reason for the code to exist, and it's not a mission-critical part of the application (a minor user interface feature, say), then a case can be made that it's not worth fighting to produce a unit test for it. But that's not the same as saying it "shouldn't" be unit tested.