+1  A: 

Automated unit testing brings a lot to the table. We've used it on several projects. If someone breaks the build, everyone immediately knows who did it and they fix it. It's also built into the later versions of Visual Studio. Look into

Test Driven Development

It should save you a lot of time and doesn't produce a significant amount of overhead. Hope this helps! If so, mark it.

pixelbobby
We're Java, and using Hudson hourly to pull code from version control, run the build and tests on the build, and email us if anything is awry. It's a *great* addition.
Dean J
+2  A: 

IMO, my suggestion is to have enough tests that someone who inherits the code can start making changes, whether that's fixing bugs or adding enhancements, without having to spend days reading the code to understand it.

Thus, don't test everything to death, but do cover some common cases and a few edge cases just to see what happens if things don't go as laid out initially.

JB King
+13  A: 

This, I think, is a fallacy:

If you test every class, every method, your current release will take longer, possibly much longer.

Testing - especially Test First - improves our flow, keeps us in the zone, actually speeds us up. I get work done faster, because I test. It is failing to test that slows us down.

I don't test getters and setters; I think that's pointless - especially since they're auto-generated. But pretty much everything else - that's my practice and my advice.

Carl Manaster
+1. It is an infuriating fallacy that has altogether too much traction.
womp
Agreed entirely. While writing tests may slow down the initial code (as in, the first few hundred lines), I find that it speeds up everything and anything past that.
kyoryu
+4  A: 

What was advised to me is this:

  • Try it as you see fit; after a while, evaluate yourself:
  • If testing took more time than you felt was reasonable, and you got too little return on investment, test less.
  • If your product was not tested enough and you lost time because of it, test more.
  • Loop as needed.


Another algorithm: :-)

  • Some testing is really easy and really useful. Always do this, with high priority.
  • Some testing is really hard to set up and rarely proves useful (for example, it may be duplicated by manual testing that always happens in your process anyway). Stop doing this; it's wasting your time.
  • In between, try to find a balance, one that may vary over time depending on the phase of your project...


UPDATED in response to the comment, about proving the usefulness of some tests (the ones that you firmly believe in):

I often tell my younger colleagues that we technical people (developers and the like) are poor at communicating with our management. As you say, for management, costs that are not listed do not exist, so avoiding them cannot serve to justify another cost. I used to be frustrated about that too. But thinking about it, that is the very essence of their job: if they accepted unnecessary costs without justification, they would be poor managers!

That's not to say they are right to deny us these activities, which we know are useful. But we first have to make the costs apparent. Even better, if we report the costs in an appropriate way, management will have to make the decision we want (or they would be bad managers; note that the decision may still be deferred or re-prioritized...). So I suggest tracking the costs so that they are not hidden any more:

  • Wherever you track the time you spend, record separately the costs that come from the code being untested (if that's not available in the tool, add it as a comment).
  • Aggregate those costs in a dedicated report if the tool doesn't, so that each week your manager reads that X% of your time was spent on that.
  • Each time you estimate workloads, estimate several options separately, with and without automated testing, showing that the time spent on manual testing and on automated testing is about the same (if you limit yourself to the most useful tests, as explained earlier), while the latter is an asset against regressions.
  • Link bugs to the code that caused them. If that link is not in your process, find a way to connect them: you need to show that the bug comes from having no automated tests.
  • Accumulate those links in a report as well.

To really get through to the manager, you could send them an up-to-date spreadsheet every week (with the whole history, not only that week). A spreadsheet gives graphs that provide immediate understanding, and lets the unbelieving manager drill down to the raw numbers...

KLE
The problem that I have is proving that to an outside group; playing it by ear is fine, but I need to justify my decision, and it's *hard* to measure "return on investment", because I don't have the results of the choices I didn't take.
Dean J
+5  A: 

Start creating unit tests for the most problematic areas (i.e. sections of code that often break and cause a lot of communication between the sales team and developers). This will have an immediate impact that is visible to the sales team and other personnel.

Then once you have credibility and they see the value, start adding less problematic areas until you start to notice that the ROI just isn't there anymore.

Sure full coverage is nice in theory, but in practice it's often not necessary. Not to mention too costly.

Stephane Grenier
+2  A: 

Test enough so that you can feel comfortable that a bad refactor will be caught by the tests. Usually it's enough to test logic and plumbing/wiring code. If you have code that is essentially getters/setters, why test it?

Regarding the sales guy's opinion that testing isn't needed - well, if they know so much, why don't they do the bloody coding?

Chii
I've said the same thing to sales, but honestly, they're not going to "get it" unless I put some rationale behind it. That said, sales has short-term incentives in most organizations I've been in. Developers usually have long-term incentives; there's no cash bonus for getting a release out faster, but there's always a payoff in having to work less hard in the long run if you're doing it more efficiently.
Dean J
+3  A: 

The "cost" is paid during development, when it is much more cost effective, and the return is realized during ongoing maintenance, when it is much harder and expensive to fix bugs.

I generally always do unit testing on methods that:

  • Read/write to the data store,
  • Perform business logic, and
  • Validate input

Then, for more complex methods, I'll unit test those. For simple things like getter/setters, or simple math stuff, I don't test.
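
To make the "validate input" and business-logic cases concrete, here is a minimal JUnit 4 sketch of the kind of test meant; the EmailValidator class and its isValid method are invented purely for illustration, not taken from any real codebase:

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Hypothetical validator, defined here only so the example is self-contained.
    class EmailValidator {
        boolean isValid(String email) {
            return email != null && email.matches("[^@\\s]+@[^@\\s]+\\.[^@\\s]+");
        }
    }

    public class EmailValidatorTest {

        private final EmailValidator validator = new EmailValidator();

        @Test
        public void acceptsWellFormedAddress() {
            assertTrue(validator.isValid("user@example.com"));
        }

        @Test
        public void rejectsNullAndMalformedInput() {
            assertFalse(validator.isValid(null));
            assertFalse(validator.isValid("not-an-email"));
        }
    }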

During maintenance, most legitimate bug reports get a unit test, to ensure that the specific bug will not happen again.

BryanH
+3  A: 

I always believe in not being extreme, in particular when time and energy are limited. You just can't test it all.

Not every method or function needs a unit test. The following probably don't: (1) methods that are clearly not complex, such as plain getters/setters or ones with only a little conditional or looping logic, and (2) methods that are already exercised by other methods that have unit tests.

With these two criteria, I think you can cut a lot of those.

Just a thought.

NawaMan
You *always* believe in not being extreme? Oh the irony!
mgroves
Hahahaha .........
NawaMan
+13  A: 

Two suggestions for minimal unit testing that will provide the most "bang for the buck":

Start by profiling your application to find the most commonly used parts - make sure those are unit tested. Keep moving outward to the less commonly used code.

When a bug is fixed, write a unit test that would have detected it.
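
For example, such a bug-fix regression test might look like this minimal JUnit 4 sketch; the PriceCalculator class and the bug it describes are hypothetical, shown only to illustrate the pattern:

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Hypothetical class under test, included so the example is self-contained.
    class PriceCalculator {
        double totalWithDiscount(double price, double discountFraction) {
            double total = price * (1.0 - discountFraction);
            return Math.max(total, 0.0); // the "fix": never return a negative total
        }
    }

    public class PriceCalculatorRegressionTest {

        // Regression test for a hypothetical bug report:
        // a discount over 100% used to produce a negative total.
        @Test
        public void excessiveDiscountYieldsZeroNotNegative() {
            PriceCalculator calc = new PriceCalculator();
            assertEquals(0.0, calc.totalWithDiscount(50.0, 1.5), 0.0001);
        }
    }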

Nate
I might actually push for this one; when we find a bug, budget time not only to fix it, but to test it so it doesn't become a bug again. That seems easy to argue for; it might be reactive instead of proactive, but it's very much a good start.
Dean J
A: 

For unit testing, my company has adopted a fairly good strategy: we have a tiered application (Data Layer, Service Layer/Business Objects, Presentation layer).

Our service layer is the ONLY way to interact with the database (via methods in the data layer).

Our goal is to have at least a basic unit test in place for each method in the service layer.

It's worked well for us - we don't always thoroughly check every code path (especially in complex methods), but every method has its most common code path(s) verified.

Our objects are not unit tested, except incidentally via the service layer tests. They also tend to be 'dumb' objects - most have no methods except those required (such as Equals() and GetHashCode()).
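
As a rough illustration of that kind of basic service-layer test, here is a sketch with a hand-rolled fake data layer standing in for the database. All the names are invented, and it's written in Java/JUnit as a neutral example even though the stack described above appears to be .NET:

    import org.junit.Test;
    import static org.junit.Assert.*;
    import java.util.*;

    // Invented interfaces/classes, only to sketch the layering described above.
    interface CustomerDao {                 // data layer
        Map<Integer, String> findAll();
    }

    class CustomerService {                 // service layer: the only way to reach the data layer
        private final CustomerDao dao;
        CustomerService(CustomerDao dao) { this.dao = dao; }

        List<String> activeCustomerNames() {
            return new ArrayList<String>(dao.findAll().values());
        }
    }

    public class CustomerServiceTest {

        // Basic test of the most common code path, with the database faked out.
        @Test
        public void returnsNamesFromDataLayer() {
            CustomerDao fakeDao = new CustomerDao() {
                public Map<Integer, String> findAll() {
                    return Collections.singletonMap(1, "Alice");
                }
            };
            CustomerService service = new CustomerService(fakeDao);
            assertEquals(Arrays.asList("Alice"), service.activeCustomerNames());
        }
    }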

Jeff
Any tests on the presentation layer?
Dean J
Not other than manual. We do fairly rigorous smoke testing on each fix, though.
Jeff
A: 

The purpose of developer testing is to speed up the development of completed software of an acceptable level of quality.

Which leads to two caveats:

  1. it is perfectly possible to do it wrong, so that it actually slows you down. So if you find it slows you down, it is very likely the case that you are doing it wrong.
  2. your definition of 'acceptable quality' may differ from that of marketing. Ultimately, they are right, or at least have the final say.

Software that works is a specialised, niche market, equivalent to high-end engineered hardware made from specialist expensive materials. If you are outside that market, then customers will no more expect your software to work reliably than expect their shirt to stop a bullet.

soru
A: 

How much unit testing is a good thing:

Unit testing is not static - it isn't something you do once and then your job is complete. It goes on throughout the life of the product, until you stop further development on it.

Basically, unit testing should be done each time:

1) you do a fix

2) you make a new release

3) you find a new issue

I have not mentioned the development period, since that is when your unit-level tests evolve in the first place.

The key thing here is not quantity (how much) but the coverage of your unit tests.

For example: you find an issue in a particular function X of your application, and you do a
fix for X. If no other module is touched, you can limit yourself to the unit tests
applicable to module X. The real question then is how much the unit tests for X
cover.

So your unit tests must check:

1) each interface

2) all input/output operations

3) logical checks

4) application-specific results

sat
A: 

I'd suggest picking up the book The Art of Unit Testing. Chapter 8 covers integrating unit testing into your organization. There's a great table (p. 232) that shows the results of a two-team trial (one using tests, one without); the test team shaved two days off their overall release time (including integration, testing, and bug fixing) and had 1/6 the bugs found in production. Chapter 9 discusses test feasibility analysis for getting the most bang-for-the-buck with legacy code.

TrueWill
If only for the empirical data, that seems worth my cash. Thanks!
Dean J
A: 

While it is possible to over-test (there is a point of diminishing returns), it's hard to do so. Testing (particularly testing early in the process) saves time. The longer a defect stays in a product, the more it costs to fix.

Test early, test often, and test as completely as is practical!

Jim Blizard
A: 

While unit testing is useful, you should definitely have a system test plan for every release - this should include testing the normal use-cases of your application (for regression) AND the specific feature being worked on in more depth.

Automated system testing is pretty much vital to avoid regressions - unit tests can all pass and your app will still be a crock of dung.

But if you can't do automated system testing for all of your use-cases (most applications have complex use cases, particularly where they interact with 3rd-party systems and user interfaces), then you can fall back to manual system testing.

User interfaces create the main problems - most other things can be automated relatively easily. There are heaps of tools to auto-test user interfaces, but they are notoriously brittle, i.e. in every release the auto-tests need to be tweaked just to pass (assuming no new bugs).
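
For the non-UI parts, even a tiny automated system test pays off. Here is a minimal JUnit sketch of a bare-bones smoke test that just checks that a deployed instance responds; the URL is a placeholder, not a real endpoint:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.junit.Test;
    import static org.junit.Assert.*;

    public class SmokeSystemTest {

        // Placeholder URL: point this at a real deployment of your application.
        private static final String BASE_URL = "http://localhost:8080/myapp/health";

        // Bare-bones system-level smoke test: the deployed app answers at all.
        @Test
        public void applicationRespondsWithHttp200() throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(BASE_URL).openConnection();
            conn.setRequestMethod("GET");
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            assertEquals(200, conn.getResponseCode());
        }
    }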

MarkR
+1  A: 

In StackOverflow Podcast #41, Jeff and Joel discuss TDD coverage with Uncle Bob Martin. It was a great piece of advice. Read the transcript or listen to the podcast. I think it will be really useful to everyone interested in this question.

JuanZe
A: 
    $ wc -l *.h *.cpp | grep total
      481 total
    $ wc -l tests/*.h tests/*.cpp | grep total
      548 total

Did I go overboard? :P

Nicolás