Why is it more costly to discover a defect later in the process?
I've heard this a lot but I struggle to understand and put context/examples to this.
Because more people will have spent time with the defective software.
If you fix a bug early on, only you and maybe a code reviewer will spend a little time on it.
If it gets released to customers and reported as an error, you will have coded it, someone may have reviewed it, someone may have tested it, somebody may even have documented it, and so forth ...
The later you find a bug, the worse it is. When you find a bug immediately after you code it, you have all the behavior in mind and know exactly which changes caused it. You will be able to focus on the problem once you know where it resides.
When it takes a long time, developers no longer remember exactly how the code worked, and there are many more places to investigate to find the bug. Perhaps the developer who coded the bug is no longer working at the company, either.
Also, as time goes by, more parts of the code will probably depend on the buggy code, and you may need to fix them as well.
Finally, there are issues involving users. If you find a bug after a release, more users will be frustrated by it, and your product's image will suffer. Users may also have grown used to a workaround for the bug, and that workaround may start to fail once you fix the bug.
In summary, the longer it takes to find a bug:
the more the behavior of the bug may have been accepted as correct, and the more other things may have become dependent on that behavior (Windows is notorious for this).
the more tightly integrated the system is likely to have become, and the harder the bug will be to extract.
the higher the likelihood that the bug's erroneous behavior will be duplicated elsewhere by virtue of copy-pasting or in clients that use the erroneous code.
the longer it's been since the code was originally written and the harder it may be to understand it.
the less likely it will be for people who understand that original part of the system to be around to fix it.
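The first point above, buggy behavior becoming something others depend on, can be sketched in a small hypothetical Python example (the function names and the off-by-one bug are invented for illustration):

```python
# A hypothetical library function with an off-by-one bug: it is supposed
# to return the first n items, but actually returns only n - 1 of them.
def first_n(items, n):
    return items[:n - 1]  # bug: should be items[:n]

# A client written after the bug shipped compensates by asking for one extra,
# so it now depends on the erroneous behavior.
def client_top3(items):
    return first_n(items, 4)  # relies on the bug to get three items

assert client_top3([1, 2, 3, 4, 5]) == [1, 2, 3]

# Fixing the library later silently breaks that client:
def first_n_fixed(items, n):
    return items[:n]

def client_top3_after_fix(items):
    return first_n_fixed(items, 4)  # now returns four items, not three

assert client_top3_after_fix([1, 2, 3, 4, 5]) == [1, 2, 3, 4]
```

The longer the bug lives, the more clients like `client_top3` accumulate, and the more a simple one-character fix turns into a coordinated change across the codebase.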
You're building a house. You're laying the sewer pipes into the foundations, but unknown to you one of the pipes is blocked by a dead hedgehog.
Would you rather find out before you pour the concrete, or after the house is finished and the drains back up?
(There's a "Stack Overflow" joke somewhere in this analogy . 8-)
There may be other dependencies (internal or external) that will affect the fixing of a defect.
For example: if I resolve this defect, I may have to fix something else as well.
Imagine you're writing an essay on why it's more costly to discover a defect later in the process, and you suddenly realise one of the premises on which most of your essay content is based is false.
If you're still planning, you only have half a page of plan to change. If your essay is nearly finished, you suddenly need to scrap the lot and start over. If you've already handed it in, the error is going to cost you your grade.
Same reason.
This can be illustrated in a simple (if not trivial) example.
Take a simple dialog with a message and just two buttons "OK" and "Cancel".
Assume that the error is a spelling mistake.
If this is found after the product is released, then a new version of the product has to be released, with all the costs associated with that, and manuals will need to be reprinted.
If this is found in final testing, the manual will have to be reprinted, the code will need to be corrected, and tests re-run.
If this is found during development then there is just the cost of fixing the code.
If this is found during design then the code is written correctly first time - no cost.
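The escalation described above can be summed up in a tiny sketch. The multipliers below are invented purely to illustrate the ordering, not measured figures:

```python
# Illustrative (made-up) relative cost of fixing the same spelling mistake,
# depending on the phase in which it is discovered. Only the ordering
# matters; the absolute numbers are invented for the sake of the example.
relative_fix_cost = {
    "design": 1,          # fix the spec; the code is then written correctly
    "development": 5,     # fix the code
    "final testing": 20,  # fix the code, re-run tests, reprint the manual
    "after release": 100, # new release, reprinted manuals, support costs
}

for phase, cost in relative_fix_cost.items():
    print(f"{phase:>15}: {cost}x")
```

The exact shape of the curve depends on your process, but each later phase adds whole categories of cost (re-testing, reprinting, redistribution) on top of the earlier ones.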
For a shrink-wrapped software product: If you find a bug after your product hits the stores, you will have to help users through support calls, suggest a workaround or even recall the product/issue a service pack.
For a website: Site outages and delays cost you money. Customer loss as a result of poor/malfunctioning site costs you more. The debugging process is also costly itself.
Because of the development process and all the work involved in fixing the defect.
Imagine you find a problem in the function you coded yesterday, you just check out, fix, check in, period. It's still fresh in your mind, you know what it is about and that your fix won't have any side effect.
Now imagine finding the same bug six months from now. Will you remember why the function was coded that way? Will you still be working on this project, or at this company? You have to open a defect report, a new version of your software has to be issued, and QA needs to validate the correction. If the software has been deployed, then all instances have to be upgraded, customers will call support ...
Now, it's true that the curves showing the cost are made up to illustrate the point; the actual cost depends on the development process.
It is probably an error by the question's author, but the actual question is, "Why is it more costly to discover a defect later in the process?" That question is about the cost to discover the bug, and we can hope it also means the cost to fix it. Most of the answers do a good job of describing the cost to fix and why it is better to fix early rather than late, and I don't really disagree with any of them. But that isn't the whole question.
I have a regular series of esoteric arguments with some people about the discovery cost: how much testing would have been required to find a specific bug, without hindsight? Would it have taken three more man-months of automated or manual testing before you would have been likely to hit that test case and scenario?
In practice, test as much as you can but finding that balance point isn't as easy as many would have you think. Most programs are too big to have 100% code coverage. And, 100% code coverage is usually just a fraction of all the possible scenarios the code must handle.
Another factor in the cost of a bug is the business cost associated with it. Are there 5 million boxes out there holding the bug? Would you have to do a product recall? Will it generate X calls to your warranty help desk? Will it trigger some clause in a contract holding you liable for damages? In very simple terms, this is why software written for the medical field costs more per LOC than software for website development.
I would say that the most costly thing is to find a defect and let it be. The longer you allow the defect to live, the more costly it becomes.
I was once at a company that had a policy of sticking with a decision once it had been made. The system I worked on was loaded with bugs because of a stupid corporate framework that we were forced to use, and a deep misunderstanding of the proper use of web services.
To this day, I believe that the cheapest way for that company to get a working, usable system, would be to ditch the entire system and rewrite it from scratch.
So my point is, that I don't think that finding a defect at a late stage is that problematic. But ignoring a defect until a late stage is extremely problematic.