UPDATED: I'm asking this from a development perspective; however, to illustrate, a canonical non-development example that comes to mind is uptime: if it costs, say, $10,000 to maintain an uptime rate of 99%, it could theoretically cost $100,000 to maintain 99.9%, and possibly $1,000,000 to maintain 99.99%.
Somewhat like a limit in calculus, as we approach 100% the cost can increase exponentially. So, as a developer or PM, where do you decide that the deliverable is "good enough" given the time and monetary constraints? For example, are you getting a good ROI at 99%, 99.9%, or 99.99%?
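Just to make the shape of that curve concrete, here is a small illustrative sketch. The tenfold-cost-per-nine multiplier and the $10,000 base cost are purely hypothetical assumptions carried over from the example above, not real-world figures:

```python
# Illustrative only: assumes each additional "nine" of availability
# multiplies cost tenfold, per the hypothetical figures above.
BASE_COST = 10_000  # assumed cost to reach 99% in this example

def cost_for_nines(nines: int) -> float:
    """Hypothetical cost to reach 99%, 99.9%, 99.99%, ... uptime (nines >= 2)."""
    return BASE_COST * 10 ** (nines - 2)

for nines in range(2, 6):
    uptime = 100 - 10 ** (2 - nines)             # 99, 99.9, 99.99, 99.999
    downtime_min = 525_600 * (1 - uptime / 100)  # minutes of downtime per year
    print(f"{uptime:.3f}% uptime ~ {downtime_min:,.0f} min/yr downtime, "
          f"cost ~ ${cost_for_nines(nines):,.0f}")
```

Under that assumption, the ROI question becomes: is the downtime you eliminate at each step worth an order of magnitude more spend?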
I'm using a non-development example because I'm not sure of a solid metric for development. Maybe in the above example "uptime" could be replaced with "function point to defect ratio", or some other reasonable measure of defect rate relative to code complexity (a rough sketch of what I mean is below). I would also welcome input regarding all stages of the software development lifecycle.
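For clarity, this is roughly the kind of metric I have in mind; the release names, function-point counts, and defect totals here are made-up placeholders:

```python
# Hypothetical per-release defect density; all numbers are placeholders.
releases = {
    "1.0": {"function_points": 120, "defects": 18},
    "1.1": {"function_points": 150, "defects": 12},
}

for name, r in releases.items():
    density = r["defects"] / r["function_points"]
    print(f"Release {name}: {density:.2f} defects per function point")
```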
Keep the classic Project Triangle constraints in mind (quality vs. speed vs. cost). And let's assume that the customer wants the best quality you can deliver given the original budget.