views:

394

answers:

9

UPDATED: I'm asking this from a development perspective; however, to illustrate, a canonical non-development example that comes to mind is uptime: if it costs, say, $10,000 to maintain an uptime rate of 99%, then it could theoretically cost $100,000 to maintain 99.9%, and possibly $1,000,000 to maintain 99.99%.

Somewhat like a limit in calculus approaching 0, as we get closer to 100% the cost can increase exponentially. Therefore, as a developer or PM, where do you decide that the deliverable is "good enough" given the time and monetary constraints? E.g., are you getting a good ROI at 99%, 99.9%, 99.99%?

I'm using a non-development example because I'm not sure of a solid metric for development. Maybe in the above example "uptime" could be replaced with "function point to defect ratio", or some such reasonable measure of the rate of bugs vs. the complexity of the code. I would also welcome input regarding all stages of the software development lifecycle.

Keep the classic Project Triangle constraints in mind (quality vs. speed vs. cost). And let's assume that the customer wants the best quality you can deliver given the original budget.
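To make the hypothetical in the question concrete, here is a minimal sketch (the $10,000 base cost and the 10x-per-nine multiplier are the question's illustrative figures, not real data) showing how each extra "nine" of availability shrinks the allowed downtime tenfold while the assumed cost grows tenfold:

```python
# Illustration of the question's hypothetical: each extra "nine" of
# availability multiplies cost by ~10 while cutting allowed downtime 10x.
# The $10,000 base cost is the question's example figure, not a benchmark.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year permitted at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

base_cost = 10_000
for nines, availability in enumerate([0.99, 0.999, 0.9999], start=2):
    cost = base_cost * 10 ** (nines - 2)
    print(f"{availability:.2%}: ~${cost:>9,} "
          f"({allowed_downtime_minutes(availability):,.0f} min/yr of downtime)")
```

Whether the last line of that table is worth 100x the first is exactly the ROI question being asked.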

+2  A: 

I think the answer to this question depends entirely on the individual application.

Software that has an impact on human safety has much different requirements than, say, an RSS feed reader.

17 of 26
+1  A: 

To expand on what "17 of 26" said, the answer depends on value to the customer. In the case of critical software, like aircraft controller applications, the value to the customer of a high quality rating (by whatever measure they use) is quite high. To the user of an RSS feed reader, the value of high quality is considerably lower.

It's all about the customer (notice I didn't say user - sometimes they're the same, and sometimes they're not).

Ben Collins
Agreed. But let's assume the customer wants the best quality their agreed-upon budget can get them, AND they want an RSS feed reader.
Troy DeMonbreun
Yes, of course a customer wants the best quality possible for a given budget. That kind of misses the point: the budget is driven by the actual value of that quality. That's why military systems are expensive, and consumer systems aren't (relatively speaking), for instance.
Ben Collins
...and similarly, the value of the world's best RSS feed reader is almost zero, which is why very few people are willing to pay for one, regardless of how good it is.
Ben Collins
+6  A: 

There's no way to answer this without knowing what happens when your application goes down.

  • If someone dies when your application goes down, uptime is worth spending millions or even billions of dollars on (aerospace, medical devices).
  • If someone may be injured if your software goes down, uptime is worth hundreds of thousands or millions of dollars (industrial control systems, auto safety devices).
  • If someone loses millions of dollars if your software goes down, uptime is worth spending millions on (financial services, large e-commerce apps).
  • If someone loses thousands of dollars if your software goes down, uptime is worth spending thousands on (retail, small e-commerce apps).
  • If someone will swear at the computer and lose productivity while it reboots when your software goes down, then uptime is worth spending thousands on (most internal software).
  • etc.

Basically, take (cost of going down) x (number of times the software will go down), and you know how much it's worth spending on uptime.
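The rule of thumb above can be sketched in a couple of lines (the dollar figures below are made-up examples matching the bullet list, not benchmarks):

```python
# Rule of thumb from the answer above: spend on uptime up to the
# expected yearly cost of outages.

def uptime_budget(cost_per_outage: float, outages_per_year: int) -> float:
    """Expected yearly outage cost = upper bound worth spending on uptime."""
    return cost_per_outage * outages_per_year

# A small e-commerce app losing $2,000 per outage, ~4 outages a year:
print(uptime_budget(2_000, 4))  # 8000 -> "spending thousands" is justified
```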

Chris Upchurch
But what if the person swearing at the computer decides to destroy a million dollar server out of anger?
Jason Baker
+2  A: 

The project triangle is a gross simplification. In lots of cases you can actually save time by improving quality, for example by reducing repairs and avoiding costs in maintenance. This is not only true in software development; Toyota's lean production proved that this works in manufacturing too.

The whole process of software development is far too complex to make generalizations on cost vs quality. Quality is a fuzzy concept that consists of multiple factors. Is testable code of higher quality than performant code? Is maintainable code of higher quality than testable code? Do you need testable code for an RSS reader or performant code? And for a fly-by-wire F16?

It's more productive to make informed decisions on a case-by-case basis. And don't be afraid to over-invest in quality. It's usually much cheaper and safer than under-investing.

Mendelt
Agreed, the Project Triangle is definitely an oversimplification on many counts. Your point about Lean is well taken, but I think it may be worth pointing out that someone who does Lean Development well wouldn't be cheap, so you're still somewhat forced to sacrifice on the 3rd constraint (cost).
Troy DeMonbreun
+2  A: 

To answer in an equally simplistic way: when you stop hearing from the customers (and not because they stopped using your product), except for enhancement requests and bouquets :)

And it's not a triangle; it has 4 corners: Cost, Time, Quality, and Scope.

Gishu
Scope is a good point. Indeed, I could quickly produce a quality piece of software that only generated prime numbers, for a low cost. Instead of 4 corners, I like to think of it in 3D: a pyramid.
Troy DeMonbreun
+1  A: 

Chasing the word "Quality" is like chasing the horizon. I have never seen anything (in the IT world or outside) that is 100% quality. There's always room for improvement.

Secondly, "quality" is an overly broad term. It means something different to everyone and is subjective in its degree of implementation.

That being said, every effort boils down to what "engineering" means: making the right choices to balance cost, time, and key characteristics (i.e., speed, size, shape, weight, etc.). These are constraints.

Dr. UNIX
A: 

Thanks for all your answers so far. I'd vote you guys up already, but I've used up my 30 limit and it won't reset for another 5 or so hours. :-(

OK, time for a bit of a tangent: Regarding the importance of uptime depending upon the criticality of the application...

So, let's say that the client will lose $50 per minute while the app is unavailable. However, the client will not pay more than $100,000 for the application, and analysis determines (analysis is always right, of course) that for $100,000 you can only build an app that risks as much as 12 hours of downtime per year (due to its architecture): a potential yearly cost of $36,000.

The client, of course, would likely balk at that number and might say no more than 1 hour of downtime per year is acceptable. That's 12 times more stable. Do you tell the customer, "Sorry, we can't do that for $100,000," or do you make your best attempt, hoping your analysis was conservative?
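The arithmetic in this scenario checks out, and is simple enough to spell out as a sketch (figures taken straight from the scenario above):

```python
# Checking the scenario's numbers: $50 lost per minute of downtime,
# 12 hours/year of expected downtime vs. the client's 1-hour demand.

LOSS_PER_MINUTE = 50  # dollars, from the scenario above

def yearly_downtime_cost(hours_down: float) -> float:
    """Dollars lost per year for a given number of downtime hours."""
    return hours_down * 60 * LOSS_PER_MINUTE

print(yearly_downtime_cost(12))  # 36000 -> matches the $36,000 figure
print(yearly_downtime_cost(1))   # 3000  -> the client's 1-hour target
```

Note the gap: the client is asking to reduce a $36,000/year risk to $3,000/year, a $33,000/year difference, while holding the build budget fixed at $100,000.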

Troy DeMonbreun
+3  A: 

The client, of course, would likely balk at that number and might say no more than 1 hour of downtime per year is acceptable. That's 12 times more stable. Do you tell the customer, "Sorry, we can't do that for $100,000," or do you make your best attempt, hoping your analysis was conservative?

Flat-out tell the customer that what they want isn't reasonable. To gain that kind of uptime, a massive amount of money would be needed, and realistically, consistently hitting that percentage of uptime just isn't possible.

I personally would go back to the customer and tell them that you'll provide the best setup that $100k can buy, and set up an outage-report guideline. Something like: for every outage you have, we will complete an investigation into why the outage happened and what we will do to make the chances of it happening again almost nonexistent.

I think offering SLAs is just a mistake.

Ethan Gunderson
I agree - tell the customer to take a hike. If they don't want to pay for that uptime, they can't have that uptime.
Jason Short
+4  A: 

The Quality vs. Good Enough discussion I've seen puts the practical ROI at fixing 95% of defects. Obviously, show-stopper/critical defects are fixed (and, as always, there are exceptions like airplane autopilots, etc., which need far fewer defects).

I can't seem to find the reference for the 95% defect-fix figure; it is either in Rapid Development or in Applied Software Measurement by Capers Jones.

Here is a link to a useful strategy for attacking code quality: http://www.gamedev.net/reference/articles/article1050.asp

torial