I frequently hear "X% of software projects fail due to bad requirements." The X in that statement has ranged from about 70 to 95. However, I seldom hear how requirements go bad. In fact, the statement itself suggests there actually were requirements.

What makes a "bad" requirement? How can one be avoided?

+2  A: 

First off, for a requirement to be valid it needs to be testable. If not, there is no possibility of tracking it, measuring it, or reporting on it... this is a root cause of evil.

How can this situation be avoided? Make sure that a requirement:

  • is bounded in both time and resources (e.g., budget)

  • is testable

Or else, you are working on an "open loop" and I am sure you can appreciate the consequences.

Note that sometimes requirements come with a rather "qualitative" nature: it is up to the product manager/team to define a "quantitative" definition for them.
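As a rough illustration of turning a qualitative requirement into a quantitative, testable one: "search must feel fast" only becomes trackable once the team pins it to a number. A minimal sketch in Python (the 2-second p95 threshold and the sample latencies are made-up values, not from any real project):

```python
# Turn a qualitative requirement ("search must feel fast") into a
# quantitative, testable one: 95% of searches complete within 2.0 s.
# Threshold and sample data are illustrative assumptions.

def p95(samples):
    """Return the 95th-percentile value of a list of measurements."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return ordered[index]

def meets_requirement(latencies_s, threshold_s=2.0):
    """True if the p95 search latency is within the agreed threshold."""
    return p95(latencies_s) <= threshold_s

# Measured latencies from a (hypothetical) test run, in seconds.
measured = [0.4, 0.6, 0.5, 1.1, 0.7, 0.9, 1.8, 0.5, 0.6, 2.4]
print(meets_requirement(measured))  # → False: one slow outlier blows the p95
```

Once the requirement is stated this way, it can sit in a test suite and fail loudly, instead of living as an opinion in someone's head.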

jldupont
+2  A: 

I think you will find that if you interpret it as follows it will make more sense:

"X% of software projects fail due to bad definition of requirements"

There are a lot of things you can do:

  • Make sure you can actually test the requirement
  • Make sure that the analyst actually understands what the user really means. Often what the user asks for is not what they actually want.
  • Make sure the developer understands the requirement. If the developer gets a bad spec and has to make an assumption that turns out wrong, then time is going to be wasted when the programmer has to correct that assumption, on top of the usual bugs.
  • Make sure the user actually tests that their requirement(s) have been met. Better (discovered) late than never.
Dan McGrath
+1  A: 

In addition to impossible/impractical or unverifiable requirements, the "bad" likely refers to incorrectly gathered requirements - the requirements you have don't match what is actually needed for the application. One source of this is that users frequently don't actually know what they need or want.

Michael E
+3  A: 

Whenever I see those statistics, I am reminded of expensive, top-heavy, waterfall projects where the first release was completed and presented to the customer, who quickly told the project, "This isn't really what I wanted."

That's why most successful projects nowadays are done using an "iterative" model, where the customer is constantly involved in the design process.

In this context, the "requirements" are more loosely defined, and they evolve somewhat as the project progresses.

Robert Harvey
+4  A: 

A big portion of agile development methodologies is to accept the fact that requirements WILL change.

Therefore, you should not try to fight that and instead create a process which embraces that.

Chi
+12  A: 

For successful requirements elicitation you need to:

  • have your customer on site: discuss the requirements and let them explain the requirements to you
  • make the requirements testable and verifiable: at the end you should be able to go over the list and directly verify their correct implementation in the end product
  • give them an appropriate level of detail: there exist different types of requirements (goal-level, domain-level, product-level, design-level), and requirements should be classified appropriately

Usually the problem lies in a lack of communication and understanding between the customer and the developer. Moreover, keep in mind that sometimes even the customer doesn't have a clear picture of what he wants. Therefore discussion, paper prototypes, etc. are really important.

This pic is my favourite :)

Juri
+1 I've seen this graphic before. I wonder who originally made it.
User1
@User1: you can create one yourself at http://www.projectcartoon.com/. Have fun ;)
BalusC
The most interesting thing happened when my 4-year-old daughter asked me to explain what these pictures mean ;-))).
Roman Nikitchenko
+1  A: 

Probably they mean "miscommunicated" requirements.

If you think about it, there are many ways you can mis-state requirements, either intentionally or otherwise. Some ways to deal with the problem:

  • Realize that the requirements of a system can change continuously. Otherwise the client will say "yeah, that changed, nobody told you?"

  • Ask several key people about the requirements - it's not enough to ask the CEO, and likewise, it's not enough to ask only the lower ranks who will actually be using your system.

  • Make sure there are a handful of people accountable for communicating the requirements to you - These people (no more than 5 in a mid-size project) should have a BIG incentive to give you all the information for a successful implementation. If you do not have these people, you are likely to fail: everyone will be too busy to explain things to you, and they will have an incentive not to talk to you, since you will be able to claim that person X told you to implement the system the way you did. You need management's support in creating this group of people.

  • You need to verify assumptions with other people. Sometimes you need to ask the same question five different ways.

  • Be afraid of absolutes... "The sale price cannot be changed" sometimes means "I'd like a supervisor override to be implemented in case a price needs to be changed for the current customer."

  • Understand the business process as much as possible. If you are writing a banking application ask to spend a day at the bank to see how people would use the system. If you deliver a phase of the project do the same thing: Watch the system being used, and be proactive in looking for holes.

  • Recognize when something is not specified in enough detail and insist on getting it right. Do mockups, hand drawings, flowcharts, whatever it takes to make sure the source of the requirements and you are on the same page.

These are all just from experience... I think "bad requirements" really means "bad communications between client and implementor."

gmagana
+2  A: 

In agile, we use the acronym INVEST. Stories (which stand in for requirements) should be:

  • I - Independent
  • N - Negotiable
  • V - Valuable
  • E - Estimable
  • S - Small
  • T - Testable

Requirements are not an artifact to be handed to you from a mountaintop. They are a living byproduct of a process of discovery and conversation between you and your customers (or their proxies).
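To make the INVEST list above concrete, it can be treated as a simple review gate on each story. A minimal sketch (the field names, comments, and the sample story are hypothetical illustrations, not part of any standard agile tooling):

```python
# Sketch: score a user story against the INVEST checklist.
# The field names and the sample story are illustrative assumptions.
from dataclasses import dataclass, fields

@dataclass
class InvestCheck:
    independent: bool  # can be built without waiting on other stories
    negotiable: bool   # details still open to conversation
    valuable: bool     # delivers something the customer cares about
    estimable: bool    # the team can size it
    small: bool        # fits within one iteration
    testable: bool     # has a verifiable acceptance criterion

    def failures(self):
        """Names of the INVEST criteria this story does not meet."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

story = InvestCheck(independent=True, negotiable=True, valuable=True,
                    estimable=False, small=True, testable=False)
print(story.failures())  # → ['estimable', 'testable']
```

A story that fails any of the checks goes back into the conversation with the customer rather than into the iteration.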

shoover
+1  A: 

My experience shows the following possible sources of bad requirements:

  • Users / clients often don't know what they want. The possible ways to handle this are either to have good business analysts who can perform the required analysis, or to have a good ready-made product that may suit the user (or not).
  • Analysts can't provide the appropriate quality of requirements. Yes, it happens. Hire better analysts / technology experts before the failure, not after. Test requirements, analyze usage scenarios, draw state and sequence diagrams as early as possible to understand use-case coverage, and so on. In other words, this is related to general modelling.
  • There is also the possibility of a bad translation from the marketing requirements / model into technical specifications.
  • Design quality problems (the implementation can't meet the requirements).

What should be done to overcome these problems? Let engineers provide feedback; don't close requirements prematurely, and make them as flexible as possible. Often, even with generally good, consistent requirements, we face some low-level hardware limitation at the implementation stage and need to track changes back. On the other hand, let's understand customers, not only technologies. I have seen a number of projects where large parts of the work were thrown away just because they looked good to developers but not to customers. The better your communication with the customer, the lower the probability of such cases.

My understanding is that the process should allow flexible requirements changes during all stages, but on the other hand should keep all of this work trackable and limit scope to the minimum required. The problem is striking a balance between all of this. At the least, my suggestion is that we should move to the shortest development cycles possible to lower all the risks.

Roman Nikitchenko
+1  A: 

One of the most valuable things that a development organization can do (but is rarely done) is to validate the requirements. Mock up a design, as quickly and inexpensively as possible, and review it with the customers. If at all possible, do it in a way that the review can be structured as a task walkthrough, so developers and users together can walk through use cases and decide whether the proposed design solves the problem. Then, if necessary, do it again.

There's an excellent book on gathering and understanding requirements called User and Task Analysis for Interface Design by JoAnn Hackos and Janice Redish. It's a big book, but it's very readable and filled with practical tips and tools.

Cylon Cat
A: 

What makes a bad requirement? One that's not there.

I see a lot of good answers here about a bad requirement being one that is miscommunicated or half-baked. And they're probably correct.

But for me one of the worst types of "bad requirement" is the one that's simply missing. I see this time and again in systems. A day after going live, the users say, "Oh, what about XYZ? We really need that." To which the project team responds, "XY what? We've been working on this project for a year and NOW you tell us?"

Why is it bad?

This is a killer because now everyone has to scramble and rush out a solution, not that the average developer needs any help promoting half-assed things to production, but you just know it's going to spell lots of production support for all the poor people this 'solution' is handed over to for maintenance...you know, the ones that didn't get project bonuses.

Again, this is not a bad requirement but one that was never a requirement to begin with. That doesn't mean it's invalid; it most certainly could be critical. But between the rush to get things done and an aggressive project pace, and the fact that we're all humans and we make mistakes, this was overlooked.

How do you avoid it?

You could spend more time up front and hope a sharp subject matter expert picks up the missing gap. A more effective and more costly method is taking the time to engage what some call a "model office" phase. This is like a system test, but designed to simulate real life conditions. The testers aren't just verifying that the system delivers correct output for 1 + 1, but that all its parts work within the context of the business process.

This is a hard sell of course. Many projects will give business analysis and testing the short shrift in order to uphold the almighty metrics of "on time and on budget". But if you want to shake these missing requirements out, you have to let the user run with it. It's then that they'll recognize things they took for granted in a verbal requirements definition session. Agilists would add that this test needs to be done as early and as often as possible to uncover these risks and give the project team time to identify their priorities and make adjustments where warranted.

Bernard Dy
