+68  Q: 

Requirements Smells

We've all heard of and debated Code Smells, and occasionally you hear people talking about "design smells", so I've been thinking: why not take it one step further?

If you deal directly with customers or have to interpret functional requirements, you'll be familiar with the sensation: It sounds like a really bad idea, it's probably completely unnecessary, it's almost certainly never going to work reliably, it will be a massive headache to maintain, and will likely be abandoned after several painful weeks or months of design, development, debugging, more debugging, even more debugging, and finger-pointing stakeholder meetings.

As with all smells, there may be legitimate reasons for some of these requirements, but much of the time, they appear to be the result of someone with little technical or design experience getting halfway through a solution and then dumping the other half - the half that makes no sense - on you and your team. They are the architectural equivalent of "I need a regular expression to validate e-mail addresses."
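To unpack that regex analogy, the "simple" pattern such a request produces tends to be exactly this kind of half-solution. A minimal sketch in Python (the pattern below is a made-up example of the sort these requests yield, not a recommendation):

```python
import re

# A typical "simple" e-mail regex of the kind such requirements produce.
NAIVE_EMAIL = re.compile(r"^[A-Za-z0-9_]+@[A-Za-z0-9]+\.[A-Za-z]{2,3}$")

# It accepts the obvious cases...
print(bool(NAIVE_EMAIL.match("alice@example.com")))        # True

# ...but rejects addresses that are perfectly valid per RFC 5322:
print(bool(NAIVE_EMAIL.match("first.last@example.com")))   # False: dot in local part
print(bool(NAIVE_EMAIL.match("user@mail.example.museum"))) # False: subdomain and long TLD
```

The half that "makes no sense" is the rest of the job: either loosening the pattern until it stops rejecting real customers, or admitting the requirement was really "confirm the address works", which a regex cannot do.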

A couple of examples from my career, to get people started:

  • Customization of business logic as an end goal - i.e. so the company or individual "doesn't need a programmer to make changes";
  • Performance/scale requirements with no metrics, or metrics that are way out of whack with the state of the business (the company has 10 customers and the system must support 10,000 transactions per second);
  • Requirements expressed as [poor] implementation details (e.g. must use Excel files as a permanent data source).

What are some "requirements smells" that you've learned to recognize?

+14  A: 
  1. A use case with no Actor. Happens all the time. Sometimes there will be an actor named "User" but it amounts to the same thing. Not a description of anyone's job or goals. Just a statement of things that the system does.

  2. A use case with no Interaction. The use case has two steps, one of which is a placeholder for the actor "initiating" things, and the rest of the use case is a massive traditional specification of things the system does. No reference to what the actor might do with that, or how all of that creates value or satisfies anyone's goals.

S.Lott
Great! I see these two *all the time* but was never quite sure how to put it into words. They're almost always accompanied by the explanation: "some people want it."
Aaronaught
I'd add that the use case needs to state the actor's goal and name the scenario specifically. Also variations should be kept as separate items otherwise the flow of a single use case gets complicated quickly.
Kelly French
+40  A: 

Expecting something to be infinitely "extensible" or "customizable" is usually a bad one. In these cases you usually end up with an overengineered POS that no one understands, that can't do what it was originally designed to do particularly well, and that suffers from the inner-platform effect.

dsimcha
Ah, I've heard of this, but did not know the official name. Why use a database when you can just roll your own? LOL.
Kaleb Brasee
+1, I saw that a few times. Essentially, it seems like many companies want to clone Microsoft Access: They want to get an application-creation tool and an application built on top of that.
Michael Stum
I'm stuck in that
Pierre-Alain Vigeant
Yeah, me too, all fields are configurable on the screen for all user roles, it works for the moment, but guess who is going to configure all that?
HeDinges
+37  A: 
  1. Requirements that have no corresponding test. Usually this is the result of a requirement that is too vague to be testable, which will cause problems later. I never regard requirements as complete until a set of tests has been written for each requirement (not in code, but in an English-language, high-level description of what the test is and the success criterion for passing it).

  2. Tests that have no corresponding requirement. This often means that there is an implicit requirement that is creeping into the system and will cause problems if there are changes needed later. The difficulty is that the design space will be overly constrained by unnecessary implicit requirements, preventing possible discovery of a more optimal solution.

  3. Use of "should" or "must". This is very common in requirements documents, but it seems pretentious and doesn't age well. The requirement should simply state something about the program's behavior, i.e. "The program reads XML-foo format data from a file" rather than "The program should read XML-foo format data from a file". This is easier to understand, and ages much better - i.e. it makes sense after the program is written. After implementation, the requirements can simply be reviewed to see if they are true of the program as written. The program either satisfies the requirement or it doesn't, no should or must involved. The requirement is then a simple propositional statement that is either true or not - use of "must" and "should" is logically much more complex since it requires something like possible-worlds semantics to represent.

  4. Requirements that are too detailed. This often indicates that someone is thinking at the implementation level rather than the requirement level, and "how" is creeping into the requirements. Requirements should be about "what" not about "how".

  5. Requirements that are hard to understand. This is often a hint that the requirements engineer has been thinking in the design space and is letting functional specs (which tend to be harder to understand) creep in as a requirement. A requirement fulfills a need of the end-user, and as such should usually be understandable by the end user and thus relatively simple (not implying that users are dumb :) to understand by a developer.

  6. Requirements that are overly "demanding." For example "Must be able to import from any data format." This results in a system that is, like another poster wrote, required to be infinitely extendible or customizable. It results in bad software because you end up writing something so powerful it is unusable for the average user.

Larry Watanabe
"2. Use of "should" or "must". <...> The requirement **should** simply state something..." :-)
Pavel Shved
:) Can I wriggle out of this one by saying "should" is ok for a "smell"?
Larry Watanabe
'Should' and 'Must' are specific terms that started out life in RFCs, which are not wholly unlike requirements documents, although they are done with enough specificity to be used to implement a protocol. The words have specific meanings in this context, although they can be misused in requirements documents. However, they are not out of place if used appropriately.
ConcernedOfTunbridgeWells
I feel that using "should" and "must" is a little redundant. The fact that a statement about the program is IN a requirements document, implies that the requirement should be fulfilled. In other words, we can factor out all the "should" and "must" from every statement, and assume that the document starts with an implicit "it is required that .." followed by Requirement 1, 2, etc. I believe that this yields a clearer document, and is also less confusing on review after implementation, as everything in the requirements document should simply now be true of the program as implemented.
Larry Watanabe
+27  A: 

My problem with your examples is that they aren't requirements, per se; they're poor implementation decisions someone made.

A customer may, in fact, have a legitimate business reason to import data from an Excel spreadsheet (e.g., that's how they get data from their supplier).

Requirements should be expressed in terms of business needs. Frequently, people can't help but dive into implementation details when expressing requirements. We geeks do that a lot, but I've seen vice presidents and sales people do that. It's easier to think in terms of the concrete than in terms of the abstract.

However, in my view, when we are acting as requirements analysts, it's our job to take things up a level, away from specific implementation decisions. For example:

  • If a guy from sales says, "I need to be able to stuff an Access database in to this application!", my first question is, "Why Access? Why not Excel? Why not CSV? Is there something specific about Access that fits the problem, or is it just what you're used to using?" His answer will help clarify the analysis.
  • If the CTO says, "This has to be done with .NET", is he insisting on that because he's a Microsoft bigot? Or is he merely misstating the real requirement, which is that it must be a Microsoft-based solution, because his company's core competencies are all based on Microsoft tools?

Senior software architects and designers often must be the people who eliminate what you call "requirements smells" by negotiating different requirements. But sometimes, you have to live with those "smells". The business requirements may, legitimately, be, "My best customer sends me Excel files, and they're not willing to convert to the XML your system wants without charging me more money than I can afford to spend."

Just my two cents' worth.

Brian Clapper
I would agree that they are implementation decisions and not proper requirements; the problem is, they often *arrive as requirements* with no strong justification. Maybe those fit into the "use case with no Actor/Interaction" that S.Lott posted. Sometimes, though, these requirements are OK because the interaction is implied and the justification is obvious, e.g. "Send e-mail alerts when a new ticket is created." Still, good answer.
Aaronaught
*Senior software architects and designers often must be the people who eliminate what you call "requirements smells" by negotiating different requirements.* +1, absolutely. Requirement smells are a consequence of lacking these people or this competence.
molf
+6  A: 

A requirement for high performance or low memory usage, without any statement of metrics for success.

Douglas Leeder
+1  A: 

Implementation Details.

Anything related to performance, maintainability, adaptability or cost.

S.Lott
-1 performance is not an implementation detail.
MatthieuF
Performance is a perfectly acceptable requirement.
Rob
@MatthieuF: Pretty sure the lines were intended to be two separate bullet points. Obviously cost is not an implementation detail either.
Aaronaught
+7  A: 

Tautological Use Cases:

  • Action: I want to be able to post a new entry on my Blog
  • Reason: So that I can post on my Blog.
  • Acceptance: When I can post on my blog.

To me that's not helpful at all.

What I've seen is that there is usually a lot more conversation about posting a blog entry than just what's expressed on the story card. When that is not captured, you're setting yourself up for implementing the wrong thing.

jfar
Just my 2 cents... people who produce such use cases either have a poor working attitude or lack the communication skills to express their ideas. Yet this kind of requirement is everywhere. "It is the programmer's job to figure out what is in my brain."
shiouming
+8  A: 

If anyone ever hands you a printed-out requirements document, you might be in trouble.

If it weighs more than a pound, you're probably in trouble. (Or you might work somewhere that thrives on formal processes; I'm told such places exist.)

If you turn to a random page and see a section numbered something.1.1.1.1, you are definitely in trouble.

(Several of the other answers here are better, but these smell tests have the advantage of being extremely quick to apply.)

Jason Orendorff
Smell tests don't have to be overly general - I was hoping this would be a bit like the code smells discussion, so quick and easy is good!
Aaronaught
+2  A: 

Another that I recently remembered:

  • Any requirement specifying that the software must read (not send) e-mail. Also see Zawinski's Law.
Aaronaught
+7  A: 
  • Any time something is described as "good", "quick" or "easy", as in "a good UI". That's completely subjective, and you're unlikely to produce anything that is accepted without first making a project out of defining what "good" is.

  • Requirements to integrate with specific third-party software that is not already in use within the company and has been selected by non-technical users. The stuff you're integrating with is invariably dreck that doesn't work well "but we're already locked in".

  • "We have to be able to customize this" without specifying exactly what about it to customize. Another variant is "we need to be able to support any marketing promotions that come up".

  • Any request to redesign the signup process that isn't based on user testing. There is no perfect signup process - it's always a compromise between getting more information (that CS and marketing can use) vs making it simpler for the user. If you break it up into bite size bits then people complain that it's too long. All in one step makes a single page with an intimidating form. If you're not making your changes based on testing then it becomes a political football and you're going to keep redoing it all the time.

  • Any requirement that implies a dependency on another group who is not invested in the project. If they don't have skin in the game you're likely to get back-burnered and finish late if ever.

  • Requirements to integrate with third party software when contracts are still being negotiated. You may never get your contract, and if you do your technical contacts are likely to shift in mid-project as you go from being a prospective customer to one with a signed contract.

Basically, these fall into three categories:

  1. Vague requirements that need to be clarified. If you realize it's happening, you can work to make them more specific.
  2. Implementation decisions being made by the wrong people. This can be more of a political problem. Hopefully you can negotiate the requirements somehow.
  3. Implementing before you're ready. Either you're being over-eager or the project timeline is dangerously short.
edebill
A: 

Just an example. The worst (or most funny, depends how you view it) requirement I have seen so far was like:

"The application shall be ported from the Open Look user interface."

The story behind this was that we had an Open Look-based application which product management wanted ported to Motif. This had already been estimated, and the huge costs had killed the requirement several times. Nevertheless the product manager wanted it, and he phrased it in a way that did not immediately kill the requirement (quite irrational...)

Upon reading it, we complained, asking that the requirement be written in a way that stated where the application was to be ported to.

The next revision of the requirement read "The application shall be ported to a non-Open Look user interface."

Eventually the requirement got killed again.

Bernd
+4  A: 

Any user-interface requirement that provides more detail than a rough wire-frame sketch is likely to have the stench of the soon-to-be-dead.

A few examples...

I worked for a Mac software startup in 1987-8 where neither the product managers nor the initial client were Mac-heads. They deliberately excluded the technical staff from their meetings, where they spent a lot of time negotiating specifications that demanded we re-jigger the location of buttons in OS-standard file open dialogs, etc. The requirements spec they finally came up with was so heavy on UI details, and so light on functionality, that we could have met it with a system that didn't do a damn thing that was any use at all. The company folded prior to shipping.

One company I worked for had UI specs that required custom list cell widgets - that was fine - but the precise leading between lines of text was specified in pixels. "What if the user changes fonts?" "What do you mean?" "You specified 2 pixels leading here for a 10 point font, and if the user switches to an 18 point font, they need more leading. So we need a math formula based on the font size." "Oh, the designer is an artist who likes Photoshop and doesn't do math." We didn't ship that, either.
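For what it's worth, the "math formula" the designer couldn't supply is a one-liner. A proportional rule like the following would have done the job (the ratio and minimum are illustrative values chosen to reproduce the spec's 2 px at 10 pt, not figures from the original spec):

```python
def leading_px(font_size_pt: float, ratio: float = 0.2, minimum: int = 2) -> int:
    """Scale inter-line leading with the font size instead of hard-coding 2 px.

    A 0.2 ratio is illustrative: it yields the spec's 2 px at 10 pt and
    grows the leading proportionally at larger sizes.
    """
    return max(minimum, round(font_size_pt * ratio))

print(leading_px(10))  # 2 px, matching the original spec
print(leading_px(18))  # 4 px for the 18 pt font
```

The point isn't the particular ratio; it's that the requirement should have been stated as a rule ("leading scales with font size") rather than a pixel constant for one font.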

At the same employer, we had a partner that was supplying UI requirement designs, again as Photoshop renderings. In fact, they'd supply a new several-hundred page design document every couple of weeks, with lots of beautiful color drawings and workflows that completely contradicted what they'd sent us last time. We couldn't implement and re-implement and re-re-implement the user interface as fast as their caffeine-fueled artists could do mock-ups, so they eventually pulled the plug on the project.

I'm all for beautiful, polished user interfaces with things specified to the pixel level... but that's what you do during alpha or beta, not in your initial requirements documents.

Bob Murphy
+1  A: 

Requirements for system reports where the metrics cannot be clearly described usually become time sinks. This is often seen on the work list as vague requirements like "Build a set of sales reports" or "System should output volume and inventory change reports" that sound easy but require more business domain knowledge than the developer is likely to have.

It often takes weeks to pull a useful story out of a vague description.

These report requests seem simple at first but are almost always problems, as the users make so many contradictory assumptions about what a "report" is and what it means to run a report.

Any requirements for system use reports usually have a mild stench to them. I have seen systems where the logging of user info exceeded the complexity of the tasks performed by the system. One system tracked the cumulative time of every SQL query by user to display "DB Time" in the admin console.

sal
+2  A: 

The absolute worst requirement I've ever come across was the requirement to pass any and all tests specified by the client, where the client may specify additional tests if necessary before accepting delivery. No kidding, that was the actual legal requirement.

What happened next was the system (software+hardware) was developed in a little over a year and it took us two extra years to pass all the tests.

slebetman
pass testes specified by the client? Ewww, gross!
FastAl
@FastAl: The problem was not passing the tests. The problem was for each test we pass the client kept writing new ones for us to go through - more than a year after the product was developed to spec. In short, it was a backdoor to feature creep enforced by legal requirements of the contract.
slebetman
@FastAl: Ah sorry, didn't spot the typo even with your help. Must be getting blind... fixed.
slebetman
+15  A: 

Any requirement that does not pass the 'SMART' acronym test:

  • **S**pecific
  • **M**easurable
  • **A**ttainable
  • **R**ealistic
  • **T**raceable

See Mike Mannion and Barry Keepence, "SMART Requirements", at http://www.win.tue.nl/~wstomv/edu/2ip30/references/ (which links to a PDF).

the empirical programmer
Is "Attainable" any different from "Realistic", or are they both just there to complete the acronym?
Paul Stephenson
Argh, brings me nightmares back from school
Ikke
acronymitis.. ugh
Marcus Lindblom
I used A for Achievable
pramodc84
+5  A: 

Technical solutions presented as requirements:

Requirements should describe the outcome for the business, not how it should be implemented; that's the bit we are good at.

Tony Edgecombe
+3  A: 

As a sub-category I would add Comparison or Metaphor smells: when the client is asked how they want something to work, they cite another system they have seen as the model, or even two other systems that must be emulated in the target system. It indicates that they have no real idea how that part of their business works, and hope you can find out enough about it to retrofit someone else's (possibly bad) design onto it.

One example was when building a booking engine to sell tickets to an attraction: we were asked to make it work like a site selling handbags and a site selling airline tickets, with no indication of which parts from each; it was just up to us. Needless to say, the client's business model matched neither of the other companies'.

PurplePilot
+2  A: 

A few that I don't see yet:

The requirements actually specifying how the system should be implemented. I've seen this such that the internal storage mechanisms (and formats) are specified in the requirements doc even though they never escape the internals of the system.

The specification has chunks of pseudo code (with loops and control structures in it) which do not reflect the implementation that is actually going to be required (e.g. a VB-style spec for a massively multi-threaded, even distributed implementation). This can really hurt when v2 of the spec makes a simple tweak to the pseudo code that simply cannot be done without destroying the current model. Beware code in specs, and if it is there then make very sure that you explain the implementation strategy to the author and try to get the spec changed to reflect what you plan on building. This will quickly show if your planned implementation is going to fail on v2, as the user will spot future flaws in your implementation design.

Similar styles of functionality or control that are specified in very different ways for no obvious reasons. At this point you are building yourself a maintenance nightmare since the system will not be intuitive to anyone other than the author of the spec. Again this is where you need tech involved early enough that these kind of things can be nipped in the bud before the whole spec is full of them.

My favourite line in a spec so far has to have been:

Will have standard Windows help

That was genius, short but serious impact. Wrote the app for a single user (who had a user manual too) and had to put full context sensitive help all over it, which I doubt anyone other than the tester ever even saw (probably took 1/4 of the dev time to do it too - Win3.1, those were the days). I'll bet the user didn't even know that line was in the spec.

DaveC
A: 
  • Any requirement about performance, storage, memory, response time, availability or any other non-functional spec without the metrics to be evaluated against.
  • Requirements about customization of business logic without programming ("let's use some fancy framework, so the user can be able to edit an XML file to modify the system behavior if foo changes...").
  • Everything that says "automatically" but seems more like "the system auto-magically will do this..."
  • Too detailed specs, that looks like pseudo-code, usually written by people who had programmed early versions of the system -or similar ones- and think about programs rather than domain problems and concepts.
JuanZe
+1  A: 

Requirements for a system that don't specify what to do when things go wrong (or even acknowledge that things can go wrong).

Caleb Huitt - cjhuitt
A: 

Requirements that specify a certain functionality, but don't explain (or even seem to consider) how that functionality interacts with the rest of the system.

Caleb Huitt - cjhuitt
+2  A: 

Vague overly generic requirements for the system to be 'future proof'.

If I had a crystal ball, I wouldn't be working for a living.

Phil
+1  A: 

Listing all the possible functionality of the product, or copying your competitor's functionality, in order to guess what functionality your product should have.

iChaib
+1  A: 

I've encountered this line in a requirements document:

  • Support for SQL Server, Oracle, MySQL, PostgreSQL and Informix.

When I asked them which databases they actually use, they didn't know the answer.

Analyzing the rest of the document it became obvious that they merged all possible features from all similar applications.

Hugo Riley
+1  A: 

"Smell this."

<sniffs>

"I smell nothing"

"What you do not smell is the lack of any requirements."

Epaga
Double negatives are a communication smell!
Aaronaught
double negatives just aren't what they used to not be.
Epaga
A: 

Anything that sounds even vaguely like "The New System should do everything the Old System does", where what the old system does is not defined.

also the word "functionality"

nailitdown
+3  A: 

The requirement that, even though there exists no capability in-house to develop the software, the software should be developed in such a way that allows it to be maintained in-house, to save money.

"once it's done we want to just be able to tweak it"

nailitdown
+3  A: 

Low-value features that will yield a poor return on investment, like automating a process that only takes someone ten hours a year to do manually.*

Such requirements have myriad problems, but the biggest one is that no one cares enough to specify them in detail.

If you see a lot of these it's often a sign that a project is done for. (And if you see a lot of these across multiple projects, it becomes a workplace smell.)

* Of course, the value of a feature is sometimes difficult to measure - sometimes you need a feature for marketing purposes even if no one is going to use it, and sometimes you need to automate a process because the person that does it by hand will quit if you don't.

Jeff Sternal
+1  A: 

Simple

When somebody wants something they consider 'simple', it almost always means the opposite.

Jan Limpens
I've heard that before!
Moshe