views:

998

answers:

5

I am currently examining the benefits and costs of introducing requirements traceability into the development process where I work. I can see the potential benefits to the stakeholders of traceability, but I'm unclear about how you can go about implementing the logistics to trace requirements back from code.

  • Do you need to use special tools, like RequisitePro or Kintana?
  • Can traceability be implemented just through documentation or design artifacts?
  • Should each version control change identify the requirement(s) being addressed?
  • Should each module/class/function identify the requirements it fulfills? If so, then how?

I would like to keep this question objective and practical, and avoid philosophical rants or product sales pitches. I'm interested in the practical day-to-day steps that developers/team leads need to take to make requirements traceability possible.

I would be happy to look at any links or references people are aware of, or hear about how you deal with traceability in your organization. I would also be curious about how beneficial you find traceability has been for you, if you have experience with it.

+1  A: 

We tried RequisitePro with one customer. No one liked analyzing and organizing the requirements; it felt like too much "overhead" work. The analysts didn't do it, and we programmers didn't really have much access to it.

Traceability through internal documentation was how we did it when I worked on military software projects. Structured documentation.

Nowadays, I suggest you use nice document production tools like Doxygen, JavaDoc, Sphinx, and the like to gather traceability right out of the code.
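To make that concrete: even without a full documentation toolchain, a throwaway script can pull requirement tags straight out of the source. A rough sketch, assuming a "Requirements:" tag convention in docstrings or comments and REQ-nnn style IDs (both are inventions for illustration, not part of any tool):

```python
# Sketch: walk a source tree, pull requirement tags out of docstrings/comments,
# and print a trace matrix.  The "Requirements:" tag and REQ-nnn IDs are assumed
# conventions for illustration only.
import os
import re
from collections import defaultdict

TAG = re.compile(r"Requirements:\s*([A-Z]+-\d+(?:\s*,\s*[A-Z]+-\d+)*)")

def trace_matrix(root="src"):
    matrix = defaultdict(set)               # requirement id -> set of files
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as fh:
                for match in TAG.finditer(fh.read()):
                    for req in re.split(r"\s*,\s*", match.group(1)):
                        matrix[req].add(path)
    return matrix

if __name__ == "__main__":
    for req, paths in sorted(trace_matrix().items()):
        print(req, "->", ", ".join(sorted(paths)))
```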

Version control has little to do with requirements, per se. It has everything to do with releases and code. Releases may map to requirements or product backlog or sprints. There are too many twists and turns along that road to be fussy about version control and requirements.

"Should each module/class/function identify the requirements it fulfills? If so, then how?"

Comments that can be parsed by a tool. Doxygen, JavaDoc, Sphinx, etc. One lesson learned from the Python community is not to use too much HTML or XML in the comments. Use ReStructuredText or Markdown or some other lightweight, plain-text markup language.

With RST, you can define your own directives or interpreted text roles specifically for requirements management.
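As a sketch of what such a role could look like with Sphinx (the :req: role name, the REQ-nnn identifiers, and the tracker URL below are made up for illustration; nothing here is built in):

```python
# Minimal Sphinx extension adding a :req: role; drop it into conf.py or an
# extension module.  Role name, ID scheme and URL are illustrative assumptions.
from docutils import nodes

def req_role(name, rawtext, text, lineno, inliner, options=None, content=None):
    # Render :req:`REQ-142` as a link into a (hypothetical) requirements tracker.
    url = "https://tracker.example.com/req/" + text
    return [nodes.reference(rawtext, text, refuri=url)], []

def setup(app):
    app.add_role("req", req_role)
    return {"parallel_read_safe": True}
```

A docstring can then say "Implements :req:`REQ-142` and :req:`REQ-170`", and the requirement IDs come out as links in the generated documentation.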

Here's the big issue.

If you've got a waterfall project, where lots of hands have touched the requirements between product owner (or worse, end-user) and programmer, then the requirements are largely meaningless technical details. A lot of technical details.

Each detail needs a unique ID. These will be complex. Oh well, you did waterfall, you embraced complexity.

If you're using a more Agile method, you've got a backlog of relatively simple user stories that go from product owner more-or-less directly to development team. You need to identify the story, but that's easy. They have names (or perhaps numbers). A story permeates several aspects of the software being developed. This is easy. And not very complex.

S.Lott
How would you use a document production tool to gather requirement traceability data? Would developers tag code with some kind of number? Would there be annotations that need to be matched against a database? **A concrete example would help here.**
LBushkin
@LBushkin: Yes and yes. It's just the obvious comment that says `/** requirements: XYZZY, FOO and PLUGH */` or something to that effect (the sketch in the answer shows one way of harvesting such tags). Try not to make it harder than it already is, otherwise no one does it.
S.Lott
+2  A: 

I'd strongly recommend using a traceability tool. I worked years ago on a project which had to have complete traceability of code and tests to requirements, and of requirements from low to high levels. Eventually, you wind up with a database of requirements and traces between items. IIRC, we used spreadsheets to manage this information, and it became tedious. I haven't used any of the tools myself, other than playing with demos, but I believe that, short of creating my own requirements database, a tool would be the most practical way to do this tracing.
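For a homebrew version of that database, even something as small as a SQLite file beats the spreadsheets. A sketch of what the structure might look like (table and column names are illustrative assumptions, not taken from any tool):

```python
# Sketch of a homebrew trace database: requirements, plus links from
# requirements to code and test artifacts.  Schema is an assumed example only.
import sqlite3

def create_trace_db(path="trace.db"):
    con = sqlite3.connect(path)
    con.executescript("""
        CREATE TABLE IF NOT EXISTS requirement (
            id        TEXT PRIMARY KEY,                    -- e.g. 'SRS-4.2.1'
            level     TEXT,                                -- 'high' or 'low'
            parent_id TEXT REFERENCES requirement(id),     -- low -> high trace
            text      TEXT
        );
        CREATE TABLE IF NOT EXISTS trace (
            requirement_id TEXT REFERENCES requirement(id),
            artifact       TEXT,                           -- source file or test id
            kind           TEXT                            -- 'code' or 'test'
        );
    """)
    con.commit()
    return con
```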

This project used the waterfall method, so that's one big consideration.

Was it beneficial? Probably, but it was required.

One big benefit was that it made the creation of test cases a lot easier: all requirements had to be tested, all code had to trace to a requirement, and all low-level requirements had to trace to higher-level requirements.

One big drawback was the amount of drudgework involved. For example, validation of each parameter winds up being a different set of low-level requirements. Essentially, you write the same thing once in requirementese, once in code, and then finally in tests.

One feature to look for in tools is the ability to handle versioning of requirements. This can tie your traceability in knots if not managed.

Frank Ames
You mention that you'd recommend using a traceability tool. Any in particular?
LBushkin
I liked what I saw in RequisitePro, although I haven't used it "for real". It seems to address a lot of the shortcomings in the homebrew system we used. Have not tried any of the others.
Frank Ames
+1  A: 

G'day,

One of the best implementations that I've seen for this consisted of specially formatted comments in the code referring back to the requirement numbers in the requirements doc.

This seemed to work because

  1. the system was a new implementation of an existing system that was heavily documented (requirements capture phase was around 18 months long), and
  • the requirements were heavily controlled to ensure that there were no implementation aspects contained within the requirements themselves.

Here's my answer to a separate question "How Much Designing Should Go On Before Any Coding Takes Place?" that has more details about the system.

Edit: As requested in the comments, here are a couple of thoughts about extending this to DBs and UML diagrams.

If you're starting to try and associate aspects of DB design with requirements then that smells a bit like you're capturing implementation decisions as requirements. Remember that requirements should always be decoupled from their implementation.

The same could be said for trying to couple UML with requirements. Though these could be indirectly coupled if some requirements are captured as user stories, or as use cases which are basically user stories with a lot of compulsory, associated baggage IMHO. (-:

Either way, trying to associate system requirements with design, whether DB or UML, raises alarm bells for me that you are mistaking implementation decisions for requirements. Have a look at the excellent book "Writing Better Requirements".

HTH

Rob Wells
Do you have any ideas on how to match design artifacts (UML, DB models, etc) to requirements?
LBushkin
+2  A: 

Do you need to use special tools, like RequisitePro or Kintana?

Technically unnecessary, but a tool can make things much easier. My company used a proprietary internal tool that took a requirements document in MS Word format and spat out various workflow-related items, like test case templates for the test group and tasks for the project schedule.

Should each version control change identify the requirement(s) being addressed?

Yes, you should be able to trace every code check-in to a requirement. That's a main goal of traceability: making sure all changes are made for a reason that the project stakeholders have considered worthwhile.

We used a source control system that required you to associate each check-in with a bug report. So naturally every major requirement had a report in the bug tracker, and all code checked in would somehow relate to one of these.

But we were lax about it. It was OK for us to trust that each developer was doing work associated with a requirement and therefore we didn't really care whether a developer's feature in the bug database was linked back to the major requirement. But I think the key is that all code was directly traceable back to a report in the bug database, and with some manual effort you could figure out what requirement it was related to if necessary.
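If you wanted to automate that "some manual effort", here is a sketch of pulling the IDs back out of the history. It assumes git and a BUG-nnnn convention in commit messages, which is not necessarily what we used; adjust the pattern to your own tracker:

```python
# Sketch: map bug-tracker IDs to the commits that mention them, by scanning
# commit subjects.  Assumes git and a "BUG-1234" style ID convention.
import re
import subprocess
from collections import defaultdict

BUG_ID = re.compile(r"\bBUG-\d+\b")

def commits_by_bug():
    log = subprocess.run(
        ["git", "log", "--pretty=format:%H %s"],
        capture_output=True, text=True, check=True,
    ).stdout
    mapping = defaultdict(list)              # bug id -> list of commit hashes
    for line in log.splitlines():
        sha, _, subject = line.partition(" ")
        for bug in BUG_ID.findall(subject):
            mapping[bug].append(sha)
    return mapping
```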

Should each module/class/function identify the requirements it fulfills? If so, then how?

I don't see why, and I've never done that. Specifically, I've always been involved in code that carries over for years and years to new projects, and it seems like we'd end up with a lot of useless info embedded in each module. For example, say we were using a log library written 10 years ago. Does anyone care that 10 years ago the library met a requirement that logs be stored on a central server, that 5 years ago there was a requirement to add filtering capabilities, and then, oh yeah, that 2 years ago there was a requirement that logs no longer be stored on a central server because that product reached end of life? And so on.

The bug tracker took care of all code traceability we ever needed.

I would also be curious about how beneficial you find traceability has been for you, if you have experience with it.

Was it useful? Yes, we had very high quality products and we released them on time. We could easily attribute that to knowing all the major work beforehand and making sure new work went through a scope-change process before being added to the project. One drawback was that if developers realized they hadn't accounted for something in the design phase, they tended to keep quiet about it and just do the work without telling anyone, because it was a major pain to go through the whole process once implementation had started.

Our design model was waterfall. I think traceability could work fine in a more agile setting, though, and we were attempting to migrate in that direction but our proprietary tools were designed around our workflow and not easy to change.

I'd rather not recommend specific tools because honestly I bet there are better ones out there.

indiv
+2  A: 

Do you need to use special tools, like RequisitePro or Kintana?

No, but on a big project it helps to have something that gives you a hierarchical view of the requirements network, rooted at the requirement under review (with status history, requirement history, see-also links, and notes attached to sets of requirements), and that makes updating statuses and editing requirements transactional.

Can traceability be implemented just through documentation or design artifacts?

Potentially. Something is traceable if the sponsor and the people conducting any audits consider it to be traceable.

Should each version control change identify the requirement(s) being addressed?

Code check-in comments should contain:

  • Identifier of the requirements communications / documents containing the requirements related to the check-in
  • Identifier of specific points being addressed in requirements (if the document has several points)
  • Narrative explaining whether the change moves towards or away from meeting the requirement points (if not simply "FOLLOWING REQUIREMENT")
  • Programmer-perspective description of action taken

eg: "R129 #1-12,15: Added main controller"

(...meaning a reference to, for example, ProjectX-Requirements-R129.doc, or Requirements-1.doc at revision 129, points 1-12 and 15.)

This is the ideal form, but in practice lots of mistakes will be made. The team lead needs to check whether the comments are right and encourage accuracy.
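The format check, at least, can be scripted. A sketch that parses the shape of the example above (the regex is only one reading of the "R129 #1-12,15: ..." convention, not a standard):

```python
# Sketch: parse/validate check-in comments of the form
# "R129 #1-12,15: Added main controller".  Malformed comments return None
# so the team lead can chase them up.
import re

CHECKIN = re.compile(
    r"^R(?P<doc_rev>\d+)\s+#(?P<points>\d+(?:-\d+)?(?:,\d+(?:-\d+)?)*):\s+(?P<narrative>.+)$"
)

def check_comment(comment):
    match = CHECKIN.match(comment.strip())
    if not match:
        return None
    points = []
    for part in match.group("points").split(","):
        low, _, high = part.partition("-")
        points.extend(range(int(low), int(high or low) + 1))
    return {"doc_rev": int(match.group("doc_rev")),
            "points": points,
            "narrative": match.group("narrative")}

# check_comment("R129 #1-12,15: Added main controller")
# -> {'doc_rev': 129, 'points': [1, 2, ..., 12, 15], 'narrative': 'Added main controller'}
```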

Having one reference that identifies a requirements communication (e.g. a document version), plus a single flat numbering system that starts from one and is unique within that document, helps enormously with conciseness when discussing and specifying multiple requirement points at once.

If you are being really formal, a team admin needs to enter corrected check-in comments, in a more structured form, into another system. So in a very formal setup the check-in comments are the "draft" or "day-to-day" version of the traceability, while another data-entry system forms the master management version.

The check-in can have just a change request number in place of the requirements reference, if the change request itself includes that requirements tracing information.

Having said all that, generally speaking I've found entering detailed requirements tracing data in source control comments to be a waste of time. Unless your shop is centred on traceability above all else, the best approach is usually to use umbrella change request IDs to represent building activity for the big areas of required functionality, side by side with regular change request IDs for bug fixes, and to have the source control system reject check-ins without a valid change request ID.
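For the reject-on-check-in part, with git that can be as small as a commit-msg hook. A sketch (the CR-nnnn format is an assumption, and a real setup would query the change request system rather than just pattern-matching):

```python
#!/usr/bin/env python3
# Sketch of a git commit-msg hook: reject commits whose message lacks a change
# request ID.  Install as .git/hooks/commit-msg (or enforce server-side).
import re
import sys

def main():
    with open(sys.argv[1], encoding="utf-8") as fh:   # git passes the message file path
        message = fh.read()
    if not re.search(r"\bCR-\d+\b", message):
        sys.stderr.write("Rejected: commit message must reference a change request (CR-nnnn).\n")
        return 1
    # A stricter hook would also verify the ID exists and is open in the CR system.
    return 0

if __name__ == "__main__":
    sys.exit(main())
```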

Should each module/class/function identify the requirements it fulfills? If so, then how?

Only if the management data this generates is expected to be used. Formal mapping should not be done in the source code itself (i.e. not in source code comments), since changes to numbering systems, categorisation, and overall structure can render such comments misleading as they go out of date. The mapping can be maintained in a separate text file stored within the source code control system. This mapping should list the revisions at which a given file, or element within a file, became associated or dissociated with a given requirement point.
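One possible shape for that separate mapping file, plus a reader for it (the file name, field layout, and IDs are all invented for illustration):

```python
# Sketch: read a standalone trace-map file kept under version control.
# Example lines (tab-separated):
#   src/billing/invoice.py    R129#7    associated      r1024
#   src/billing/invoice.py    R129#7    dissociated     r2210
import csv

def load_trace_map(path="traceability.map"):
    entries = []
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.reader(fh, delimiter="\t"):
            if not row or row[0].lstrip().startswith("#") or len(row) != 4:
                continue
            element, req_point, action, revision = row
            entries.append({"element": element, "requirement": req_point,
                            "action": action, "revision": revision})
    return entries
```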

martinr