Say we've got requirements for all the functionality we want to build. Each requirement is listed in a Word doc with an associated ID. For instance, Requirement 123 is "each refund, if processed the same business day as the last payment, should be processed as a cancellation and will incur no refund processing fee".

What I'd like is for the code itself to be marked up with requirement ID somehow, so that each code unit has a corresponding requirement it's built to support, and we can tell at a glance if all the requirements have been implemented. Automated testing can even associate a test with each requirement and check off a list of them if unit testing has passed.

It would open up a lot of automation opportunities, tying our issue tracking, documentation and actual code together.

So what's wrong with this idea? If it's a good one, are code attributes or XML comments the best way to implement it? Has someone else already made a commercial or FOSS product that does this? Any other thoughts?

EDIT: The mapping need not be one-to-one. I would be fine with multicasting attributes, so that each code unit is decorated with multiple Requirement attributes, and each Requirement attribute could be found on multiple code units.

+1  A: 

In .NET, you could use attributes:

[Requirement("123")]
[Requirement("456")]
public void ProcessRefund()
{
    // ...
}

Then you can do lots of interesting things, like generating reports based on your .doc file by reflecting over your assembly.
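A minimal sketch of what the attribute itself might look like (it is not a built-in .NET type, so the name and shape here are assumptions):

using System;

// Hypothetical attribute for tagging code units with requirement IDs.
// AllowMultiple = true gives the "multicasting" behaviour mentioned in
// the question's edit, so one method can carry several Requirement tags.
[AttributeUsage(AttributeTargets.All, AllowMultiple = true)]
public sealed class RequirementAttribute : Attribute
{
    public string Id { get; private set; }

    public RequirementAttribute(string id)
    {
        Id = id;
    }
}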

Ariel
I was thinking [Requirement(id="123", document="\\server\docs\refund.docx")]
Chris McCall
That's another possibility, but I would keep the code and doc properties as loosely coupled as possible, for example by keeping just a GUID for the requirement ID and then having a mapping from GUIDs to docs etc.
Ariel
A: 

Pi SHURLOK provide a tool called PixRef that was developed for this purpose. It performs requirements matching across documentation and code. You don't need to specify cross-links, just the requirement tag, then point the tool at the requirements document and all other files to be checked; the tool does the rest. Their Concepts page and Getting Started guide give more detail and you can download an unrestricted demo.

PixRef uses tags in documentation and code to allow cross-referencing to be detected. An advantage of this approach is that no framework development is needed to support requirements traceability, which would be needed if attributes were used. Also, requirements can be tagged anywhere, including in Word documents, code files, and XML.

Disclosure: I am an employee of Pi SHURLOK, though I have not worked directly on this product.

Jeff Yates
Downloading demo now. Waiting on a demo password to begin... TBH, that product looks pretty scary. Seems like it's geared towards huge manufacturing bases...
Chris McCall
We have used this internally for many years. It is quite powerful, but you don't have to use all the features if you don't need to. The user interface is currently in development so feedback, I'm sure, is very welcome if you have any. :)
Jeff Yates
Yikes, that UI is pretty hateful! I'll craft some feedback as soon as I can tell what it is I'm supposed to be doing with the product...
Chris McCall
A: 

Requirements are the heart and soul of a systems implementation project; however, tracing them in the actual code is unmanageable. The reason is that quite often a requirement spans multiple areas of code, and very often a requirement is fulfilled outside of code (e.g. graphics, CSS, network infrastructure, etc.).

What I would do is simply trace the requirements using a requirements traceability matrix (RTM) against the Software Design Specification, the delivered code (the developers use the matrix as a checklist), the UAT (once again, the RTM serves as a checklist), and finally into implementation through the training.

Nissan Fan
A: 

Is it obvious that requirements will always map to localised pieces of code? Even your example depends upon various concepts ("same day" in which timezone? "cancellation", "not incurring a reprocessing fee") which may well spread off into other functions ... set a flag here, have a consequence there ...

Now, the idea of mapping requirements to tests seems very good to me. I have done things like this in the past and found it helpful. It won't be a simple one-to-one mapping; your example will need several tests to show that all corner cases are handled correctly, but marking those with comments (or in Java even annotations) seems to me to have mileage.
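To stay in the question's .NET context, a sketch of what such annotated tests could look like, reusing the hypothetical RequirementAttribute from the first answer on NUnit-style tests (all names are illustrative):

using NUnit.Framework;

[TestFixture]
public class RefundTests
{
    [Test]
    [Requirement("123")] // same-day refund is treated as a cancellation
    public void SameDayRefund_IsProcessedAsCancellation()
    {
        // arrange the account, last payment and refund request here,
        // then assert the refund was recorded as a cancellation
    }

    [Test]
    [Requirement("123")] // corner case of the same requirement
    [Requirement("456")] // hypothetical fee-calculation requirement
    public void SameDayRefund_IncursNoProcessingFee()
    {
        // assert that no refund processing fee was charged
    }
}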

djna
"Is it obvious that requriements will always map to localised pieces of code?"Perhaps not, but all code should be written to fulfill a requirement, shouldn't it?
Chris McCall
I'm expecting that it comes out as quite a large many-to-many mapping. In your example, certain broad concepts such as "reprocessing fee" are supported by much other code for, say, "fees in general", "associating fees with invoices", "persisting fees", "persisting anything". You end up marking much code as being associated with any one requirement, and many requirements being associated with any one piece of code. Go back to the purpose of doing this: "we can tell if all the requirements have been implemented". By just annotating the tests you can achieve this.
djna
A: 

I would say that the first issue is that multiple modules/classes are often needed to implement even simply worded requirements. In your example, there might be account classes, transaction classes, payment schedule classes, etc. that all have to work together to achieve the stated requirement.

Conversely, one module will usually be used to satisfy multiple business requirements, so it would need to be marked with every associated requirement. For example, your Account class might be associated with 75% of your requirements. It's not very helpful to have a huge list of requirement IDs stuck at the top or bottom of your Account class IMHO.

I think using unit tests to exercise your requirements, with each test associated with a requirement ID, would be a better and easier-to-manage approach.
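A sketch of how that checklist could be generated, assuming the hypothetical RequirementAttribute shown earlier is applied to test methods and the master list of IDs has been pulled out of the requirements document separately:

using System;
using System.Linq;
using System.Reflection;

public static class RequirementReport
{
    // Returns the requirement IDs from the master list that no test method
    // claims to cover via a [Requirement] attribute.
    public static string[] FindUncoveredRequirements(Assembly testAssembly, string[] masterIds)
    {
        var coveredIds = testAssembly.GetTypes()
            .SelectMany(t => t.GetMethods())
            .SelectMany(m => m.GetCustomAttributes(typeof(RequirementAttribute), false)
                              .Cast<RequirementAttribute>())
            .Select(a => a.Id)
            .Distinct();

        return masterIds.Except(coveredIds).ToArray();
    }
}

Note that a clean report only shows that every requirement has at least one test claiming to cover it; it says nothing about whether those tests are adequate.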

Bytenik
+3  A: 

Simple answer: NO.

Complex answer: Maybe

If you have simple and straightforward requirements, how about using JUnit and annotations? This works on the lowest level of the code.

For anything more complicated than a single method, you should use a truth table.

The Source of the Disconnect

Let's back up a bit. One of the measures of a good requirement is that it is testable. This means that, in theory, it should be possible to link code to a given requirement, perhaps through a JUnit test. The challenge, assuming all your requirements are testable, is one of granularity. Requirements gathering has been moving towards use cases or user stories to capture and organize requirements, and most use cases describe the steps that must be taken, along with any assumptions involved, to achieve a specified outcome. It is usually pretty simple to turn a requirement into a test case: if the requirement is that the system produce a result of abc when fed xyz, the test is "Input xyz, successful result: abc".

The trouble is that when you go to write your JUnit test, the scenario may involve so many components that testing it requires so much setup and configuration that you spend more time fixing the test than writing application code. Unit tests are meant to exercise a very narrow range of interactions, and attempting to combine whole scenarios into them results in very fragile and time-consuming tests.

Also, consider how code is usually developed as a set of layers that handle similar responsibilities (the user interface or the database, for example): each layer is coded in a general sense and not just for a single use case. If we tried this with the ever-popular ATM example, we could use the use case "User logs into system". The variations alone create a large number of paths through the code: types of users, login validation errors, user interface options, and so forth. If we code the database layer, we don't code

getCredentialsForLogin(User u)

it would look more like

getCredentials(User u)

so that it could be used in multiple circumstances, like balance transfers or session timeouts.

Better yet, we would write

getAccountStatus(User u)

not

canUserLogin(User u)

This means that the code in our layers will probably satisfy a large number of requirements. You could try to list them all, but that creates an accounting-type headache. If you list only one requirement, you risk leaving others unhandled and are left hoping that your QA group is thorough.

Trying to create JUnit tests in this manner will likely push the framework beyond its design limits.

The Requirement-based Approach

You will be better off investing that time in a more detailed test plan which covers all the important variations, with a focus on examples. The most effective approach I've found is to set up the entire transaction and any assumptions and then state the expected results. I've found truth tables to be invaluable in quickly creating the complete list of valid scenarios.

You do it by listing the different variables (specified as boolean conditions) being tested together so you can see all the combinations. If our conditions were valid user, commlink up, valid PIN, and account unlocked, we would create a table with 16 columns, one for each unique combination of the four conditions.

Condition / #    | 01| 02| 03| 04| 05| 06| 07| 08| 09| 10| 11| 12| 13| 14| 15| 16|
==================================================================================
Valid User       | T   T   T   T   T   T   T   T   F   F   F   F   F   F   F   F |      
Commlink up      | T   T   T   T   F   F   F   F   T   T   T   T   F   F   F   F |
Valid PIN        | T   T   F   F   T   T   F   F   T   T   F   F   T   T   F   F |
Account unlocked | T   F   T   F   T   F   T   F   T   F   T   F   T   F   T   F |
================================================================================

Then you list out your 16 distinct scenarios:

01) Valid user, commlink up, valid PIN, account unlocked
02) Valid user, commlink up, valid PIN, account locked
03) Valid user, commlink up, invalid PIN, account unlocked
04) Valid user, commlink up, invalid PIN, account locked
05) Valid user, commlink down, valid PIN, account unlocked
06) Valid user, commlink down, valid PIN, account locked
07) Valid user, commlink down, invalid PIN, account unlocked
08) Valid user, commlink down, invalid PIN, account locked
09) Invalid user, commlink up, valid PIN, account unlocked
10) Invalid user, commlink up, valid PIN, account locked
11) Invalid user, commlink up, invalid PIN, account unlocked
12) Invalid user, commlink up, invalid PIN, account locked
13) Invalid user, commlink down, valid PIN, account unlocked
14) Invalid user, commlink down, valid PIN, account locked
15) Invalid user, commlink down, invalid PIN, account unlocked
16) Invalid user, commlink down, invalid PIN, account locked

Now you have a list of all the possible combinations of those conditions. If any of them are duplicated or impossible because of other business rules, you just document the reason and they can safely be skipped. For example, it might not matter whether the account is locked if the commlink is down. The order of the conditions can be rearranged to make these kinds of business-rule relationships easier to spot.
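If the list of conditions grows, the combinations can also be enumerated programmatically rather than by hand. A small sketch using the four conditions above (the "skip when the commlink is down" rule is just the example given, not a real business rule):

using System;

class TruthTable
{
    static void Main()
    {
        // Four boolean conditions give 2^4 = 16 unique combinations,
        // numbered to match the scenario list above (01 = everything true).
        for (int i = 0; i < 16; i++)
        {
            bool validUser  = (i & 8) == 0;
            bool commlinkUp = (i & 4) == 0;
            bool validPin   = (i & 2) == 0;
            bool unlocked   = (i & 1) == 0;

            // Example rule: the lock state may not matter when the commlink
            // is down, so those rows can be documented once and skipped.
            bool redundant = !commlinkUp && !unlocked;

            Console.WriteLine("{0:00}) {1}, {2}, {3}, {4}{5}",
                i + 1,
                validUser ? "Valid user" : "Invalid user",
                commlinkUp ? "commlink up" : "commlink down",
                validPin ? "valid PIN" : "invalid PIN",
                unlocked ? "account unlocked" : "account locked",
                redundant ? "  [covered by the matching unlocked row]" : "");
        }
    }
}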

Truth tables were the secret behind how we made sure a last-minute, over-the-holidays, huge business opportunity could be implemented in a matter of weeks even though it completely blew major assumptions in almost every module of the existing system.

Kelly French
A: 

I would recommend using automated integration tests to state your requirements.

Then you have your code, with coverage achieved by normal automated unit tests: JUnit for Java code, and NUnit or xUnit for .NET.

Above this layer, you have your integration tests (created with a framework like FitNesse).

You end up with code that can be plugged into normal continuous integration to check that no unit tests break, and integration tests that can run overnight to ensure that all functional use cases still pass.

Your integration tests then also become your documentation, if you put some thought into them and ensure that they are declarative and clear :-)

Joon
+1  A: 

As stated by Nissan Fan above, tracking requirements throughout your code is next to unmanageable. I've seen it tried, but ultimately, as team members come and go and requirements constantly change, you will never be sure whether your tags are still accurate. You will end up having spent a lot of time producing mostly unreliable reports, and will probably abandon the system at that point anyway.

In my experience, except maybe for the initial ramp-up period of your project, requirement tracking quickly becomes a matter of tracking the changes in your code, rather than statically linking everything that's currently present to some requirement.

A much better investment is therefore a good issue tracking system, integrated with your source control system. In the .Net world, TFS provides this out of the box. Also with Subversion (and probably a lot of other source control systems) there are numerous (both free and commercial) offerings that give you just that (Trac, BugZilla, Gemini, ...)

jeroenh
I agree; I can't imagine maintaining this information accurately. It's fundamentally uncheckable.
djna