I've just finished the implementation of my software system and now I have to document whether it has satisfied its requirements. What sort of information should I include and how should I lay it out?

My initial functional and non-functional requirements were in a two-column table and looked something like this:

  • FN-01 The system should allow users to send private messages to each other.
  • NFN-03 The setup/configuration form should contain sensible default values for most fields.
A: 

List the requirements one by one: the requirement number with the requirement text, followed by text and/or screenshots demonstrating that the system satisfies it.

Put the requirement number on the left in bold, with the requirement text tabbed in and italicized. Align the proof text/screenshots with the requirement text, leaving the left column clear for just the requirement numbers, e.g.:

REQ-1      italicized requirement text

           text discussing how the software has
           fulfilled the requirements, possibly
           with a picture:

           -----------------------
           |                     |
           |                     |
           |                     |
           |                     |
           |                     |
           -----------------------

REQ-2      italicized requirement text

           etc...

You should group into chapters or sections based upon logical program areas, and start each section or chapter with a blurb about how the whole program area meets the requirements (keep it general; the per-requirement entries carry the detail).

Luke Schafer
A: 

I would keep it simple and add the following columns:

  • Requirement satisfied - a drop-down list containing Yes, No, Open
  • Comment - any comment regarding the delivery, such as 'need to define message size', 'Does not fully satisfy in the layout of the message, but accepted by the client', etc.
  • Date completed - when the change was delivered
  • Date satisfied - when the change was accepted

Since you're using requirement IDs, I'm assuming they point back to the docs containing more detailed info: layouts, screenshots, etc.
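The columns above map naturally onto a simple record type. A minimal sketch in Python; the field names and status values are taken from the list above, but the class itself is illustrative, not part of any standard:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RequirementRecord:
    """One row of the requirements tracking table (illustrative)."""
    req_id: str                            # e.g. "FN-01", pointing back to the detailed docs
    satisfied: str                         # one of "Yes", "No", "Open"
    comment: str = ""                      # e.g. "need to define message size"
    date_completed: Optional[date] = None  # when the change was delivered
    date_satisfied: Optional[date] = None  # when the change was accepted

rec = RequirementRecord("FN-01", "Open", "need to define message size")
print(rec.req_id, rec.satisfied)
```

A flat structure like this also exports cleanly to a spreadsheet, which is where this kind of tracking table usually ends up living.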

meade
+1  A: 

I would use the requirement numbering scheme already in place rather than creating a new one. I would document the following items for each requirement:

  1. Requirement Status: This can be phrased in many different ways, but you are trying to communicate whether the requirement was completed as listed, completed in a modified variant of what was listed, or simply could not be completed at all.
  2. Requirement Comment: Describes the previously listed requirement status. This is the "why" that will explain those items that were not able to fully meet the requirements.
  3. Date completed: This is mostly for future product planning but also serves as a historical reference.

A couple of other points to remember:

  1. Requirements may be reviewed by the customer, especially if the customer was the source of the requirements. Hence, this document needs to be as accurate and as informative as possible. (It's also another reason you don't change the requirement numbering scheme unless you have to.)
  2. Your testing department (assuming you have one) should be using these documents for their test planning, and they need to know which requirements were met, which ones weren't, and most importantly which ones changed and how.

Lastly, unless you're putting on a dog and pony show for someone, you shouldn't need screenshots as part of requirement documentation. You also shouldn't need to provide "proof" of completion; the testing department will do that for you.

A: 

We would normally have a test plan in place in which each item can be ticked off if satisfactory. The plan would be based on the original requirements (functional or non-functional), for example:

Requirement: The user's account should be locked after three attempts to log in with an incorrect password.

Test: Attempt to log in more than three times with an incorrect password. Is the user account now locked?

We would do this for each requirement and re-run the plans for each Release Candidate. Some of the tests are automated, but we do have the luxury of a test team to perform manual testing as well!
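A plan item like the lockout example above can often be automated directly. A minimal sketch in Python, assuming a toy `Account` class; the class and its methods are illustrative, not from any real system:

```python
class Account:
    """Toy account model illustrating the lockout rule."""
    MAX_ATTEMPTS = 3

    def __init__(self, password):
        self._password = password
        self._failures = 0
        self.locked = False

    def login(self, password):
        if self.locked:
            return False
        if password == self._password:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self.locked = True
        return False

# The test mirrors the plan item: log in with an incorrect password
# three times, then check that the account is locked.
account = Account("s3cret")
for _ in range(3):
    account.login("wrong")
assert account.locked
assert not account.login("s3cret")  # even the correct password is refused now
print("lockout requirement satisfied")
```

The printed "satisfied"/failed-assertion outcome is exactly the tick-box entry in the test plan, which makes re-running the plan per Release Candidate cheap.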

Based on the results of running these test plans and User Acceptance Testing we would sign-off the RC as correct and fit for release.

Note that sometimes we will sign-off for release even if some items in the test plan do not pass, it all depends on the nature of the items!

Simon
A: 

The formal way to validate requirements is with testing - usually acceptance testing.

The idea is: for every requirement, there should be one or more tests that validate the requirement. In a formal development situation, the customer would sign off on the acceptance tests at the same time they sign off on the requirements.

Then, when the product is complete, you present the results of the acceptance tests and the customer reviews them before accepting the final product.

If you have requirements that cannot be tested, then they probably are badly written.

e.g. don't say "loading files shall be fast"; say "a file of size X shall be loaded in not more than Y milliseconds on hardware Z", or something like that.
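A quantified requirement like that translates directly into an acceptance test. A sketch in Python; the values for X and Y and the `load_file` function are placeholders standing in for the figures and code in the real, signed-off requirement:

```python
import os
import tempfile
import time

SIZE_X_BYTES = 1_000_000   # X: 1 MB test file (assumed value)
LIMIT_Y_MS = 500           # Y: 500 ms budget (assumed value)

def load_file(path):
    """Stand-in for the system under test's file-loading code."""
    with open(path, "rb") as f:
        return f.read()

# Create a test file of exactly size X.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(SIZE_X_BYTES))
    path = f.name

# Time the load and compare against the Y-millisecond budget.
start = time.perf_counter()
data = load_file(path)
elapsed_ms = (time.perf_counter() - start) * 1000
os.remove(path)

assert len(data) == SIZE_X_BYTES
assert elapsed_ms <= LIMIT_Y_MS, f"took {elapsed_ms:.1f} ms, limit {LIMIT_Y_MS} ms"
```

The "on hardware Z" part of the requirement matters too: the test only validates the requirement when run on the agreed reference hardware.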

Michael J
+1  A: 

Hi,

there are some techniques for converting your requirements into test cases, but they depend on how your requirements are documented. If you have already done a scenario-based requirements analysis, it's easy: create a sequence diagram for every path of each scenario, write/run a test for it, and you're done. Documentation created this way should also impress your lecturer.

If you don't have scenarios, you should create some out of your use cases.

The downside here is that it is very work-intensive and should only be used in cases that justify the effort (a thesis, for example ;))
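To sketch the scenario-to-test idea: each path through a scenario becomes one test. A minimal example in Python, using a made-up "send a private message" scenario; the scenario, the `Mailbox` class and the function names are all illustrative:

```python
# Scenario (illustrative): "User sends a private message"
# Main path: sender composes a message -> system delivers it ->
# recipient sees it in their inbox.

class Mailbox:
    def __init__(self):
        self.inbox = []

def send_private_message(sender, recipient_box, text):
    """Stand-in for the delivery step in the sequence diagram."""
    recipient_box.inbox.append((sender, text))

# One test per scenario path; this covers the main path.
bob_box = Mailbox()
send_private_message("alice", bob_box, "hello")
assert bob_box.inbox == [("alice", "hello")]
print("main path of 'send private message' scenario passes")
```

Alternative paths (empty message, blocked recipient, etc.) each get their own sequence diagram and their own test in the same style.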

Butze
