I see that there are many systems out there for requirements-to-test-cases traceability, and I started to ask myself what the relationship is between these two artefacts. For example, why have the notion of test cases, as opposed to just calling them detailed requirements? Are test cases in fact a refinement of the requirements set? If test cases are not requirements and require more than the documented requirements (e.g., testing more error conditions), then surely the requirements set is incomplete? Are requirements just abstract test cases?

A: 

Test cases are not refined requirements. They can certainly require more than the documented requirements, because the requirements are the basis for your design, and the design in turn is what your test cases exercise. This is not a sign that the requirements set is incomplete.

Furthermore, requirements may also include non-functional requirements, for which you may not be able to provide test cases at all.

For me, traceability from requirements to test cases is a way to ensure that all functional requirements are accounted for and work the way they should. If you lose traceability somewhere along the way, you have a requirement for which you cannot tell where in the design it is satisfied. And if you can tell where in the design a requirement is accounted for, but no test case is traced to it, then you have simply missed testing that requirement.

So in summary, this kind of traceability ensures your test cases cover all of your functional requirements. Nevertheless, you can have more test cases, as well as more (non-functional) requirements.
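
To make that concrete, here is a minimal sketch of the kind of check such traceability supports: flag every requirement with no test case traced to it. The requirement and test IDs are invented for illustration.

    # Minimal traceability check: which requirements have no traced test case?
    # All IDs below are invented for illustration.
    requirements = {"REQ-1", "REQ-2", "REQ-3"}

    # Each test case records which requirement(s) it verifies.
    trace = {
        "TC-01": {"REQ-1"},
        "TC-02": {"REQ-1", "REQ-3"},
    }

    covered = set().union(*trace.values())
    print("Untested requirements:", sorted(requirements - covered))
    # -> Untested requirements: ['REQ-2']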

Frank
A: 

As I understand it, requirements are somewhat more general than test cases.

A requirement could be, for example: "The method should not accept numbers outside the range 18-64." Test cases could then be something like:

  1. provide 17 as input
  2. provide 65 as input
  3. provide -1 as input
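
As a pytest-style sketch (accept_age is an invented name for the method under test), those three test cases might look like this:

    import pytest

    def accept_age(age):
        """Invented method under test: rejects numbers outside 18-64."""
        if not 18 <= age <= 64:
            raise ValueError(f"age {age} outside range 18-64")
        return age

    # The three test cases listed above, parametrized.
    @pytest.mark.parametrize("bad_age", [17, 65, -1])
    def test_rejects_out_of_range(bad_age):
        with pytest.raises(ValueError):
            accept_age(bad_age)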

But largely, this is a matter of common understanding within a dev team...

Thomas

Thomas Weller
+1  A: 

In TDD, test cases are requirements.

But some requirements are not testable or would require huge test facilities.
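
As a minimal illustration of the first point, a TDD test is written before the implementation and reads like an executable requirement. The function and the discount rule below are invented for the example.

    import unittest

    # In TDD the test below comes first; it states the (invented) requirement:
    # "orders over 100 get a 10% discount."
    def apply_discount(total):
        # Minimal implementation, written afterwards to make the tests pass.
        return total * 0.9 if total > 100 else total

    class DiscountRequirement(unittest.TestCase):
        def test_ten_percent_discount_over_100(self):
            self.assertEqual(apply_discount(200), 180)

        def test_no_discount_at_or_below_100(self):
            self.assertEqual(apply_discount(100), 100)

    if __name__ == "__main__":
        unittest.main()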

mouviciel
For my own knowledge, how can you have a requirement that is not testable? How would you know that the requirement has been met? Can you give an example of such a requirement?
logic_cube
An example: "The software shall be written in Ada"; only a code review can verify that. Another: "This function shall run in less than 20 ms"; if for any reason the actual hardware is not available to the project team, the only way to verify it is to compute the WCET (worst-case execution time) from the assembly code.
mouviciel
In TDD, tests are written by developers just before they implement the code. They are not what we would traditionally consider requirements. The developer still needs some other artifact (user story, acceptance test, use case) before starting to write the TDD tests.
Mark Irvine
A: 

Actually, it depends on the development process used by the team or organization.

  • In Agile processes (Scrum, XP, and variations) there are usually no formal requirements documents; acceptance tests are used to verify that a user story is implemented correctly. So the user story is a kind of requirement/specification.

  • In Test Driven Development tests are the requirements.

  • In Waterfall you usually create a document that lists all the requirements for your software and approve it with the stakeholders. Then you develop according to these requirements and test the software against them. As mentioned in Frank's answer, with this process you need traceability from each requirement to its test cases, since CMMI mandates that you test all your requirements.

ZloiAdun
+1  A: 

In my case, a requirement is traced to a set of specifications intended to fulfill it. Each specification is in turn traced to a test case meant to verify it.

Thus, you can trace test cases back to requirements.
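
Roughly sketched (all IDs invented), composing the two mappings is what lets you walk the chain backwards from a test case to its requirement:

    # Two-level traceability: requirement -> specs, spec -> test cases.
    # All IDs are invented for illustration.
    req_to_specs = {"REQ-1": ["SPEC-1a", "SPEC-1b"]}
    spec_to_tests = {"SPEC-1a": ["TC-01"], "SPEC-1b": ["TC-02", "TC-03"]}

    # Invert the chain to trace each test case back to its requirement.
    test_to_req = {
        tc: req
        for req, specs in req_to_specs.items()
        for spec in specs
        for tc in spec_to_tests.get(spec, [])
    }

    print(test_to_req["TC-03"])  # -> REQ-1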

EKI
A: 

I see that there are many systems out there for requirements-to-test-cases traceability, and I started to ask myself what the relationship is between these two artefacts. For example, why have the notion of test cases, as opposed to just calling them detailed requirements? Are test cases in fact a refinement of the requirements set?

I think the distinction just signifies when they are produced, and for what purpose. Requirements are produced very early, before we know many implementation-specific details. We try to keep them implementation-neutral, so they tend to be more abstract.

The purpose of test scripts is somewhat different. Requirements tell developers what the system should do, not how to do it. Test cases, however (as they are often written), specify exactly how to do something, and they often reference actual implementation details.

If test cases are not requirements and require more than the documented requirements (e.g., testing more error conditions), then surely the requirements set is incomplete?

Yes, the requirements set is incomplete. It always is, because you can never completely document all the expectations of all users and stakeholders, no matter how long you work at it.

But then the test cases are also incomplete. Complete testing is impossible; any set of tests is a sample of all potential tests. However, testing typically happens at a later stage, when we know much more about the requirements, so the tests can be more specific, more detailed, and more complete, though still not fully complete.

Take a look at: http://www.ibm.com/developerworks/rational/library/04/r-3217/

In this article, the author explains how to get from use cases to test cases. The point the author makes is that while the use case contains all the flows and sub-flows, the test cases pin down specific data and a specific flow through the system.
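
Sketching that idea (the login scenario and all values are invented), each test case fixes one flow of the use case with concrete data:

    import pytest

    # Invented system under test: one use case ("log in"), several flows.
    def login(username, password):
        users = {"alice": "s3cret"}
        if username not in users:
            return "unknown user"
        if users[username] != password:
            return "wrong password"
        return "ok"

    # Each test case is one specific flow with specific data.
    @pytest.mark.parametrize("user, pw, expected", [
        ("alice", "s3cret", "ok"),             # basic flow
        ("alice", "guess", "wrong password"),  # alternate flow
        ("bob", "s3cret", "unknown user"),     # alternate flow
    ])
    def test_login_flows(user, pw, expected):
        assert login(user, pw) == expected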

Are requirements just abstract test cases?

I would say yes, they can be viewed that way. Some people go so far as not to write test cases at all, and just use the requirements as a 'checklist' on which to base their testing.

Traceability from test cases to requirements is a very nice idea and a very popular approach; tools implement the feature because it sells. But there are limitations and traps. There is often a false sense of completeness when the tool happily reports 100% coverage because you happen to have one test for every requirement. That doesn't address the fact that some requirements need far more than one test, and it says nothing about the content of the tests or whether they actually cover what they should.
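
A sketch of that trap (IDs invented): the naive metric reports 100% as soon as every requirement has at least one linked test, regardless of whether one test is anywhere near enough.

    # Every requirement has exactly one traced test (IDs invented),
    # so a naive coverage metric reports 100%.
    trace = {"REQ-1": ["TC-01"], "REQ-2": ["TC-02"], "REQ-3": ["TC-03"]}

    covered = sum(1 for tests in trace.values() if tests)
    print(f"coverage: {100 * covered / len(trace):.0f}%")  # -> coverage: 100%

    # But REQ-2 might need dozens of tests (boundaries, error paths, load);
    # the metric says nothing about the content or sufficiency of TC-02.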

If you are interested in Requirements->Test traceability, be aware of the limitations of the approach, and use it carefully in combination with other techniques so that your testing is more comprehensive.

Mark Irvine