I'm coming from a big-design background and just learning TDD/BDD, so bear with me if this is a simple question. It seems that many client decisions aren't actually recorded anywhere - they're just documented in the code and tests. So my question is: what happens when the client changes some of these undocumented decisions? How do you know whether something is a bug or a change to the original functionality?

+2  A: 

I don't understand the premise of "they're just documented in the ... tests." If you mean acceptance-level tests, then those should be things that the client can read (and ideally write, depending on the framework you use) and agree define the functionality.
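
To make that concrete, here is a minimal sketch of what a client-readable acceptance test can look like, written in plain Python with unittest; the bulk-discount rule, the order_total function, and all the numbers are invented for illustration. Frameworks such as Cucumber or FitNesse go further and let the client read (and sometimes write) the specification in near-plain language.

    import unittest

    def order_total(unit_price, quantity):
        """Hypothetical client rule: orders of 10 or more items get a 5% discount."""
        total = unit_price * quantity
        if quantity >= 10:
            total *= 0.95
        return total

    class BulkDiscountAcceptanceTest(unittest.TestCase):
        """Each test name states an agreed behaviour, so the list of test names
        reads like a requirements document the client can review."""

        def test_orders_of_ten_or_more_items_get_a_five_percent_discount(self):
            self.assertAlmostEqual(order_total(2.00, 10), 19.00)

        def test_smaller_orders_pay_full_price(self):
            self.assertAlmostEqual(order_total(2.00, 9), 18.00)

    if __name__ == "__main__":
        unittest.main()

If the client later decides the threshold should be twelve items instead of ten, the failing test is exactly where that conversation starts.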

That being said, the agile methodologies are built on collaboration and mutual trust, so documenting decisions just so you can later prove whether the client changed their mind or whether it was a bug is decidedly not a goal of the methodology. The attitude should be: it doesn't matter, because either way we need to schedule time to change it. Obviously that raises questions about how you arrange billing when the software is being developed by a consulting firm or under a similar arrangement, but the foundation is that collaborative attitude. If you don't have it, then time spent documenting the design and getting sign-off may be a necessary evil.

Yishai
+1  A: 

TDD/BDD are usually part of a larger process: in most agile approaches you also have user stories (stored on index cards or in some web tool), which hold the original requirements. If you're working with external clients, you will still need to formalize the original project request and any approved change requests; otherwise you will end up in a finger-pointing blame game over who asked for which addition, or who decided to remove some piece of functionality. Agile works great for internal teams, or when you and the client have an exceptionally good relationship.

The nice thing about an agile, iterative approach is that (if you follow, say, a two-week iteration) the client is effectively signing off every two weeks on what is being delivered. If you have to rely on test cases or code to document user needs when the user asks why something was not delivered, or not delivered as expected, you'll find yourself backed into a corner, both legally and reputation-wise.

meade
+1  A: 

Is there really a problem with documenting the requirements in the code and tests?

From my limited understanding, in a TDD environment the tests and code, along with any in-code comments, are the central source of documentation. There might be some versioned documents (docs, spreadsheets, non-code text files), but each test is written to satisfy some requirement, it is documented which requirement the test validates, and then the code is written to pass the test. If anything "funky" happens in the code, that's further explained by an in-code and/or in-test comment.

However, I do feel that there must be a mapping between tests and/or code and the requirement. If you aren't maintaining that mapping, then there's probably a problem, or at least a greater potential for one.
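
One lightweight way to maintain that mapping is to tag each test with the ID of the requirement or user story it validates, then generate a traceability list from the tags. The sketch below is in Python; the requirement ID "REQ-42", the requirement decorator, and the report helper are all hypothetical, not part of any standard framework.

    import unittest

    def requirement(req_id):
        """Attach a requirement ID to a test method for traceability."""
        def tag(test_func):
            test_func.requirement_id = req_id
            return test_func
        return tag

    class LoginTests(unittest.TestCase):

        @requirement("REQ-42")  # hypothetical story: "users can log in by email"
        def test_login_with_registered_email_succeeds(self):
            self.assertTrue(True)  # placeholder assertion for this sketch

    def traceability_report(test_case_class):
        """Print which requirement each tagged test validates."""
        for name in dir(test_case_class):
            method = getattr(test_case_class, name)
            req_id = getattr(method, "requirement_id", None)
            if req_id:
                print(f"{req_id} -> {test_case_class.__name__}.{name}")

    if __name__ == "__main__":
        # prints: REQ-42 -> LoginTests.test_login_with_registered_email_succeeds
        traceability_report(LoginTests)
        unittest.main()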

Thomas Owens
+1: "I do feel that there must be a mapping between tests and/or code and the requirement"
Alex Baranosky