Hello,

So I've done some unit testing and have experience writing tests, but I haven't fully embraced TDD as a design tool.

My current project is to rework an existing system that generates serial numbers as part of the company's assembly process. I understand the current process and workflow from studying the existing system, and I have a list of new requirements and how they will modify that workflow.

I feel like I'm ready to start writing the program, and I've decided to force myself to finally do TDD from start to finish.

But now I have no idea where to start. (I also wonder if I'm cheating the TDD process by already having an idea of the program flow for the user.)

The user flow is strictly sequential: just a series of steps. As an example, the first step would be:

  • user submits a manufacturing order number and receives a list of serializable part numbers from that order's bill of materials

The next step is started when the user selects one of the part numbers.

So I was thinking I can use this first step as a starting point. I know I want a piece of code that takes a manufacturing order number and returns a list of part numbers.

// This isn't what I'd want my code to end up looking like,
// but it is the simplest statement of what I want.
IList<string> partNumbers = GetPartNumbersForMfgOrder(mfgOrder);

Reading Kent Beck's Test-Driven Development: By Example, he talks about picking small tests. This seems like a pretty big black box: it's going to require a mfg order repository, I'll have to crawl a product structure tree to find all applicable part numbers for this mfg order, and I haven't defined my domain model in code at all.

So on one hand that seems like a crappy start: a very general, high-level function. On the other hand, if I start at a lower level I'm really just guessing at what I might need, and that seems anti-TDD.
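
To make that concrete, here's roughly the first test I'm picturing, using NUnit with a hand-rolled fake standing in for the real repository. Every type and member name is hypothetical since I haven't defined the domain model, and I've included a trivial implementation only so the sketch hangs together (in real TDD it would come after the red bar):

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Hypothetical domain types -- just enough to state the behavior.
public class Part
{
    public string Number;
    public bool IsSerializable;
}

// The test forces this seam into existence: something has to supply
// the flattened bill of materials for a mfg order.
public interface IMfgOrderRepository
{
    IEnumerable<Part> GetBillOfMaterials(string mfgOrderNumber);
}

public class SerialNumberService
{
    private readonly IMfgOrderRepository _orders;

    public SerialNumberService(IMfgOrderRepository orders)
    {
        _orders = orders;
    }

    // Keeps only the parts that can bear a serial number.
    public IList<string> GetPartNumbersForMfgOrder(string mfgOrderNumber)
    {
        return _orders.GetBillOfMaterials(mfgOrderNumber)
                      .Where(p => p.IsSerializable)
                      .Select(p => p.Number)
                      .ToList();
    }
}

[TestFixture]
public class GetPartNumbersForMfgOrderTests
{
    // Hand-rolled fake: no database, no tree-crawling, just canned data.
    private class FakeRepository : IMfgOrderRepository
    {
        public IEnumerable<Part> GetBillOfMaterials(string mfgOrderNumber)
        {
            yield return new Part { Number = "PN-7", IsSerializable = true };
            yield return new Part { Number = "PN-8", IsSerializable = false };
            yield return new Part { Number = "PN-9", IsSerializable = true };
        }
    }

    [Test]
    public void ReturnsOnlySerializablePartNumbersFromTheBom()
    {
        var service = new SerialNumberService(new FakeRepository());

        IList<string> partNumbers = service.GetPartNumbersForMfgOrder("MO-1001");

        CollectionAssert.AreEquivalent(new[] { "PN-7", "PN-9" }, partNumbers);
    }
}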


As a side note... is this how you'd use stories?

As an assembler, I want to get a list of part numbers on a mfg order, so that I can pick which one to serialize.

To be truthful, an assembler would never say that. All an assembler wants is to finish the operation on the mfg order:

As an assembler, I want to mark parts with a serial number, so that I can finish the operation on the mfg order.

+1  A: 

I think you have a good start, even if you don't quite see it that way. A test that is supposed to spawn more tests makes total sense to me: if you think about it, do you know yet what a manufacturing order number or a part number is? You may well have to build those, which leads to other tests, but eventually you'll get down to the itty-bitty tests, I believe.

Here's a story that may require a bit of breaking down:

  • As a User I want to submit a manufacturing order number and receive a list of serializable part numbers from that order's bill of materials

I think the key is to break things down over and over again into tiny pieces that make it easy to build the whole thing. That "divide and conquer" technique is handy at times. ;)

JB King
You forgot the third part of the story, the benefit. You also have implementation details in there that don't bring any business benefit (serializable). I would say a better story would be something like "As a User I want to submit a manufacturing order number and receive a list of part numbers of that order so that I can send the list to the inventory system".
Eduardo Scoz
Serializable in that context is not implementation, it is a domain term that indicates which parts can bear a serial number, so it is important (as far as I understand the requirements).
Denis Troller
If that's the case, then you're right. Domain expertise is everything.
Eduardo Scoz
Yes, I'm sorry, it has the connotation Denis mentions.
eyston
+1  A: 

Well well, you've hit the exact same wall I did when I tried TDD for the first time :)

Since then, I've given up on it, simply because it makes refactoring too expensive, and I tend to refactor a lot during the initial stage of development.

With those grumpy words out of the way: I find that one of the most overlooked and most important aspects of TDD is that it forces you to define your class interfaces before actually implementing them. That's a very good thing when you need to assemble all your parts into one big product (well, into sub-products ;) ). What you need to do before writing your first tests is to have your domain model, deployment model, and preferably a good chunk of your class diagrams ready, simply because you need to identify your invariants, min and max values, etc., before you can test for them. You should be able to identify these at a unit-testing level from your design.
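
To illustrate that interface-first effect: before any implementation exists, the design work has already produced something like this (the name is invented for the example):

using System.Collections.Generic;

// The class diagram and the tests pin down the contract first;
// the implementation comes later and has to fit it.
public interface IPartNumberLookup
{
    IList<string> GetPartNumbersForMfgOrder(string mfgOrderNumber);
}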

So, in my experience (not in the experience of some author who enjoys mapping real-world analogies to OO :P ), TDD should go like this:

  1. Create your deployment diagram, from the requirement specification (ofc, nothing is set in stone - ever)
  2. Pick a user story to implement
  3. Create or modify your domain model to include this story
  4. Create or modify your class-diagram to include this story (including various design classes)
  5. Identify test-vectors.
  6. Create the tests based on the interface you made in step 4
  7. Test the tests(!). This is a very important step; there's a sketch of steps 5-7 after this list.
  8. Implement the classes
  9. Test the classes
  10. Go have a beer with your co-workers :)
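
As a rough sketch of steps 5-7, with all names invented: the test vectors from step 5 become [TestCase] rows, step 6 writes them against the step-4 interface, and step 7 ("test the tests") means running them against a not-yet-implemented stub and checking that every one goes red:

using System;
using NUnit.Framework;

// Step 4 produced this interface (hypothetical example).
public interface ISerialNumberValidator
{
    bool IsValid(string serialNumber);
}

// Step 8 hasn't happened yet; the stub throws, so step 7 can verify
// that every test actually fails before the real code exists.
public class SerialNumberValidator : ISerialNumberValidator
{
    public bool IsValid(string serialNumber)
    {
        throw new NotImplementedException();
    }
}

[TestFixture]
public class SerialNumberValidatorTests
{
    // Step 5: test vectors derived from the identified invariants.
    [TestCase("SN-00001", true)]
    [TestCase("", false)]                // below minimum length
    [TestCase("SN-999999999999", false)] // above maximum length
    public void ValidatesAgainstInvariants(string input, bool expected)
    {
        Assert.AreEqual(expected, new SerialNumberValidator().IsValid(input));
    }
}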
cwap
You're not doing Test DRIVEN development if you're doing this. You're writing tests, but not deriving your design from the test cases.
Eduardo Scoz
Yeah, well, that's in the eye of the beholder :) The way I see it, you can't derive 100% of your design from test cases. At least not efficiently, imho. Tests are for implementation details, not for design. Again, my personal point of view.
cwap
+4  A: 

This is perfectly okay as a starting test. With this you define expected behavior: how it should work. Now, if you feel you've taken a much bigger bite than you'd have liked, you can temporarily ignore this test and write a more granular test that takes you part of the way, or at least mid-way, there. Then write other tests that take you towards the goal of making the first big test pass. Red-Green-Refactor at each step.
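
In NUnit, for instance, parking the big test while you chip away at it costs one attribute (the test names below are invented):

// The big bite, shelved until the granular tests carry you there.
[Test, Ignore("Too big a first bite -- re-enable once the granular tests pass")]
public void ReturnsAllSerializablePartNumbersForMfgOrder() { /* ... */ }

// A more granular step towards making the big test pass.
[Test]
public void BomWalkerFindsSerializablePartsInASingleLevelTree() { /* ... */ }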

Small tests, I think, mean that you should not be testing a whole lot of stuff in one test, e.g. "are components A, B and C in state1, state2 and state3 after I've called Method1(), Method2() and Method3() with these parameters on D?". Each test should test just one thing. You can search SO for the qualities of good tests. But I'd consider your test to be a small test because it is short and focused on one task: 'getting part numbers from a manufacturing order'.
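
In other words, prefer several single-purpose tests, each named after the one fact it checks, over one test that asserts everything (hypothetical names again):

[Test]
public void GetPartNumbers_UnknownMfgOrder_ReturnsEmptyList() { /* ... */ }

[Test]
public void GetPartNumbers_ExcludesNonSerializableParts() { /* ... */ }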

Update: As something to try (AFAIR from Beck's book), you may wanna sit down and come up with a list of one-line tests for the SUT on a piece of paper. Now you can choose the easiest ones (tests that you're confident you'll be able to get done) in order to build some confidence, or you could attempt one that you're 80% confident about but that has some gray areas (my choice too), because it'll help you learn something about the SUT along the way. Keep the ones where you've no idea how to proceed for the end; hopefully it'll be clearer by the time the easier ones are done. Strike them off one by one as and when they turn green.
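
That paper list can just as well live as comments at the top of the test fixture, struck off as each one goes green, e.g.:

// Test list (one line per test, strike off when green):
// [x] known mfg order returns its serializable part numbers
// [ ] unknown mfg order -> empty list (or an error? decide)
// [ ] BOM with nested subassemblies        <-- 80% confident, gray areas
// [ ] mfg order with no serializable parts <-- no idea yet, save for last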

Gishu
So this reassured me, and I sat down to write the test. Even thinking about how to write it is a design experience. I'm not sure I'm happy with my test, but it is a starting point and it's causing me to think about a few things.
eyston
This is precisely the most important contribution of TDD: it drives your design from the client's perspective, forcing you to only add functionality that is needed by at least one client. Initially you may choose to fake collaborators, but that still helps you nail down the simplest interface between them, which is a big win. No nice-to-have or might-be-required-in-the-future features. Also, continuous refactoring helps keep your design simple, easy to understand, and therefore maintainable.
Gishu
I went back and read the first 40-50 pages of Beck's book, and some of it sunk in a bit more since my last read (a few months ago). The two things that are helping are a) you can sin to make a test pass, and b) you immediately refactor once green. And yes, focusing and thinking in terms of the SUT is helping me, especially thinking about all the contexts the SUT could exist in; it is leading to stronger, less brittle tests.
eyston
+6  A: 

Here's how I would start. Let's suppose you have absolutely no code for this application.

  1. Define the user story and the business value that it brings: "As a User I want to submit a manufacturing order number and receive a list of part numbers of that order so that I can send the list to the inventory system"
  2. Start with the UI. Create a very simple page (let's suppose it's a web app) with three fields: a label, a list, and a button. That's good enough, isn't it? The user could copy the list and send it to the inventory system.
  3. Use a pattern to base your design on, like MVC.
  4. Define a test for your controller method that gets called from the UI. You're testing here that the controller works, not that the data is correct: Assert.AreEqual(3, controller.RetrieveParts(mfgOrder).Count)
  5. Write a simple implementation of the controller to make sure that something gets returned: return new List<MfgOrder> { new MfgOrder(), new MfgOrder(), new MfgOrder() }; You'll also need to implement classes such as MfgOrder.
  6. Now your UI is working! Working incorrectly, but working. So let's expect the controller to get the data from a service or DAO. Create a mock DAO object in the test case, and add an expectation that the method partsDao.GetPartsInMfgOrder() is called (see the sketch below this list).
  7. Create the DAO class with the method. Call the method from the controller. Your controller is now done.
  8. Create a separate test to test the DAO, finally making sure it returns the proper data from the DB.
  9. Keep iterating until you get it all done. After a little while, you'll get used to it.
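
Here's a minimal sketch of steps 4-7 with a hand-rolled mock DAO; the names match the steps above, but the details (constructor injection, method signatures) are illustrative assumptions, not prescriptions:

using System.Collections.Generic;
using NUnit.Framework;

public class MfgOrder { }

// The DAO seam from step 6 (hypothetical signature).
public interface IPartsDao
{
    IList<MfgOrder> GetPartsInMfgOrder(string mfgOrderNumber);
}

// Step 7: the controller just delegates to the DAO.
public class PartsController
{
    private readonly IPartsDao _partsDao;

    public PartsController(IPartsDao partsDao)
    {
        _partsDao = partsDao;
    }

    public IList<MfgOrder> RetrieveParts(string mfgOrderNumber)
    {
        return _partsDao.GetPartsInMfgOrder(mfgOrderNumber);
    }
}

[TestFixture]
public class PartsControllerTests
{
    // Hand-rolled mock: records the expectation that the DAO gets called.
    private class MockPartsDao : IPartsDao
    {
        public bool WasCalled;

        public IList<MfgOrder> GetPartsInMfgOrder(string mfgOrderNumber)
        {
            WasCalled = true;
            return new List<MfgOrder> { new MfgOrder() };
        }
    }

    [Test]
    public void RetrievePartsGetsItsDataFromTheDao()
    {
        var mockDao = new MockPartsDao();
        var controller = new PartsController(mockDao);

        controller.RetrieveParts("MO-1001");

        Assert.IsTrue(mockDao.WasCalled, "the controller should ask the DAO for the parts");
    }
}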

The main point here is separating the application into very small parts and testing those small parts individually.

Eduardo Scoz