Hi,

I have two related questions regarding Scrum.

Our company is trying to implement it, and we are certainly jumping through hoops.

Both questions are about "done means Done!"

1) It's really easy to define "Done" for tasks which:

  • have clear acceptance test criteria

  • are completely standalone

  • are tested at the end by testers

What should be done with tasks like:

  • architecture design

  • refactoring

  • utility class development

The main issue is that such work is almost entirely internal, and there is no way to check/test it from the outside.

For example, a feature implementation is essentially binary: either it's done (and passes all test cases) or it's not done (fails some test cases).

The best idea that comes to mind is to ask another developer to review the task. However, that still doesn't provide a clear way to determine whether it is completely done or not.

So, the question is: how do you define "Done" for such internal tasks?

2) Debug/bugfix task

I know that agile methodology doesn't recommend big tasks; at the very least, a big task should be divided into smaller tasks.

Let's say we have a fairly large problem: some big module redesign (to replace the outdated architecture with a new one). Sure, this task is divided into dozens of small tasks. However, I know that at the end we will have a quite long debug/fix session.

I know that's usually a problem of the waterfall model. However, I think it's hard to get rid of (especially for quite big changes).

Should I allocate a special task for debug/fix/system integration, etc.?

If I do so, this task is usually just huge compared to everything else, and it's hard to divide into smaller tasks.

I don't like this option because of that huge monolithic task.

There is another way: I can create smaller tasks (associated with bugs), put them in the backlog, prioritize them, and add them to iterations at the end of the activity, once I know what the bugs are.

I don't like this option either, because then the whole estimation becomes fake: we estimate the task and mark it as complete at some point, then open new tasks for the bugs with new estimates. So we always end up with actual time = estimated time, which is definitely not good.

How do you solve this problem?

Regards, Victor

+3  A: 

For the first part, "architecture design - refactoring - utility class development": these are never "done" because you do them as you go, in pieces.

You want to do just enough architecture to get the first release going. Then, for the next release, a little more architecture.

Refactoring is how you find utility classes (you don't set out to create utility classes -- you discover them during refactoring).

Refactoring is something you do in pieces, as needed, prior to a release. Or as part of a big piece of functionality. Or when you have trouble writing a test. Or when you have trouble getting a test to pass and need to "debug".

Small pieces of these things are done over and over again through the life of the project. They aren't really "release candidates", so they're just sprints (or parts of sprints) that get done in the process of getting to a release.

S.Lott
"Architecture, refactoring, utility classes" - these are never done because they are never explicit tasks; they are some of the practices/tools you employ to get actual tasks done. Good answer!
quamrana
OK. The product is released. What should be done in that case with a large redesign, when a big module needs to be redesigned?
A: 

"Should I allocate special task for debug/fix/system integrations and etc?"

Not the same way you did with a waterfall methodology where nothing really worked.

Remember, you're building and testing incrementally. Each sprint is tested and debugged separately.

When you get to a release candidate, you might want to do some additional testing on that release. Testing leads to bug discovery which leads to backlog. Usually this is high-priority backlog that needs to be fixed before the release.

Sometimes integration testing reveals bugs that become low-priority backlog that doesn't need to be fixed before the next release.

How big is that release test? Not very. You've already tested each sprint... There shouldn't be too many surprises.

S.Lott
OK. What happens if something doesn't make sense half-done (for example, a module redesign)? It just won't fit in one sprint.
See below -- it's hard to find something that can't be broken into manageable sprints.
S.Lott
A: 

I would argue that if an internal activity has a benefit to the application (which all backlog items within Scrum should have), it is done when that benefit is realized. For instance, "Design architecture" is too generic to identify the benefit of the activity. "Design architecture for user story A" identifies the scope of your activity. When you've created an architecture for story A, you're done with that task.

Refactoring should likewise be done in the context of achieving a user story. "Refactor Customer class to enable multiple phone numbers to support story B" is something that can be identified as done when the Customer class supports multiple phone numbers.
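
For illustration, a minimal sketch of what that "done" check could look like (the Customer class, field names, and phone numbers here are hypothetical, not from the original answer):

    class Customer:
        """After the refactoring: a customer can hold any number of
        phone numbers (previously a single `phone` field)."""

        def __init__(self, name, phones=None):
            self.name = name
            self.phones = list(phones) if phones else []

        def add_phone(self, number):
            self.phones.append(number)

    # The task's "done" criterion: Customer supports multiple phone numbers.
    c = Customer("Alice", phones=["555-0100"])
    c.add_phone("555-0199")
    assert c.phones == ["555-0100", "555-0199"]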

Mike Brown
Got it (regarding design for a user story). But regarding "When you've created an architecture for story A, you're done with that task": if there is no way to check this, you could just say "done" immediately.
"Done with the story" also means "enough architecture in place to be done with the story". Architecture supports the story implementation. No more.
S.Lott
A: 

Third question: "some big module redesign (to replace the outdated architecture with a new one). Sure, this task is divided into dozens of small tasks. However, I know that at the end we will have a quite long debug/fix session."

Each sprint creates something that can be released. Maybe it won't be, but it could be.

So, when you have a major redesign, you have to eat the elephant one small piece at a time. First, look at the highest-value piece -- the most important one, with the biggest return to the users -- that you can do, get done, and release.

But -- you say -- there is no such small piece; each piece requires massive redesign before anything can be released.

I disagree. I think you can create a conceptual architecture -- what it will be when you're done -- but not implement the entire thing at once. Instead you create temporary interfaces, bridges, glue, connectors that will get one sprint done.

Then you modify the temporary interfaces, bridges and glue so you can finish the next sprint.

Yes, you've added some code. But you've also created sprints that you can test and release -- sprints which are complete, any one of which can be a release candidate.
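
As a rough sketch of such a temporary bridge (all names here are hypothetical, not from the original answer), an adapter can expose the new architecture's interface on top of the old module so the two coexist for a sprint:

    class LegacyDb:
        """Stand-in for the old module that is being replaced."""
        def insert_row(self, table, values):
            print("legacy insert into %s: %r" % (table, values))

    class NewStorageInterface:
        """Target interface of the new architecture."""
        def save(self, record):
            raise NotImplementedError

    class LegacyDbAdapter(NewStorageInterface):
        """Temporary glue: translates new-style calls into the old module's
        API. Deleted once the real implementation lands in a later sprint."""
        def __init__(self, legacy_db):
            self._legacy_db = legacy_db

        def save(self, record):
            self._legacy_db.insert_row(table="records", values=record)

    # Code written against the new interface works today, backed by the old module.
    storage = LegacyDbAdapter(LegacyDb())
    storage.save({"id": 1, "name": "Alice"})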

S.Lott
A: 

Sounds like you're blurring the definition of user story and task. Simply:

  • User stories add value. They're created by a product owner.

  • Tasks are activities undertaken to create that value. They're created by the engineers.

You nailed key parts of the user story by saying they must have clear acceptance criteria, they're standalone, and they can be tested.

Architecture, design, refactoring, and utility class development are tasks. They're what's done to complete a user story. It's up to each development shop to set its own standards for these; at our company, at least one other developer must have looked at the code (pair programming, code reading, code review).

If you have user stories like "refactor class X" or "design feature Y", you're on the wrong track. It may be necessary to refactor X or design Y before you write code, but those would be tasks necessary to accomplish a user story such as "create new login widget".

trenton
A: 

We've run into similar issues with "behind-the-scenes" code. By "behind-the-scenes" I mean code that has no apparent or testable business value.

In those cases, we decided that the developers of that portion of the code were the true "users". By creating sample applications and documentation that developers could use and test, we had some "done" code.

Usually with scrum though, you would be looking for a piece of business functionality that used a piece of code to determine "done".
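
As a hypothetical sketch of that idea, documentation whose examples are executable (e.g. Python doctests) lets the developer-"users" both read and test the internal code, giving a concrete "done" check:

    def parse_version(text):
        """Split a dotted version string into a tuple of integers.

        Sample usage for developer-users; it doubles as an executable test:

        >>> parse_version("1.2.3")
        (1, 2, 3)
        """
        return tuple(int(part) for part in text.split("."))

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # the code is "done" when its documented examples pass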

Brad Bruce
A: 

For technical tasks such as refactoring, you can check whether the refactoring was really done: e.g., class X no longer has any f() method, or there is no more foobar() function.
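
A minimal sketch of such a check (the class and method names are hypothetical):

    class X:
        """After the refactoring: f() has been removed, g() replaced it."""
        def g(self):
            return 42

    def test_refactoring_done():
        # The refactoring task is "done" when the deprecated method is gone.
        assert not hasattr(X, "f"), "refactoring incomplete: X still has f()"

    test_refactoring_done()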

There should be trust towards the team and within the team as well. Why do you want to review whether the task is actually done? Did you encounter situations where someone claimed a task was done and it wasn't?


For your second question, you should first really strive to break it into several smaller stories (backlog items). For instance, if you are re-architecting the system, see whether the new and the old architectures can coexist for the time it takes to port all your components from one to the other.

If this is really not possible, then it should be done separately from the rest of the sprint backlog items, and not integrated before it is "done done". If the sprint ends before all of the item's tasks are complete, you have to estimate the remaining amount of work and replan it for the next iteration.

Here are twenty ways to split a story that could help you get several smaller backlog items, which really is the recommended and safest way.

philippe