Need some advice on working out the team velocity for a sprint.

Our team normally consists of about 4 developers and 2 testers. The scrum master insists that every team member should contribute equally to the velocity calculation, i.e. we should not distinguish between developers and testers when working out how much we can do in a sprint. This is correct according to Scrum, but here's the problem.

Despite suggestions to the contrary, testers never help with non-test tasks and developers never help with non-dev tasks, so we are not cross-functional team members at all. Also, despite various suggestions, testers normally spend the first few days of each sprint waiting for something to test.

The end result is that we typically take on far more dev work than we actually have capacity for in the sprint. For example, the developers might contribute 20 days to the velocity calculation and the testers 10 days; add up the tasks after sprint planning, though, and the dev tasks come to 25 days and the test tasks to 5 days.
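To make the imbalance concrete, here is a minimal sketch (plain Python; the dictionary names are made up and the numbers are just the hypothetical figures from above) that sums capacity and planned work per discipline instead of as one pool:

    # Capacity and planned work per discipline, in person-days
    # (hypothetical numbers from the example above).
    capacity = {"dev": 20, "test": 10}
    planned = {"dev": 25, "test": 5}

    for role in capacity:
        surplus = capacity[role] - planned[role]
        status = "OK" if surplus >= 0 else "OVERCOMMITTED"
        print(f"{role}: capacity {capacity[role]}d, planned {planned[role]}d "
              f"-> {status} ({surplus:+d}d)")

    # Pooled totals hide the problem: 30d capacity vs 30d planned looks fine.
    print("pooled:", sum(capacity.values()), "vs", sum(planned.values()))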

How do you guys deal with this sort of situation?

+1  A: 

FogBugz uses EBS (Evidence Based Scheduling) to create a probability curve of when you will ship a given project based on existing performance data and estimates.

I guess you could do the same thing with this; you would just need to enter a task for the testers: "Browsing Internet waiting for developers (1 week)".
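For reference, the core of EBS is a Monte Carlo simulation over each estimator's historical estimate-vs-actual ratios. A rough sketch of the idea (not FogBugz's actual code; the history and estimates here are made up):

    import random

    # Historical ratios of actual/estimated time for one estimator
    # (made-up values; EBS gathers these from past tasks).
    history = [0.8, 1.0, 1.3, 2.0, 1.1]

    # Estimates for the remaining tasks, in days.
    estimates = [3, 5, 2, 8]

    # Simulate many possible futures by scaling each estimate
    # with a randomly chosen historical ratio.
    totals = sorted(
        sum(e * random.choice(history) for e in estimates)
        for _ in range(10_000)
    )

    # Read confidence levels off the sorted totals.
    print("50% confidence:", round(totals[len(totals) // 2], 1), "days")
    print("95% confidence:", round(totals[int(len(totals) * 0.95)], 1), "days")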

Chris Bartow
A: 

Sounds to me like your system is working, just not as well as you'd like. Is this a paid project? If it is, you could make pay a meritocracy: pay people based on how much of the work they get done. This would encourage cross-discipline work, although it might also encourage people to work on pieces that weren't theirs to begin with, or even internal sabotage.

Obviously, you'd have to be on the lookout for people trying to game the system, but it might work. Surely testers wouldn't want to earn half of what devs do.

Rob Rolnick
+2  A: 

Since Agile development is about transparency and accountability, it sounds like the testers should have assigned tasks that account for their velocity, even if that means they have a task for surfing the web waiting for testing (though I would think they would be better served developing test plans for the dev team's tasks). This will expose the inefficiencies in your organization, which isn't popular, but that is what Agile is all about. The bad part is that your testers may be penalized for something that is an organizational issue.

The company I worked for had two separate teams (dev and QA) with two different iteration cycles; the QA cycle was offset by a week. That unfortunately led to complexity when it came to task acceptance, since a product wasn't really ready for release until the end of the QA iteration. That isn't a properly integrated team, but neither is yours, from the sound of it. Unfortunately the QA team never really followed Scrum practices (no real planning, stand-up, or retrospective), so I can't really tell whether that is a good solution or not.

Matthew
Was the QA iteration the same length as the dev one? If so, you only get a 1-week delay, which is not that dramatic.
Stefano Borini
+3  A: 

We struggle with this issue too.

Here is what we do: when we add up capacity and tasks, we add them up together and separately. That way we know that we have not exceeded the total time for each group. (I know that is not truly Scrum, but we have QA folks who don't program, so to maximize our resources they end up testing and we (the developers) end up developing.)
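A sketch of that bookkeeping at planning time (the task records, the "group" tag, and the numbers are all hypothetical):

    # Hypothetical task records tagged by discipline.
    tasks = [
        {"name": "build API", "days": 5, "group": "dev"},
        {"name": "test API",  "days": 2, "group": "qa"},
    ]
    capacity = {"dev": 20, "qa": 10}

    # Check the totals together AND separately before committing.
    assert sum(t["days"] for t in tasks) <= sum(capacity.values()), \
        "team overcommitted"
    for group, cap in capacity.items():
        planned = sum(t["days"] for t in tasks if t["group"] == group)
        assert planned <= cap, f"{group} overcommitted: {planned}d > {cap}d"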

The second thing we do is really focus on working in slices. We try to pick tasks to finish first that can go to the QA folks fast. The trick is to focus on getting the smallest testable amount done and moved to the testers; if you try to get a whole "feature" done, you are missing the point. While they wait for us they usually put together test plans.

It is still a work in progress for us, but that is how we try to do it.

Vaccano
+1  A: 

This might be slightly off what you were asking, but here it goes:

I really don't like using velocity as a measure of how much work to do in the next sprint/iteration. To me velocity is more of a tool for projections.

The team lead/project manager/scrum master can look at the average velocity of the last few iterations and have a fairly good trend line to project the end of the project.
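As a sketch of that projection (hypothetical velocities, in story points per iteration):

    # Velocity of the last few iterations, in story points.
    recent_velocity = [18, 22, 20, 19]
    remaining_points = 160

    avg = sum(recent_velocity) / len(recent_velocity)
    print(f"average velocity: {avg:.1f} points/iteration")
    print(f"projected iterations to finish: {remaining_points / avg:.1f}")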

The team should be building iterations by commitment, as a team. Keep picking stories until the iteration has a good amount of work that the team will commit to complete. It's your responsibility as a team to make sure you aren't slacking by picking too few and not overcommitting by picking too many. Different skill levels and specialties work themselves out as the team commits to the iteration.

Under this model, everything balances out. The team has a reasonable work load to accomplish and the project manager has a commitment for completion.

Michael Groner
+1  A: 

Make the testers pair-program as passive peers. If they have nothing to test, at least they can watch for bugs as the code is written. When they have something to test, in the second part of the week, they move to the functionality/"user story compliance" level of testing. This way you have both groups productive, and the testers basically "comb" the code as it is written.

Stefano Borini
A: 

Hey.

First an answer about velocity, then my personal insight about testers in a non-cross-functional Scrum team and the early days of every sprint.

I see an inconsistency there. If the team is not cross-functional, you distinguish testers from developers, and in that case you must also distinguish them in the velocity calculation. If the team is not cross-functional, testers don't really increase your velocity: your velocity will be at most what the developers can implement, but no more than what the testers can test (if everything must be tested).
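Put as a formula, the effective velocity of a non-cross-functional team is bounded by the slower discipline (hypothetical numbers):

    # If everything must be tested, coded-but-untested work does not
    # count as done, so the usable velocity is the smaller of the two.
    dev_capacity = 25   # points the developers can implement per sprint
    test_capacity = 15  # points the testers can verify per sprint
    effective_velocity = min(dev_capacity, test_capacity)  # -> 15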

Talk to your scrum master, otherwise there will always be problems with velocity and estimation.

Now as for testers and the early days of the sprint: I work as a tester in a non-cross-functional team with 5 devs, so this answer may be a bit personal. You could solve this in two ways: a) change the work organization by adding a separate test sprint, or b) change the way the testers work.

In a) you create a separate testing sprint. It can run in parallel with the devs' sprint (just shifted by those few days), or you can make it happen once every two or three dev sprints. I have heard about such solutions but have never worked this way.

In b) you must ask the testers to review their approach to testing activities. Maybe it depends on the practices and tools you use, or the process you follow, but how can they have nothing to do in those early days? As I mentioned before, I work as a tester with 5 developers in a non-cross-functional team. If I waited until a developer finished his task, I would never test all the features in a given sprint. Unless your testers perform only exploratory testing, they should have things to do before a feature is released to the test environment. There are some activities that can (or must) be done before the tester gets the feature/code into his hands. The following is what I do before features are released to the test environment:
- go through the requirements for the features to be implemented
- design test scripts (high-level design)
- prepare draft test cases
- go through possible test data (if the change being implemented manipulates data in the system, you need a snapshot of that data to compare later with what the feature does to it)
- wrap everything up in test suites
- communicate with the developer as the feature is being developed - this way you get a better understanding of the implemented solution (instead of asking once his mind is already on another feature)
- make any necessary changes to the test cases as the feature evolves

Then when the feature is complete you:
- flesh out the test cases with any details not known to you earlier (trivial things: a button name can change, or an additional step appears in a wizard)
- perform the tests
- raise issues

Actually I find myself spending more time on the first part (designing tests and preparing test scripts in the appropriate tool) than on actually performing those tests.

If they do all they can right away instead of waiting for code to be released to the test environment, it should help with this initial gap and minimize the risk of the testers not finishing their activities before the end of the sprint.

Of course there will always be less for the testers to do in the beginning and more at the end, but you can try to minimize the difference. And if the above still leaves them lots of time to waste at the beginning, you can give them tasks that involve no coding: some configuration, some maintenance, documentation updates, and so on.

yoosiba
A: 

The solution is never black and white, as each sprint may contain stories that require testing and others that don't. There is no problem in Agile with apportioning a tester for, say, 50% of their time in one sprint and 20% in the next. There is no sense in trying to apportion 100% of a tester's time to a sprint and trying to justify it. Time management is the key.

Liam Reilly