In my last job I worked with a company that was going from NO methodology to a Scrum/Agile method. Many problems were encountered, including the fact that the Scrum "expert" really didn't know how to implement Scrum effectively.

The plan they used was relatively simple:
1. Hold Sprint Planning meetings where QA and Dev time were given as a single combined estimate - not one estimate for QA time and one for Dev time.
2. When the combined estimates reached the total capacity for the Sprint, no more features were added to that sprint.

The major problem was that QA generally doesn't know HOW a Developer is going to write code...after all, they aren't coders! So the QA teams really had no solid basis on which to form a decent time estimate. Conversely, 99.9% of Developers don't know the difference between sanity testing, functionality testing, regression testing, and UAT...so they couldn't accurately estimate what QA time would be necessary for certain features either.

Ultimately I took a bullet for my QA team and raised this concern with management, where I was promptly fired for not being able to work in a Scrum environment, but that's really neither here nor there.

But it does make me wonder what the error was here. Was the problem my rigidity in wanting to put hard numbers on things, or was it the expectation that QA should inherently know how long something is supposed to take to code?

A: 

I suppose you talk to each other, work out a time budget between the two teams, and then submit it to management.

maxwellb
A: 

This doesn't answer your question, but in my opinion, QA personnel must be able to do everything with code that can be done without a development IDE. Understanding design, business logic, how to close loops, designing tests, etc. - all of this can be done by QA people.

Pavel Radzivilovsky
Which, for the most part, I agree with. However, when someone hasn't even coded something, you don't know where you're going to have to test brand new code. Therefore, you run into the problem we have...if you don't know WHAT you're testing (and you wouldn't, because it hasn't even been designed yet), you can't give a good estimate of how long it would take to test it properly.
KC - QA San Diego
A: 

What I have found to work well is to offset the dev/test cycles: you code in one iteration and QA in the next. This gives the QA team time to properly scope the work rather than basing their estimate on the developers' estimate.

Daniel Zapata
That's a great idea.
KC - QA San Diego
A: 

I've been in both QA and Dev. The process isn't really that much different in either world because it boils down to a simple thing: All estimates are guesses. They're based on experience, hunches, and an assessment of the complexity and risk of a particular set of problems, but they are, at best, good guesses.

You can make them more useful by analyzing the set of known tasks around particular feature areas. In QA, that means looking at the problem from the angles you have available: analyze the variations in any possible user story, analyze the possible inputs if you have a mockup of the screen, and so on. Do some basic arithmetic based on better guesses about how long it takes to run those variations manually or automatically. Make a little two-dimensional matrix that shows some of the key scenarios based on rough equivalence classes, and figure out how much time it would take to a) write automation tests for each item, based on previous experience, and b) run manual smoke tests, if needed.

Figure out how often you'd need to run those tests during the scheduled timeline. Apply a multiplier based on the probability of error (1.5x, 2.0x, sometimes 3.0x), using your judgment and the relative importance of getting it right. If it's really important that one feature be well tested and less important that another feature be, adjust your estimates accordingly, but identify that assumption in your estimate.
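To make that arithmetic concrete, here's a minimal sketch of the kind of calculation I mean (Python, with made-up features, scenario counts, and multipliers purely for illustration - your matrix and numbers will be different):

    # Rough sketch of the estimate arithmetic described above.
    # Feature -> list of (scenario, manual minutes per run, automation hours).
    # All numbers are hypothetical examples.
    scenarios = {
        "login": [
            ("valid credentials", 5, 2.0),
            ("invalid password", 5, 1.5),
            ("locked account", 10, 2.5),
        ],
        "report export": [
            ("small data set", 10, 3.0),
            ("empty data set", 5, 1.0),
        ],
    }

    runs_per_sprint = 3  # how often the manual pass gets repeated in the sprint
    # Judgment-call risk multipliers (1.5x - 3.0x) per feature.
    risk_multiplier = {"login": 2.0, "report export": 1.5}

    total_hours = 0.0
    for feature, rows in scenarios.items():
        manual = sum(minutes for _, minutes, _ in rows) / 60.0 * runs_per_sprint
        automation = sum(hours for _, _, hours in rows)
        estimate = (manual + automation) * risk_multiplier[feature]
        total_hours += estimate
        print(f"{feature}: {estimate:.1f} hours")

    print(f"total QA estimate: {total_hours:.1f} hours")

It's still a guess, but it's a guess whose assumptions (scenario list, run frequency, risk multiplier) are written down and can be challenged.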

Scheduling is about managing risk, not eliminating it. It's meant to give you a big-picture look at what needs to be done. The details are never quite right, and that's ok. I can't think of one project I've worked on where everything went according to plan, especially on the dev side.

Agile doesn't change the equation that much; it does change the timeline a bit. It's a good idea to make sure there's a little headroom for testing toward the end of a cycle, in spite of the dogma against it, because you also need dev time to fix the issues that will inevitably come up. But you don't have to turn this into "mini-waterfall": in the unlikely event that all the features are working, the devs can keep busy by picking off tasks that were lower priority in the iteration.

I wonder if, in your case, the development team was making QA time estimates on your behalf? It's usually not a good idea to let someone else make the call on that. The people with the most skin in the game should have the most heavily weighted opinion. But a lot of developers can make pretty good risk assessments, so it's certainly worth listening to them. In Agile development cycles, roles should ideally be less exclusive than on Waterfall teams, but I am fairly convinced that some people are simply better at QA tasks, and they will naturally pick off most of that work, even in a team that tries to walk the ideology of Agile. If your problem was that you weren't willing to make estimates without knowing the implementation details, I can say that this is something you'll need to get over; even in old-school methodologies, I rarely had the luxury of complete knowledge.

One thing I would add is this: the people with QA talent should be on the same teams as their Development counterparts. It's ok if their professional development is managed by a different manager, but not ok if they are part of different sprint teams. So if you have a "test team sprint" and a "dev team sprint", in my humble opinion, you're crippling the potential for collaboration and communication between the Dev- and QA-focused resources.

JasonTrue