What ratio of [senior] developers to testers do people think is best?

Obviously this will depend partly on development/maintenance throughput, but is there a rule-of-thumb that a new company/project might work from?

Also, would you use 'pure' testers, or would you combine testing with other roles (e.g. documentation, user training, etc)?

Obviously, answers may depend on company strategy / development models used, so please specify if you're answering in general, or if for a specific style of product/release, and so on.

+4  A: 

There was a recent, relevant article on InfoQ that you might find interesting.

Kevin
Thanks, it looks like the article itself has more useful links also.
Peter Boughton
+3  A: 

In my opinion, a good metric to use in determining the number of testers needed is the complexity of the requirements, not the number of developers. If I were hiring testers, I would take a look at the list of requirements (or break the design document into a list of requirements if necessary), and think about how much testing time each requirement would need to verify that it was working correctly. I'd use that initial analysis to hire a base of testers, and then add testers later if the workload turned out to be too high for my initial base.

If you're putting together a budget and hiring testers later isn't an option, you might want to budget in slightly more testing resources than your analysis indicates. Too much is always better than not enough.
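
A minimal sketch of this kind of back-of-envelope estimate, in Python; the requirement names, hour figures, and buffer factor below are hypothetical placeholders, not recommendations:

    import math

    # Hypothetical estimated testing hours to verify each requirement,
    # taken from the requirements list / design document.
    testing_hours_per_requirement = {
        "user login": 6,
        "report export": 10,
        "payment flow": 24,
        "admin dashboard": 12,
    }

    PRODUCTIVE_HOURS_PER_TESTER_PER_WEEK = 30  # allows for meetings, bug admin, etc.
    TEST_CYCLE_WEEKS = 2                       # time available for one full test pass
    BUFFER = 1.25                              # "too much is better than not enough"

    total_hours = sum(testing_hours_per_requirement.values()) * BUFFER
    capacity_per_tester = PRODUCTIVE_HOURS_PER_TESTER_PER_WEEK * TEST_CYCLE_WEEKS

    testers_needed = math.ceil(total_hours / capacity_per_tester)
    print(f"Initial tester base: {testers_needed}")
    # -> 2 with these placeholder numbers; add more later if the workload is too high.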

Whether to use "pure" testers is another question that's really dependent on how many testing resources you need. I've found that a good compromise is to hire testers who are capable of other jobs, and use them in other capacities at times when the testing load is light.

Edit: If you're lucky enough to have a set of acceptance tests early on, please substitute "acceptance tests" for "requirements list" above. :-)

MattK
The number of developers varies with complexity too, so there's bound to be some correlation.
Arkadiy
+1  A: 

I would say that (depending on how quickly you need things tested) with automation you could have 1 or 2 testers for every 5 developers.

Why:

  • With automation, they just need to worry about testing the new modules.
  • Regression tests will take care of the older ones.
  • 1 or 2 testers can easily cover all the work 5 developers will do week over week, for example.
  • A good ratio I've been taught is that for every 10 hours of development, the quality assurance team will take around 3 or 4 hours to track down most of the defects those 10 hours generated (see the quick check below).
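
A quick sanity check of those two ratios; the weekly-hours figures here are assumptions for illustration only:

    import math

    # Rough check that "3-4 QA hours per 10 dev hours" lines up with
    # "1 or 2 testers per 5 developers". All figures are assumptions.
    DEVELOPERS = 5
    DEV_HOURS_PER_WEEK = 40          # per developer
    QA_HOURS_PER_10_DEV_HOURS = 3.5  # midpoint of "3 or 4 hours"
    TESTER_HOURS_PER_WEEK = 40

    dev_hours = DEVELOPERS * DEV_HOURS_PER_WEEK
    qa_hours = dev_hours * QA_HOURS_PER_10_DEV_HOURS / 10
    testers = math.ceil(qa_hours / TESTER_HOURS_PER_WEEK)

    print(f"{dev_hours} dev hours/week -> ~{qa_hours:.0f} QA hours/week -> {testers} tester(s)")
    # 200 dev hours/week -> ~70 QA hours/week -> 2 tester(s)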

Hope it helps :)

fmsf
+4  A: 

Joel makes a good argument for 1 tester for every 2 engineers, as well as covering the excuses people use for not having those testers.

ctacke
+1  A: 

Also, would you use 'pure' testers, or would you combine testing with other roles (e.g. documentation, user training, etc)?

It depends on the type of testing, but I would not burden testers with other roles. Competent test engineers are worth their weight in gold (the same as competent software engineers). If you give them tasks outside of their domain of expertise, you're going to slow them down and p*ss them off. Do software engineers like doing documentation or user training? Usually not. Neither do testers.

However, there's nothing wrong with supplementing your test team with people from other areas, especially for usability testing, acceptance testing, quick reviews, etc.

Patrick Cuff
+3  A: 

I blogged about this once here. The most relevant excerpt is below.

"I've seen high quality products produced on a 10:1 dev:test ratio, and horrible products created with a 1:1 ratio. The difference is in attention and care for quality. If everyone (including management) on the team deeply cares about the product quality, it has a good chance of happening regardless of the ratio. But if quality is something that is supposed to be tested into the product, by all means have at least 1 tester for every developer - more if you can get them."

Alan
+1  A: 

There is no generalized "good" ratio.

Clearly, the time required to test something is contextual - it depends on factors that may have little or nothing to do with how long it took to develop that feature.

Also consider:

  • what counts as Development?
  • what counts as Testing?
  • If we were going to perform regression testing anyway, does that count as "zero" additional testing hours?

see: http://www.sqablogs.com/jstrazzere/150/What+is+the+%22Correct%22+Ratio+of+Development+Time+to+Test+Time%3F.html

Joe Strazzere
+2  A: 

First of all, a developer-to-tester ratio is a good rule of thumb, but a bad rule to apply rigidly.

What you need to consider is how many use cases your application has. Applications that users will interact with in an uncontrolled manner (e.g. web applications or desktop applications) will require more testers than a similar console application.

An application that takes a single file and detects regex patterns in it will require fewer testers than a new OS.

While those are general maxims, the practical advice would be to use some sort of approximate formula based on these factors:

1) How many (compartmentalized) use cases are there?

I say compartmentalized use cases because if you include state changes and persistent variables, then seemingly unrelated parts of a program can turn out to be related. For example, 2 + 2 = 4 (first use case) and 2 * 2 = 4 (second use case). That's two simple operators, so two classes of use cases. However, if you can add and then multiply, you can't check ONLY add and multiply individually; you must check them in all their possible permutations.

When examining the number of use cases, make sure you include the use cases that involve chaining of commands.
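
As a small illustration of how chained commands multiply the use cases, sticking with the calculator example; the operation list and chain lengths below are arbitrary:

    from itertools import product

    # Each chain of operations is its own class of use case, so the count
    # grows exponentially with the length of the chain.
    operations = ["add", "subtract", "multiply", "divide"]

    for chain_length in range(1, 4):
        chains = list(product(operations, repeat=chain_length))
        print(f"chains of length {chain_length}: {len(chains)} use case classes")

    # chains of length 1: 4 use case classes
    # chains of length 2: 16 use case classes
    # chains of length 3: 64 use case classes
    # Each class still needs representative inputs, so the real test matrix is larger still.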

2) How long does it take to test each one?

This doesn't mean (to extend the calculator metaphor) only adding 2 + 2 and looking at the answer. You must include the time it takes to recover from a crash. If the answer is incorrect, you would expect the tester to log the bug with a screenshot and specific instructions on how to recreate it. If you don't give them time for this kind of administrative work, then you are baking into your plan the assumption that you have no bugs. And if we're assuming that, then why have testers at all ;)

A complex project will have both a high number of use cases and a high number of developers, but the correlation is not guaranteed. You are better off examining your use cases thoroughly and making an educated decision about how many testers will be required.

Added Bonus. Once you've broken down the application so thoroughly, you might find some use cases that were never considered in the design phase, and fix them before a tester finds them.

Thanks. Thinking in terms of use cases and testing time is a good way to address this problem, and as you say, breaking down the application like this can also help to identify potential problems to be checked/fixed.
Peter Boughton