views: 1250

answers: 8

While trying to advocate for more developer testing, I find the argument "Isn't that QA's job?" comes up a lot. In my mind, it doesn't make sense to give the QA team all of the testing responsibilities, but at the same time Spolsky and others say you shouldn't be using $100/hr developers to do something a $30/hr tester could be doing. What are the experiences of others in a company with a dedicated QA team? Where should the division of work be drawn?

Clarification: I meant QA as a validation and verification team. Devs should not be doing the validation (customer-focused testing), but where is the dividing line for verification (functional testing)?

+3  A: 

There should always be some developer testing. If a developer produces too many bugs, he/she ends up wasting time later fixing them. It is important that developers don't develop the attitude of "oh well, if I leave a bug in, it will be caught and I'll get a chance to fix it later."

We try to keep a threshold for bugs produced. If this threshold is crossed during testing then the developer is answerable for it. It is up to you to decide what this threshold is (for us it can vary from project to project).

Also, all unit testing is done by the developers.

Vaibhav
+2  A: 

I have only been in the industry for a year, but in my experience devs are responsible for unit testing their features, while QA is responsible for testing scenarios. QA would also be expected to test any boundary conditions.
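
As a concrete illustration of that split, here is a minimal sketch of the developer-owned unit test side, assuming Python's unittest module and a hypothetical clamp function; under this division, the cases at and just past the boundaries (0 and 10 here) are the kind QA would add:

    import unittest


    def clamp(value, low, high):
        """Hypothetical feature code: restrict value to the range [low, high]."""
        return max(low, min(value, high))


    class ClampTest(unittest.TestCase):
        # Developer-owned "feature works" test.
        def test_value_inside_range_is_unchanged(self):
            self.assertEqual(clamp(5, 0, 10), 5)

        # Boundary conditions - the kind of cases this answer assigns to QA.
        def test_values_at_the_boundaries(self):
            self.assertEqual(clamp(0, 0, 10), 0)
            self.assertEqual(clamp(10, 0, 10), 10)

        def test_values_just_outside_the_boundaries(self):
            self.assertEqual(clamp(-1, 0, 10), 0)
            self.assertEqual(clamp(11, 0, 10), 10)


    if __name__ == "__main__":
        unittest.main()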

RedDeckWins
A: 

Here are some ways that developer testing is the most efficient / highest payoff:

  • Developer modifies a shared library while working on a feature - the dev has insight into possible side effects that QA/validation testers don't
  • Developer is unsure of the performance of a library call and writes a unit test (see the sketch after this list)
  • Developer discovers a use-case path not considered in the spec that the code must support, writes the code, updates the spec, and writes a test
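
For the second bullet, a minimal sketch of what such a performance-oriented unit test might look like, assuming Python's unittest, a hypothetical parse_records library call, and an illustrative latency budget:

    import time
    import unittest


    def parse_records(lines):
        """Hypothetical library call whose performance the developer is unsure about."""
        return [line.split(",") for line in lines]


    class ParseRecordsPerformanceTest(unittest.TestCase):
        def test_parses_10k_lines_within_budget(self):
            lines = ["a,b,c"] * 10_000
            start = time.perf_counter()
            parse_records(lines)
            elapsed = time.perf_counter() - start
            # The 0.5s budget is an assumption; set it from the real requirement.
            self.assertLess(elapsed, 0.5)


    if __name__ == "__main__":
        unittest.main()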

It's arguable how much test duty should be carried out by the dev in the third example, but I argue that it's most efficient for the dev because all of the related minutiae from many layers of documentation and code are already in her short-term memory. This perfect storm may not be attainable by a tester after the fact.

Are we talking about QA or validation? I think of QA along the lines of inspection checklists, code standards enforcement, UI guidelines, etc. If we're talking validation, it doesn't make sense for devs to spend a lot of time authoring and executing formal test cases, but devs must provide all of the rationale and design documentation needed to author good tests.

Aidan Ryan
+10  A: 

It's the difference between "black box" testing (where you know what the code is supposed to do, but not how it works) and "white box" testing (where knowing how it works drives how you test it). "Black box" testing is what most people think of when you mention Quality Assurance.

I work for a company where the QA team are also software developers. (That narrows the field a lot if you care to guess the company.) I know Joel's opinion, and my experience leads me to partially disagree: for the same reason that a "white hat" hacker is more effective at finding security holes, certain kinds of errors are more effectively found by white-box testers who know how to write code (and therefore know what the common mistakes are - for example, resource management issues like memory leaks).

Also, since QA-oriented developers are part of the process from the initial design phase, they can theoretically help to drive higher-quality code throughout the process. Ideally, for each developer working on the project with a mental focus on functionality, you have an opposing developer with a mental focus on breaking the code (and thus making it better).

Seen in that light, it's less a matter of using developers as testers than a kind of disconnected pair programming where one developer's emphasis is on controlling quality.

On the other hand, a lot of testing (such as basic UI functionality) frankly doesn't need that kind of skill. That's where Joel has a point.

For many businesses, I could see a system where programming teams trade off code review and testing duties for each others' code. Members of the Business Logic team, for example, could spend an occasional tour testing and reviewing code for the UI team, and vice-versa. That way you're not "wasting" developer talent on full-time testing, but you are gaining the advantages of exposing the code to (hopefully) expert scrutiny and punishment. Then, a more traditional QA team can take up the "black box" testing.

Wing
+2  A: 
Gishu
A: 

My general stance is that testers should never find unit-level bugs (including boundary cases). The bugs testers find should be at the component, integration, or system level. Of course, at first testers may find "happy path" bugs and other simple bugs, but these anomalies should be used to help developers improve.

Part of your problem could be using $100-per-hour developers and $30-per-hour testers :}. But regardless of the cost, bugs found earlier in the development cycle are inevitably cheaper to fix, so you'd probably still save money by having the developers own more of the testing. If you have a highly paid dev team and hack testers, you will probably find a lot of the big obvious issues, but you'll miss a lot of the more obscure bugs that will come back to haunt you later.

So, I suppose the answer to your question is that testers should test as much as you want them to. You can fire all of your testers and have the developers do all of the testing, or you can hire an army of testers and let the developers check in whatever they want.

Alan
+2  A: 

Where appropriate, the Quality Control team - not the developers - should conduct security, regression, usability, performance, stress, and installation/upgrade testing.

Developers should do unit testing, with code coverage of the code being written as a minimal goal.
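
A minimal sketch of what that might look like in practice, assuming a Python project, the third-party coverage.py package, a hypothetical myapp package, and tests discoverable under tests/:

    # Run the developer-owned unit tests and report statement coverage.
    import unittest

    import coverage

    cov = coverage.Coverage(source=["myapp"])  # "myapp" is a hypothetical package name
    cov.start()

    suite = unittest.defaultTestLoader.discover("tests")
    unittest.TextTestRunner(verbosity=2).run(suite)

    cov.stop()
    cov.save()
    cov.report()  # prints per-file statement coverage; agree on the minimal goal as a team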

In between, there is still quite a bit of testing to be done:

  • Full code-path testing
  • Component testing
  • Integration testing (of components)
  • System (integration) testing
  • etc.

The responsibility for these is split between QA and Development based on a mutual agreement about what makes the most sense. Some components can only be tested through unit tests; others are "sufficiently" covered during integration testing, and so on.

Talk to each other, find out what everyone is most comfortable doing. It will take some time, but it's well worth it.

not-bob
+2  A: 

Testing should be as automated as possible, which turns it back into dev work if the testers are writing code that gets added to the automated test suite.

Also, I've found that we get a lot of QA done in code review, as people will suggest extra edge and corner cases they want to see added to the unit tests that are being reviewed (along with the code they test of course).
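
For instance, a review comment like "what about empty input?" often turns into a couple of extra cases in the unit test under review. A minimal sketch, assuming Python's unittest and a hypothetical word_count function:

    import unittest


    def word_count(text):
        """Hypothetical function whose tests are being reviewed."""
        return len(text.split())


    class WordCountTest(unittest.TestCase):
        def test_typical_sentence(self):
            self.assertEqual(word_count("the quick brown fox"), 4)

        # Corner cases of the kind a reviewer might ask to see added.
        def test_empty_string(self):
            self.assertEqual(word_count(""), 0)

        def test_only_whitespace(self):
            self.assertEqual(word_count("   \t\n"), 0)

        def test_multiple_spaces_between_words(self):
            self.assertEqual(word_count("a   b"), 2)


    if __name__ == "__main__":
        unittest.main()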

pjz