views:

1122

answers:

10

I am a big fan of agile software development, which includes practices like code reviews and extensive testing. But my question is: who should ACTUALLY be testing the code? Is it the developer, who wrote it from scratch and who, by the way, writes test cases (just like me) for each feature he/she rolls out? Or a dedicated tester, who never touched the feature's code at all and just starts testing (breaking) the software in whichever way he/she can?

+9  A: 

How about both?

You want to have as many different test stages as possible, as each is likely to uncover a different set of bugs. The developer herself, especially, is likely to be biased toward overlooking the very same problematic cases she failed to consider when writing the code in the first place.

Thilo
+3  A: 

Both - unit tests should be written by developers (ideally as TDD). Many other tests such as regression tests, usability tests, etc can be executed by dedicated testers with good results.
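As an illustration of the developer-written, TDD-style unit test mentioned above, here is a minimal Python sketch (the `slugify` function and its cases are hypothetical, not from the thread); in TDD the test would be written first and the implementation then made to pass it:

```python
# Hypothetical function under test: turn a page title into a URL slug.
def slugify(title):
    return "-".join(title.lower().split())

# Developer-written unit test: one small, fast check per behavior,
# run on every change so regressions are caught immediately.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Agile  Testing ") == "agile-testing"

test_slugify()
print("unit tests passed")
```

In a real project these asserts would live in a test framework (e.g. `unittest` or `pytest`) and run automatically in the build.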

Brian Rasmussen
+21  A: 

Developers, an automated build server, and QA engineers should all run tests. It all depends on the size of your organization, the product, and the style of development. But unit testing does not equal all testing, and no matter how small your team is, I think having a QA engineer helps drive the testing (and often the requirements).

The Joel Test: 12 Steps to Better Code.

If your team doesn't have dedicated testers, at least one for every two or three programmers, you are either shipping buggy products, or you're wasting money by having $100/hour programmers do work that can be done by $30/hour testers. Skimping on testers is such an outrageous false economy that I'm simply blown away that more people don't recognize it.

Top Five (Wrong) Reasons You Don't Have Testers

  1. Bugs come from lazy programmers.
  2. My software is on the web. I can fix bugs in a second.
  3. My customers will test the software for me.
  4. Anybody qualified to be a good tester doesn't want to work as a tester.
  5. I can't afford testers!
eed3si9n
Well said, but I had no idea programmers got $100/hour.
Sydius
At today's GBP/USD exchange rate, I don't *quite*. But it's in the right ballpark.
kpollock
Actually, the 4th point, "Anybody qualified to be a good tester doesn't want to work as a tester", is sadly quite common. It's hard to find testers who genuinely love their work.
kender
@kender, Joel acknowledges it's "very hard to hire good testers" in the linked article. Still not a good reason not to have a tester.
eed3si9n
@kender, on point 4: if you have that guy who is **so** qualified, pay him $30/hour, and give him the same lame tasks as other, less qualified people, why do you still wonder that he quits? It's like wondering why developers don't work at McDonald's. People must be given tasks according to what they are capable of, and be paid accordingly.
yoosiba
+2  A: 

As many different people as possible should do the testing, because everyone uses software differently.

The developers can only test that a feature works as they understand it.

Testers can test for features as they are defined in their test plan and find out if the software passes the defined test cases.

The customer(s) can test whether the software does what they expected it to do.

Each of these groups has a very different angle to look at the software.

"Innocent bystanders" can also be used to test the software. They are often not biased in any way and provide yet another unique test approach.

HS
+2  A: 

Unit Tests are not exactly the same as QA and user acceptance testing. Ideally, your organization would have professional testers who specialize in those areas, regardless of whether you are doing TDD or not. Having only the developers test is very sub-optimal in terms of QA.

BobbyShaftoe
+4  A: 

Both. Developers definitely should be writing and running unit tests on their code. If you have a dedicated tester, they would be writing and running other types of tests -- integration tests, functional tests, acceptance tests. The latter may be automated and may be developed by or in conjunction with the developer. I don't see much reason for the tester to be running unit tests unless they are part of the overall testing process since they shouldn't be getting code that doesn't pass unit tests.
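The unit/integration split described above can be pictured with a minimal Python sketch (the `price_with_tax` service and its `gateway` dependency are hypothetical): the developer's unit test mocks the dependency, while a tester's integration or acceptance test would exercise the real one end to end.

```python
from unittest.mock import Mock

# Hypothetical service: computes a price via some gateway
# (in reality a database or HTTP API).
def price_with_tax(gateway, item):
    return round(gateway.price(item) * 1.25, 2)

# Unit test (developer's job): the gateway is mocked, so the test
# is fast and isolated -- no database or network involved.
def test_price_with_tax_unit():
    gateway = Mock()
    gateway.price.return_value = 10.00
    assert price_with_tax(gateway, "book") == 12.50

# An integration/acceptance test would instead wire up the real
# gateway and verify the whole path; that layer is omitted here.
test_price_with_tax_unit()
print("unit test passed")
```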

tvanfosson
+2  A: 

I'd say: both.

There are a few levels of testing: unit tests, integration tests, usability tests...

The developer should include unit tests in the code and make sure they all pass, so that no future change or new feature breaks earlier functionality.

But this won't cover all the tests you want to have - you also want someone to abuse your application in every possible way, to try to break it into pieces and uncover cases you haven't coded for yet.
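The "abuse it in every possible way" idea can be sketched as a crude fuzz-style check in Python (the `parse_line` function is hypothetical): hammer the code with malformed and random input and insist it either returns something sane or fails with its one documented error, never an unrelated crash.

```python
import random
import string

# Hypothetical function under test: splits a "key=value" config line.
def parse_line(line):
    key, _, value = line.partition("=")
    if not key:
        raise ValueError("missing key")
    return key.strip(), value.strip()

# Tester-style abuse: random printable garbage of random length.
# The only acceptable outcomes are a (str, str) pair or ValueError.
def abuse_test(iterations=200):
    rng = random.Random(0)  # seeded, so failures are reproducible
    for _ in range(iterations):
        line = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 20)))
        try:
            key, value = parse_line(line)
            assert isinstance(key, str) and isinstance(value, str)
        except ValueError:
            pass  # an expected, documented failure mode

abuse_test()
print("abuse test survived")
```

Dedicated fuzzing and property-based testing tools (e.g. Hypothesis) automate this pattern far more thoroughly.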

Then you might want to sit a complete n00b in front of your application to see if he/she can use it - in my opinion, if the application requires a new user to read a 400-page manual before getting started, it's either very complicated by its nature or it's badly written/implemented from a usability point of view.

kender
+2  A: 

The law of diminishing returns applies here:

When your unit test coverage reaches a certain point (I'd say 70-80%), the cost of testing the last 20-30% escalates. (You can exceed 100% coverage, but I'm simplifying.)

Same thing happens with web-tests and integration tests.

The net effect of testing the last 20% within each test discipline has a certain "gold plating" aspect; you probably don't want to do it because the ROI decreases with each additional percentage.

So the pragmatic approach is to aim for 70-80% coverage in all types of testing. The rest you let your customers catch (!)

And yes, the developer does all of these tests. The tester does some of them, not the automated ones.

Edit: My point is NOT about the numbers, or about using coverage to determine when to stop. There is a certain point where increasing test coverage becomes radically more expensive. IMO that's when you should increase coverage by switching to a different test method, to avoid diminishing returns. Traditional "testers" are usually involved in only some of these techniques.

krosenvold
I wholeheartedly disagree with the approach of using code coverage to decide whether you have tested enough. I really don't see it as reasonable to let customers catch significant bugs.
Ilja Preuß
Well there aren't going to be any significant ones left for your customers ;) I'll add an edit
krosenvold
+10  A: 
Gishu
Very good answer!
Ilja Preuß
Oh yes, they do love the top-right quadrant ...
Thilo
+1  A: 

This is a topic discussed in almost all public forums, and I agree: unit testing should be done by the developer himself, because no one else understands the code better than the developer who wrote it from scratch.

But tests such as system testing, UAT, etc. should be done by testers for the best results. There are a couple of reasons for this, but the one that really makes a difference is the way a tester thinks, which is very different from the way a developer thinks. Testers think like users and thus help find flaws in both functionality and usability. Testing is not just about test cases.

I agree with Kender, and appreciate his use of the word 'ABUSE', which is so damn appropriate for what the whole concept of testing is about.

Ayreej