I am a .NET developer (as my name suggests). We recently hired a lead test analyst (where "lead" implies there will be testers working under her), and I am working under/with her (I say "under" because she has the experience to mentor me and check my work). This arrangement is fine, and I was put in this position because I have most of the skills a test analyst needs, even though I am a developer first (which says a lot about how strong my skill set is).

The problem is that a developer working as a tester alongside a dedicated tester causes friction. I do web testing with one tool and write the code myself (I can use loops and complex logic; as a dev, writing code is second nature to me), while the tester uses another tool (nowhere near as powerful) with the record/playback approach.
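
To illustrate the kind of thing a coded test can do that record/playback struggles with, here is a minimal data-driven sketch. I'm using Selenium WebDriver for .NET purely as an example (it isn't necessarily the tool in question), and the URL and element IDs are invented for illustration:

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    class SearchSmokeTest
    {
        static void Main()
        {
            // One coded loop covers every term; a record/playback tool
            // would typically need a separate recording per term.
            string[] terms = { "widgets", "gadgets", "gizmos" };

            IWebDriver driver = new ChromeDriver();
            try
            {
                foreach (var term in terms)
                {
                    driver.Navigate().GoToUrl("https://example.com/search"); // assumed URL
                    driver.FindElement(By.Id("query")).SendKeys(term);       // assumed element id
                    driver.FindElement(By.Id("go")).Click();                  // assumed element id

                    // Fail fast if the results header never mentions the term.
                    var header = driver.FindElement(By.Id("results-header")).Text;
                    if (!header.Contains(term))
                        throw new Exception($"No results shown for '{term}'");
                }
            }
            finally
            {
                driver.Quit();
            }
        }
    }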

Our project manager thinks it is best for us to be in sync, using a common tool and a common approach/workflow, but I don't see an issue with using separate tools (especially as I already know the tool the tester is using, and its API is weak). Does it really matter if the tools we use are not the same? Deadlines are tight, so when I need to write code I want to be at my most productive, which I won't be in the weaker tool.

Thanks

A: 

The obvious answer is that separate tools testing common areas are probably going to exercise different paths through the code.

I would definitely keep the approach you have: a deeper testing effort that exercises the code directly below the GUI, combined with a WinRunner-style GUI approach that exercises the same code through the presentation layer.
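
For instance, a below-the-GUI check might hit an HTTP endpoint directly while the GUI tool drives the same feature through the browser. A minimal sketch of the direct kind, assuming a hypothetical /api/orders endpoint (the URL is made up for illustration):

    using System;
    using System.Net.Http;

    class BelowTheGuiTest
    {
        static void Main()
        {
            using var client = new HttpClient();
            // Hypothetical endpoint: a GUI tool reaches this same logic
            // through pages and buttons; this test skips the presentation layer.
            var response = client.GetAsync("https://example.com/api/orders/42").Result;

            Console.WriteLine(response.IsSuccessStatusCode
                ? "PASS: order endpoint responded"
                : $"FAIL: got HTTP {(int)response.StatusCode}");
        }
    }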

HTH

Rob Wells
A: 

OTOH, if you're running different tests, and one of you is telling management "tests pass, everything's fine," and the other says "we still have twelve defects," then you have a problem.

keturn
But if both sets of tests are supposed to pass and they're exercising different code paths, then they have still uncovered defects in the program under test, and those defects must be corrected.
Rob Wells
+1  A: 

Being productive is of course very important. Using separate testing processes to do the same job has some advantages and disadvantages. Here are some of the disadvantages:

  • No code reuse / mindshare. If you learn or develop something, it probably won't carry over beyond your tool.
  • If there are fixed costs involved with these tools, those are duplicated.
  • Management may not like it, because there is duplication at some level.
  • All parties need to agree that using two tools is a good thing, or management won't buy in.

If your tools serve two different testing techniques, each will be useful in its own right; just because two things both "test" doesn't mean they are the same thing. I would tailor my case to management based on what I thought was best for the project.

dpb
A: 

I see an inherent difference between the tests a developer needs to run to ensure his code is correct and the tests a tester runs by exercising the user interface. Much of the code is never exercised from the user interface but still has to work correctly, and as a dev you are not exempt from unit testing the code before sending it to test. There is no reason these two types of tests can't be done with separate tools, since they are not testing for the same thing; what works for a group of testers won't work for a group of developers.
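
For example, the developer-side tests are typically unit tests against business logic that no UI click reaches directly. A minimal sketch, assuming NUnit and a hypothetical OrderCalculator class (both names are invented for illustration):

    using NUnit.Framework;

    // Hypothetical business logic: nothing in the UI calls this directly,
    // but it still has to be correct.
    public class OrderCalculator
    {
        public decimal Total(decimal price, int quantity, decimal discount)
        {
            if (quantity < 0)
                throw new System.ArgumentException("quantity must be non-negative");
            return price * quantity * (1 - discount);
        }
    }

    [TestFixture]
    public class OrderCalculatorTests
    {
        [Test]
        public void Total_AppliesDiscount()
        {
            var calc = new OrderCalculator();
            // 10 * 10 * (1 - 0.10) = 90
            Assert.AreEqual(90m, calc.Total(10m, 10, 0.10m));
        }

        [Test]
        public void Total_RejectsNegativeQuantity()
        {
            var calc = new OrderCalculator();
            Assert.Throws<System.ArgumentException>(() => calc.Total(10m, -1, 0m));
        }
    }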

Sit down with the tester and go through an example of the tests you write for a feature and the ones he or she would create using the other tool. Together, create a presentation for management that shows why both tools are needed and how using two tools increases the chances of finding more bugs before going to production.

HLGEM