views: 152
answers: 6

My company is about to purchase an automated testing tool. We are not a big company and can only afford a single license for the tool. We have an internal dispute over whether the tested OS should be the one most commonly used by our clients (XP) or the next-generation OS (Windows 7). All possible OSes are going to be tested anyway, but on a much smaller scale.

Most of our development is done with PowerBuilder, and all the dev machines run XP. Therefore, we do not use any new features offered by Vista or 7. This means that if our software runs on 7, it should have no problem running on XP. The other way around is a different story, and therefore has to be tested properly. OTOH, it makes sense for the main test environment to be the main production environment.

Given such limited resources, what OS would you focus your tests on?

+7  A: 

Definitely the main environment.

Why waste time testing on Windows 7 if your primary user base is on XP? Yes, once you've tested on XP, you should definitely test on Vista and 7 too, but if you only have the resources to automate the tests on one, you should focus on the primary platform.

Simon P Stevens
To me this is more of a test planning issue than an automation issue. Without an automated tool, you would certainly focus your manual test efforts on the environment that represents most of your user base. Automating the tests does not change that fact.
Tom E
I agree with Tom. And Simon. Put automation where most of your client base is.
yoosiba
+3  A: 

You should not assume that because your app runs fine on Windows 7, it will run on XP. There are countless changes, some potentially breaking, between the two OS versions. Ideally, you should test on every OS you support; this might not be possible, but the main thing is to guarantee it works on your main target.

1800 INFORMATION
My assumption is based on the fact that we use XP for development. Also, we are isolated from the Windows API by PB's VM, which predates Vista. Therefore, the app can't possibly use an OS feature that is supported by Vista or 7 and not by XP.
eran
There are still many differences that would be caught by testing - if your testers run on Windows 7 and hit a code path that would be buggy on XP, you would want to know about it, I guess
1800 INFORMATION
A: 

Windows releases are commonly unstable until the first couple of service packs. Jumping on now means you aren't just testing your software but you're also testing on an untested system. If there's a bug, how will you know if it's your program or the new OS?

Your customers will be on XP for some time to come (thanks to Vista it's still popular). Go with what you know.

Besides, you're probably saving 1-2 gigs of RAM that could be better used for your compiler and tools than on window candy and the usual bloat.

SpliFF
That's a good point. However, Windows 7 will probably be much more popular than Vista. If that happens, our current app's dev version will probably get to run on 7. We haven't had any issues with XP for quite some time, and OTOH we barely paid any attention to Vista. I just hate the idea of having a recently released version (within a few months) that will fail to run properly on a widespread OS (7) due to insufficient tests.
eran
Sorry, but calling Windows 7 an "untested system" is utter BS.
Johannes Passing
Fine, "unproven" then. How about "unreleased" or "unfinished"? Pedantic.
SpliFF
+2  A: 

Test what you support. After that, test what you will need to support in the near future, and last, let developers test cutting-edge/beta/RTM/alpha OSes.

For example, if you support XP, then it's the main OS for testing; if done properly, the resources needed to test that OS should be minimal. If your next release supports Vista, then bring Vista into the testing loop and make it a priority.

If you need Windows 7 to be supported, then let the developers do first runs on it; it will probably need some "coding" and will possibly break automated testing. Once it reaches an acceptable level of quality, bring it into the testing loop.

Max
+2  A: 

Time to dust off the old "PowerBuilder 1 and the Windows beta" story. Remember: I wasn't there, this is oral history, and I'm old enough that my memory is starting to embellish my own stories, let alone someone else's.

Powersoft got this major marketing score. They were partnering with Microsoft to release their new product, PowerBuilder, on the same day as the new release of Windows (3.0). Microsoft was trying to prove that this platform they built was suitable for custom line-of-business applications, not just graphics programs and Minesweeper. So, Powersoft got the last release candidate from Microsoft, and they beat on PowerBuilder thoroughly. They were satisfied. On launch day, businessmen were walking out of the computer store with a copy of Windows under one arm and PowerBuilder under the other.

Then the calls started coming in. PowerBuilder was seriously broken, and it was painfully obvious. Microsoft had changed something (presumably with the intention of fixing a bug) between the release candidate and the general availability version that brought PowerBuilder to its knees. Powersoft responded quickly with a fix, but there were many red faces for very many days after.

The moral of the story: Testing against beta means virtually nothing. Unless you're making post-October 22 plans, you shouldn't be planning to do anything more than cursory tests on Windows 7, because you'll need to do the testing all over again when the real Windows 7 ships.

Good luck,

Terry.

Terry
A: 

At the risk of sounding glib, test both. Bear with me.

Start by having an automated build process that can do a clean build of your software from source control (you have source control, right?). Add automated tests. This includes everything from low-level unit tests to integration tests to unattended functional tests using something like TestComplete or SmartyScript. Since you can now test your entire product (or at least key pieces) without any human interaction, you can run these tests as often as you like.
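
The driver that glues these steps together can be very small. Here's a rough sketch in Python; the checkout, build and test commands are placeholders for whatever your source control, PowerBuilder build and test-runner invocations actually are:

    # build_and_test.py - minimal sketch of an automated build-and-test driver.
    # Every command below is a placeholder; substitute your real checkout,
    # build and test-runner commands.
    import subprocess
    import sys

    def run(cmd):
        """Run one step, echo it, and abort the whole build if it fails."""
        print(">", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            sys.exit("Step failed: " + " ".join(cmd))

    run(["svn", "update", r"C:\build\myapp"])             # fresh copy from source control
    run([r"C:\build\scripts\build_myapp.cmd"])            # placeholder: your build script
    run([r"C:\build\scripts\run_unit_tests.cmd"])         # placeholder: unit/integration tests
    run([r"C:\build\scripts\run_functional_tests.cmd"])   # placeholder: unattended functional tests
    print("Build and all tests passed.")

Once something like this runs unattended, scheduling it nightly (or on every commit) is trivial.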

Create a clean virtual machine to represent a typical client PC. Your development box probably isn't a good example of this. As part of your automated build process, you can script the virtual machine (at least with VMware and VPC) to start from a known-good snapshot, install the latest build of your software, run your automated tests and publish the results.

That was the hard part. Now, simply create new virtual machines with any combination of operating systems/service packs/memory/etc. and repeat the automated tests on each of them.
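
To make the "repeat on each VM" part concrete, here's a rough sketch using VMware's vmrun command line (the .vmx paths, snapshot names, guest credentials and test runner are all made up for illustration; Virtual PC would need its own scripting):

    # vm_test_loop.py - sketch of running the same automated tests on several clean VMs.
    import subprocess

    VMS = [
        r"C:\VMs\ClientXP\ClientXP.vmx",      # hypothetical XP client image
        r"C:\VMs\ClientWin7\ClientWin7.vmx",  # hypothetical Windows 7 client image
    ]
    GUEST_LOGIN = ["-gu", "Administrator", "-gp", "password"]  # placeholder credentials

    def vmrun(*args):
        subprocess.run(["vmrun", "-T", "ws", *args], check=True)

    for vmx in VMS:
        vmrun("revertToSnapshot", vmx, "CleanInstall")    # back to a known-good snapshot
        vmrun("start", vmx, "nogui")
        vmrun(*GUEST_LOGIN, "copyFileFromHostToGuest", vmx,
              r"C:\build\output\MyAppSetup.exe", r"C:\Temp\MyAppSetup.exe")
        vmrun(*GUEST_LOGIN, "runProgramInGuest", vmx,
              r"C:\Temp\MyAppSetup.exe", "/silent")        # install the latest build
        vmrun(*GUEST_LOGIN, "runProgramInGuest", vmx,
              r"C:\Tests\run_functional_tests.cmd")        # placeholder test runner in the guest
        vmrun(*GUEST_LOGIN, "copyFileFromGuestToHost", vmx,
              r"C:\Tests\results.xml", vmx + ".results.xml")  # pull results back to the host
        vmrun("stop", vmx)

Adding another OS to the matrix then costs you one more snapshot and one more entry in the list.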

It sounds like you're adding an awful lot of process. What you're actually doing is taking all of the stuff that can (and therefore should) be automated off your hands, leaving you with more time for more interesting (how to sell it to yourself) and profitable (how to sell it to your boss) things.

Otherwise, just test against the OS most of your customers use and include a disclaimer.

Bruce McGee