Our software vendor is currently working on a project to migrate our enterprise-scale laboratory system from Tru64 UNIX to Red Hat. This obviously means recompiling with a new compiler and performing lots of testing.

While the vendor will do their own testing, we also need to do acceptance testing, since we don't entirely trust that the vendor's testing will be as thorough as we hope. So I have been tasked with thinking of things that will need to be tested. This is a laboratory system, so things such as calculations and rounding (and general maths) need to be tested.

But I thought I would ask the SO community for advice on what to test, or for past experiences with this sort of migration.

A: 

Check that it works on both 32-bit and 64-bit CPUs, that it handles spaces in filenames, and that users don't need admin rights to run it or change its configuration.
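A minimal sketch of the word-size part of that check, assuming a Python acceptance script (everything here is illustrative rather than specific to the product):

    import struct
    import sys

    # Tru64 ran on 64-bit Alpha (LP64), so a port to 32-bit Red Hat is
    # exactly where pointer- and long-size assumptions tend to break.
    print(f"pointer size: {struct.calcsize('P')} bytes")  # 8 on 64-bit builds
    print(f"C long size:  {struct.calcsize('l')} bytes")  # 4 on 32-bit builds
    print(f"word size:    {'64-bit' if sys.maxsize > 2**32 else '32-bit'}")

Running this on both the build machine and each deployment target catches accidental 32/64-bit mismatches early.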

Going from one Unix to another isn't a huge leap.

Martin Beckett
A: 

If you can come up with a suite of regression tests, you can run those scenarios with an automated tool against both the original and ported systems to make sure they match. The QA and UAT tests that you currently run against the system would probably be a good starting point, and then you could add any critical edge cases (such as those that exercise the math in detail) as needed. Paul's suggestion above about compiler issues would also help you derive some good edge cases; I'd recommend looking at that from the perspective of both the Tru64 and RHEL compilers.
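As a rough illustration, a minimal harness along those lines might replay saved scenarios against the ported build and diff the output against baselines captured on Tru64 (the paths and file layout here are purely hypothetical):

    import subprocess
    from pathlib import Path

    # Hypothetical layout: each scenario has an input file plus a baseline
    # output captured on the Tru64 system before the migration.
    SCENARIOS = Path("scenarios")
    NEW_BINARY = "./lab_system"  # hypothetical path to the ported build

    failures = []
    for case in sorted(SCENARIOS.glob("*.in")):
        expected = case.with_suffix(".expected").read_text()
        result = subprocess.run([NEW_BINARY, str(case)],
                                capture_output=True, text=True, check=True)
        if result.stdout != expected:
            failures.append(case.name)

    print(f"{len(failures)} scenario(s) diverged: {failures}")

For numeric output you would likely replace the exact string comparison with a tolerance check, since floating-point results can legitimately differ between the two platforms.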

A fair amount of my recent experience is with JMeter, which has a number of assertions, pre-conditions, and post-conditions that can be evaluated to ensure compliance. A number of the tools in this space would also allow you to do load testing, if appropriate.

If your system doesn't have a remotely accessible interface (web-based or socket-based, for example), you could potentially do the same thing with locally scripted tools.

mlschechter
A: 

Thirteen or fourteen years ago, I couldn't move an Informix database from SCO OpenServer to Linux because SCO used 16-bit inode numbers, Linux used 32-bit inode numbers, and Linux's 'personality' support was nowhere near as advanced as it is today. So I can appreciate your skepticism.

If you can re-run old experiments with saved data and saved outcomes, that would be my preferred place to start. Even for simple datatypes, the precision or range of operations can differ widely between compilers and platforms, so small differences in output are to be expected and exact matches may not be realistic. The results should still be close enough not to influence the larger outcomes of your test runs.
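To make "close enough" concrete, here is one hedged way to express it in a comparison script (the tolerance value is an assumption you would tune to your domain):

    import math

    def outcomes_match(saved: float, rerun: float,
                       rel_tol: float = 1e-9) -> bool:
        """Compare a saved Tru64 outcome against the Red Hat re-run.

        rel_tol = 1e-9 is purely illustrative; choose it based on how
        many significant digits your lab calculations actually need."""
        return math.isclose(saved, rerun, rel_tol=rel_tol)

    print(outcomes_match(1.0, 1.0000000000001))  # True: last-digit noise
    print(outcomes_match(1.0, 1.001))            # False: a real divergence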

Rather than inventing test cases from scratch, use the work you've already done with the system as your test cases. (As an aside, that's also a good way to build test cases for software development.)

sarnold
A: 

Watch out for differences in precision between the standard math library functions: they are not the same on different systems. If you need bit-for-bit consistent calculations across platforms, you will need to replace them. Look into crlibm and/or fdlibm.
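One way to see this for yourself is to capture the exact bit patterns of a few libm results on each platform and diff them. A small sketch (Python calls through to the platform's libm, and the probe values below are arbitrary):

    import math
    import struct

    def double_bits(x: float) -> str:
        """Hex bit pattern of an IEEE 754 double; exposes last-ulp
        differences that ordinary decimal printing would hide."""
        return struct.pack(">d", x).hex()

    # Capture this table on the Tru64 box, then diff it against the
    # Red Hat box; any mismatch is a libm precision difference.
    for name, fn, arg in [("sin", math.sin, 1e10),
                          ("exp", math.exp, 700.0),
                          ("log", math.log, 1.0000001)]:
        print(f"{name}({arg!r}) = {double_bits(fn(arg))}")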

arsenm
A: 

You will need to test everything. Whatever you tested in your original environment, you will need to test in your new environment.

Eventually, you'll gain confidence that most of your tests will simply never fail in the new environment. There will surely be a set of tests that will always succeed as long as the old and new environments are both Unix-based systems. That's fine: that's a set of tests you won't need to run constantly. I'd still keep them around to run once per release of the new OS, or per release of your product, just to be safe.

John Saunders