views:

84

answers:

3

I am the newest member of a project that is an amalgam of various Applications written in various Programming Languages on both Unix and Windows operating systems. I get the 'honor' of figuring out how to implement a nightly Regression Build/Test for all these various Apps.

Unfortunately, these Apps were NOT built with TDD principles and do not have any significant Unit Testing frameworks. My instinct is screaming at me to try and avoid re-inventing the wheel and to "try" to find some way to have as much code reuse as possible for this Nightly Test Architecture.

How would someone write Test Cases that share as much code as possible when faced with multiple languages across multiple operating systems, compounded by the fact that not all the Apps are Web Services or even Web Apps?

My only conclusion is that the Test Drivers and Test Cases must be specific to each App, and I cannot have any significant code reuse.

Any suggestions or offers to provide a swift Kick In The Head for asking this Question will be welcomed and appreciated :)

+1  A: 

This is a tough one I have seen before. I think you are going to have to come to a decision on this point eventually, but to begin with, a slightly different approach might help. It sounds like these apps have been around a while, so there must be one or more bug databases kicking around that you can survey to find the most frequent types of bugs. Apps generally have an aspect that is most prone to defects, and that is where I would start with some test scripts. You are essentially regressing the most productive bug reports and stitching the scripts together any way you can.

Once you know this app, and you will know it very soon after doing the above, you can come up with a grander, and easier to maintain, harness or app to test with. Hope this helps.

Andrew Cowenhoven
A: 

Just my 2 cents worth...

In order to implement wholesale developer testing relatively successfully, as far as I understand it, you need the whole development team to be involved in writing test code.

Perhaps if you can facilitate a common interface to the various apps and services, that could give you some headway.

Ola Eldøy
A: 

It's hard to tell how feasible it would be in your case... but it would be great if you could come up with a declarative mechanism of describing your test cases, perhaps using text files or XML to detail the parameters, expected outputs, expected return codes, etc. of the various cases. This way, if these test cases are valid across multiple OSes/environments, you could implement the code to execute the test cases once for each environment but be able to reuse all test cases.

Of course, your mileage may vary depending on the complexity of the interfaces / scripts / apps you need to test, and how easy it would be to express the test cases with data.
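To make the idea concrete, here is a minimal sketch of such a data-driven harness in Python. The JSON schema (`name`, `command`, `expected_returncode`, `expected_stdout`) is entirely hypothetical, invented for this example; the point is that the case files are plain data, so the same files could be consumed by equivalent runners on each OS or even in other languages.

```python
import json
import subprocess

# Hypothetical declarative test-case file: each case names a command to run
# and the return code / stdout it should produce. In practice this would be
# loaded from a .json or .xml file checked in next to the app under test.
CASES = json.loads("""
[
  {"name": "echo-works",
   "command": ["echo", "hello"],
   "expected_returncode": 0,
   "expected_stdout": "hello\\n"}
]
""")

def run_case(case):
    """Execute one declarative case; return True if it matches expectations."""
    result = subprocess.run(case["command"], capture_output=True, text=True)
    return (result.returncode == case.get("expected_returncode", 0)
            and result.stdout == case.get("expected_stdout", result.stdout))

def run_all(cases):
    """Run every case and print a PASS/FAIL line per case."""
    results = {case["name"]: run_case(case) for case in cases}
    for name, ok in results.items():
        print(("PASS" if ok else "FAIL") + ": " + name)
    return results

if __name__ == "__main__":
    run_all(CASES)
```

The runner itself stays tiny and app-agnostic; all the per-app knowledge lives in the data files, which is where the reuse comes from.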

As for coming up with test cases, I've previously been responsible for writing tests for old, "legacy" code that wasn't authored with testability in mind. I like Andrew's suggestion: using previous bug/regression data is a good way to find which tests would give you the most bang for your buck. It would also be a good idea to introduce a new engineering process on your team: for each new bug/issue/regression fixed from now on, add a test case that would have caught the issue. This will help you build up a set of test cases that are provably relevant.

Reuben