Let me describe the scenario:

Our VB.NET web application talks to a third-party data provider via web services. It also saves data in a huge SQL Server database which has extensive business logic implemented in stored procedures and triggers. The web service provider also employs convoluted business logic which is quite dynamic.

The dilemma is that both the web service provider and the SQL Server are essentially live systems, and our company has no access to them outside of normal operations (web and SQL calls). Neither offers helpful support on their end.

Additionally, the web service has no 'test mode', and all calls are treated as live transactions. It is not possible to mimic the logic in either of these systems, nor is it possible to get a copy of the SQL Server database.

So, my question is:

How do you do any kind of testing (manual or automated) of our VB.NET application against these live systems?

Your suggestions would be appreciated. Thank you.

+2  A: 

Let's start by challenging your assumption "It is not possible to mimic the logic in either of these systems".

Of course you can mimic the behavior, if you use the right tools. Your system interacts with the database through a database object, and it should interact with the web service through a web service object. Either or both of these objects can be mocked with an appropriate mocking framework.

For any unit test call to the database object, you create a mock for that object, set its expectations and result for the call, and then hand the mock to your Code Under Test (CUT) instead of the 'real' database object. Your code calls the mock, which compares its arguments against the pre-set expectations and hands back the expected result (instead of actually communicating with the database). Your code then operates on the result. If the method arguments don't match the expectations, the mock object will throw an exception and the test fails.

You can read articles about mock objects and unit testing for .NET here:

Mind you, tools like NMock and Typemock make the job easier, but it's still hard -- you need to design your code to be tested, not just write the code first and pray that you can test it later.

You might want to talk to your web service provider -- every third-party web service that I've ever interacted with beyond simple queries has had a test mode (you use test credentials and a test server instead of the live server). Any transactions to the test server get cleaned up at the end of the day. If they don't offer a test service AND their service involves more than simple queries, then I'd strongly recommend finding another service provider.

There's one other strategy that you can take for working with a database, under certain circumstances: use transactions. When you open your database connection during a unit test, open a transaction. At the end of each unit test, rollback the transaction. It's a simple idea, but the devil is in the details, and there will be chaos if you screw up and accidentally commit the transaction. I don't recommend it, but I worked like this for 2 years on one project.
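A sketch of that transaction-rollback pattern, illustrated here with Python's `sqlite3` for a self-contained example (against SQL Server you'd do the same with an ADO.NET `SqlTransaction`); the `orders` table is hypothetical:

```python
import sqlite3

# In-memory stand-in database with manual transaction control.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

def run_test_in_transaction(conn):
    """Open a transaction, exercise the code under test, always roll back."""
    conn.execute("BEGIN")
    try:
        conn.execute("INSERT INTO orders (amount) VALUES (99.0)")
        count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        assert count == 1  # the write is visible inside the transaction
    finally:
        conn.execute("ROLLBACK")  # never commit: the database is left untouched

run_test_in_transaction(conn)
remaining = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
assert remaining == 0  # the rollback removed the test row
```

The `finally` block is the critical part: the rollback must run even when the test fails, which is exactly where "the devil is in the details" -- one forgotten rollback path and you've committed test data to a live database.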

Craig Trader
A: 

It sounds like you're building a line-of-business application: software that will be used in one place, by one company. In that scenario testing is not really that important. The best testers find around 20% of the issues in your code, so whether you test or not, you still have to be prepared to deal with issues in production.

So, I'd skip the testing, and instead focus on powerful scaffolding that allows you to develop the application against the live servers in a controlled way:

  1. Create really good logging. It's normal for up to half of your code to deal with exception handling and logging. If you can, log to a database.
  2. Create lots and lots of consistency checks. The more assumptions you verify, the better. When you do a write operation (like placing a web service order), double the number of checks. If a check fails, stop the application entirely. Don't limp on: stop and analyze what went wrong.
  3. Develop the application one part at a time, and test each part in close cooperation with the end users. First test a few items manually; then 10; then 100; and if you and the end users are happy, turn it on.

I know from experience this can work very well. The close cooperation with the end users allows you to understand their needs better, and generally add features that they really appreciate.

A word of warning: although users will think of you as someone who gets things done, Enterprise Architects will think of you as a hacker. Depending on the organization you're in, this can be either very healthy or very unhealthy :)

Andomar