I am currently working on a project that has been in production for over two years. The project makes extensive use of unit testing and scripted UI tests. Initially, unit tests covered the system framework, business rules and state transitions (or workflow). Test scripts are used for black-box testing. However, over time the cost of maintaining our full set of unit tests has become increasingly expensive, especially for those relating to state.

After a bit of investigation we have found that test scripts are more effective (that is, they provide better coverage) and cheaper to maintain than the unit tests relating to workflow. This isn't to say the value of unit tests has been completely negated, but it does raise the question of whether some classes of unit tests can be dropped in favour of test scripts.

Our project is run on an iterative incremental model.

+5  A: 

One of the answers on SO to the question of 'the limitations of Unit Testing' was that unit testing becomes convoluted when it's used to test anything to do with INTEGRATION rather than function. Connecting to and using external services (databases, SSH'ing to another server, etc.) and user interfaces were two of the examples used.

It's not that you CAN'T use unit testing for these things; it's just that the difficulty involved in covering all the bases makes this method of testing not worth it, except in cases where reliability is paramount.

I use "script tests" for all my custom JavaScript UI code (template engine, effects and animations, etc.) and I find it quick and reliable if done right.

David McLaughlin
A: 

Two different things

  • Unit Tests - Developer - verify that the code is right
  • Acceptance Tests - Customer/QA/BA - verify that the right code is developed

The two categories should be distinct, and both play an equally important role. Dropping one doesn't bode well. Test scripts, as you have mentioned, fall into the second category. I would recommend something like FIT/FitNesse for this purpose. If that is not feasible, then test scripts / record-replay style tools. But don't throw away good unit tests. What do you mean by 'cost of maintaining tests has become expensive'?
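
To make the distinction concrete, an acceptance test in FIT/FitNesse is usually just a wiki table backed by a thin fixture class. A rough sketch (the business rule and all class names below are invented, not taken from your project):

    // Illustrative FIT column fixture: the wiki table supplies the input columns
    // (amount, customerType) and checks the calculated output column (discount()).
    import fit.ColumnFixture;

    public class DiscountRuleFixture extends ColumnFixture {
        // Input columns are bound by FIT to public fields with matching names
        public double amount;
        public String customerType;

        // Output column: FIT calls this method and compares the result
        // against the expected value in the table cell
        public double discount() {
            return new DiscountRule().discountFor(customerType, amount);
        }

        // Stand-in for the real business-rule class under test
        static class DiscountRule {
            double discountFor(String type, double amount) {
                return "gold".equals(type) && amount > 100 ? amount * 0.10 : 0.0;
            }
        }
    }

The table itself stays readable for the customer/BA, while the fixture remains a few lines of glue code.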

Gishu
In general unit tests must change to reflect new business requirements, but in some cases a change cuts across numerous aspects of our system and the cost of updating all of the affected unit tests is very high. In these cases test scripts have proven cheaper to modify than unit tests.
Richard Dorman
You should be very scared of changes that have such an impact on unit tests. The unit tests are probably paying for themselves right now, by helping you spot and understand the changes you have introduced.
slim
This may be a bit of thread necromancy... but just because something is not a "unit test" doesn't mean that it's an acceptance test. There are many layers of verifying that the code is right, from the unit layer all the way up to the entire system fitting together. I use external test scripts at my place of work, and they are *definitely* only verifying that the code is right, and not that the right code is developed.
Tom
+3  A: 

You normally use unit tests to do exactly this: test units. More precisely, to test whether a unit complies with its interface specification/contract. If a unit test fails, you know exactly where the problem is: within the tested unit. This makes it easier to debug, especially since the result of a unit test can be processed automatically. Automatic regression tests come to mind here.

You start using scripted UI tests when you either want to leave the scope of unit tests or want to test things that cannot be tested well with unit tests, for example code that interfaces with lots of external APIs that you cannot mock. You can still provoke certain errors, but tracking down exactly where in the code the failure is buried becomes much harder.
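
For illustration only, a minimal sketch of that kind of isolated test (made-up names, assuming JUnit 4): the unit's only external collaborator is replaced by a hand-rolled stub, so a failure can only come from the unit itself.

    // Minimal JUnit 4 sketch: the unit is tested against its contract in isolation.
    // WorkflowEngine and AuditLog are illustrative names, not from the question.
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class WorkflowEngineTest {

        // Contract the unit depends on; in production this might wrap an external service.
        interface AuditLog {
            void record(String event);
        }

        // The unit under test: a tiny state machine that reports each transition.
        static class WorkflowEngine {
            private final AuditLog audit;
            private String state = "DRAFT";

            WorkflowEngine(AuditLog audit) { this.audit = audit; }

            String approve() {
                if (!"DRAFT".equals(state)) {
                    throw new IllegalStateException("cannot approve from " + state);
                }
                state = "APPROVED";
                audit.record("approved");
                return state;
            }
        }

        @Test
        public void approvingADraftMovesItToApproved() {
            // Hand-rolled stub keeps the test independent of any real audit service.
            WorkflowEngine engine = new WorkflowEngine(event -> { /* ignore */ });
            assertEquals("APPROVED", engine.approve());
        }
    }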

xmjx
This problem is more applicable to large systems where the number of existing unit tests is high. In some cases new requirements have a higher impact on unit tests than they do on test scripts. In an ideal world I would have all the unit tests updated, but in practice this is costly.
Richard Dorman
+1  A: 

Actually, there are four levels of testing; three of them might involve scripts, the first one does not.

  • unit testing: test a class or method in complete isolation from the rest of the system
  • assembly testing: test a scenario within the system in complete isolation from components external to the system (that is, from a different functional domain)
  • integration testing: test the system including inputs coming from external parts and outputs going to other external systems (that is, from other functional domains)
  • acceptance testing: final validation, as Gishu says, to check that the right code (that is, the right features) is there

Example of a functional domain: a Service Layer Bus (SLB), that is, all projects (usually encapsulating some core referential databases) able to expose their services on a bus.
You may do:

  • unit tests for your publisher class
  • assembly tests for your publisher mechanism in collaboration with other components of your SLB
  • integration tests for your SLB service and other components developed outside the SLB that are clients of your services
  • acceptance tests for the whole system

As noted, the last three kinds of tests can involve heavy scripting and can quickly cover more of your code. Depending on the sheer number of classes/methods to unit-test, a good assembly test can be a better approach.
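
As a rough sketch of how the first two levels differ in practice (all class names are hypothetical, assuming JUnit 4): an assembly-level test exercises the publisher together with a real in-process collaborator, while anything external to the SLB is still faked.

    // Illustrative assembly-level test: the publisher runs with a real serializer
    // (another component of the same SLB), but the external bus transport is faked.
    import static org.junit.Assert.assertEquals;
    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Test;

    public class PublisherAssemblyTest {

        interface BusTransport {
            void send(String payload);
        }

        static class XmlSerializer {
            String serialize(String event) { return "<event>" + event + "</event>"; }
        }

        static class Publisher {
            private final XmlSerializer serializer;
            private final BusTransport transport;

            Publisher(XmlSerializer serializer, BusTransport transport) {
                this.serializer = serializer;
                this.transport = transport;
            }

            void publish(String event) {
                transport.send(serializer.serialize(event));
            }
        }

        @Test
        public void publishesSerializedEventToTheBus() {
            List<String> sent = new ArrayList<String>();
            // Real serializer + fake transport: more than one unit, but no external system.
            Publisher publisher = new Publisher(new XmlSerializer(), sent::add);
            publisher.publish("order-created");
            assertEquals("<event>order-created</event>", sent.get(0));
        }
    }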

VonC
+8  A: 

In more ways than one I have experienced the same kind of pain that you have with unit tests, especially in projects where the team members are not at all enthusiastic about unit testing and many of them simply ignore or comment out tests to cheat source control, save time, etc. One former colleague even coined the term "Deadline Driven Development" for this.

In my opinion, when facing this kind of challenge, the following are some useful guidelines for unit testing:

  • Discard obsolete tests - Sometimes it is pointless to try to update hundreds or thousands of lines of tests that are, in essence, inaccurate or irrelevant. Discard such tests immediately. Do not "Ignore" them, do not comment them out. Delete them completely.
  • Write tests for new functionality - Any new functionality still needs to be unit tested and written in a unit-testable manner.
  • Write tests for bug fixes - When regression testing the application, it is worth ensuring that each bug fix has a unit test proving that the bug has actually been fixed (a sketch follows this list).
  • To hell with code coverage - This might earn a few downvotes, I'm sure, but there is a fine line between ensuring functionality and using tests as an excuse to delay work. The focus should be on ensuring core functionality rather than chasing an arbitrary code coverage percentage.
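
A minimal sketch of the bug-fix guideline above (the defect number and DateRange class are invented for illustration, assuming JUnit 4); naming the test after the defect means a future regression points straight back at it:

    // Regression test pinned to a (hypothetical) fixed bug.
    import static org.junit.Assert.assertTrue;
    import java.time.LocalDate;
    import org.junit.Test;

    public class DateRangeRegressionTest {

        // Stand-in for the production class that contained the bug.
        static class DateRange {
            private final LocalDate start;
            private final LocalDate end;

            DateRange(LocalDate start, LocalDate end) {
                this.start = start;
                this.end = end;
            }

            // Bug #1234: the original code excluded the boundary dates.
            boolean contains(LocalDate day) {
                return !day.isBefore(start) && !day.isAfter(end);
            }
        }

        @Test
        public void bug1234_rangeIncludesItsOwnBoundaries() {
            DateRange range = new DateRange(LocalDate.of(2009, 1, 1), LocalDate.of(2009, 1, 31));
            assertTrue(range.contains(LocalDate.of(2009, 1, 1)));
            assertTrue(range.contains(LocalDate.of(2009, 1, 31)));
        }
    }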

That being said, I still think that unit testing should not be discarded completely. Test scripts and unit tests have their own purposes, but a balance has to be struck between an over-zealous attempt to maintain TDD and the realities of enterprise application development.

Jon Limjap