views:

573

answers:

12

I am working on JUnitMax, a project to increase the utility of automated tests. I'm looking for novel, unexpected ways tests prove valuable. For example, I use tests in responding to defects--one at the system level that fails, reproducing the defect, and another at the unit level so I know what code to change (perhaps derived using the Saff Squeeze). What other uses have you found for tests?

+1  A: 

I usually run my JSPs and other HTML-generating code in unit tests and compare the results against "known good" HTML files. To do that, I've extended JUnit with a compareOutput() method (a rough sketch follows the list) which:

  • Folds CR, CR/LF, LF -> LF
  • Strips trailing whitespace
  • Figures the filename from the unit test name
  • Loads the file and compares the HTML against the contents of the file
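
A minimal sketch of what such a helper could look like, assuming a JUnit 3 TestCase and a tests/<ClassName>/<testName>.html layout (the details are illustrative assumptions, not the actual implementation):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import junit.framework.TestCase;

public abstract class HtmlComparingTestCase extends TestCase {

    /** Compares actual HTML output against the "known good" file named after the test. */
    protected void compareOutput(String actualHtml) throws IOException {
        // Figure the reference filename from the test class and test method name
        Path reference = Paths.get("tests", getClass().getSimpleName(), getName() + ".html");
        String expected = new String(Files.readAllBytes(reference), StandardCharsets.UTF_8);
        assertEquals(normalize(expected), normalize(actualHtml));
    }

    /** Folds CR and CR/LF into LF and strips trailing whitespace from every line. */
    private static String normalize(String html) {
        String unified = html.replace("\r\n", "\n").replace('\r', '\n');
        StringBuilder out = new StringBuilder();
        for (String line : unified.split("\n", -1)) {
            out.append(line.replaceAll("\\s+$", "")).append('\n');
        }
        return out.toString();
    }
}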

Following this general direction, what I'm still missing is more ways to compare test results, plus maybe an "unimplementedFeature()" method which does nothing unless the system property "failForUnimplementedFeatures" (you get the idea) is set to true. This would allow me to write the unit tests for all the features I plan ahead of time, and then set the property to get a list of missing features after a green run.
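
Added to the same base class sketched above, the unimplementedFeature() helper could be as small as this (the property name follows the text above; the body itself is an assumption):

// Does nothing unless the JVM is started with -DfailForUnimplementedFeatures=true,
// in which case every planned-but-unimplemented feature shows up as a failure.
protected void unimplementedFeature() {
    if (Boolean.getBoolean("failForUnimplementedFeatures")) {
        fail("Feature not implemented yet: " + getName());
    }
}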

Aaron Digulla
+3  A: 

Last year I planned to use them for a research lab study: I had different unit tests to tell me what parts of the task the user had successfully completed, and was going to execute them automatically every few seconds in the background so I would have an "objective" measure of when the user completed each part. I ended up changing the focus of the study to use only one unit test. Still, I feel that's a useful thing.

Many schools use JUnit tests in a similar manner for checking school assignments; the only problem is that unless one writes code directly against the framework, one has to run the tests on each student project separately by hand and add up the numbers into a score. So generally I would say that some way of giving "values" to unit tests so you can establish a numeric score (e.g., making certain tests more important than others) would be useful.
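
As one illustration of the weighting idea (not from the answer itself), JUnit can be run programmatically and each test class given a point value; the class names and weights below are made up:

import java.util.LinkedHashMap;
import java.util.Map;

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class AssignmentGrader {

    public static void main(String[] args) {
        // Hypothetical per-assignment test classes and their point values
        Map<Class<?>, Integer> weights = new LinkedHashMap<Class<?>, Integer>();
        weights.put(BasicFeatureTest.class, 10);
        weights.put(AdvancedFeatureTest.class, 30);

        int score = 0;
        for (Map.Entry<Class<?>, Integer> entry : weights.entrySet()) {
            Result result = JUnitCore.runClasses(entry.getKey()); // run against the student's code
            if (result.wasSuccessful()) {
                score += entry.getValue();
            }
        }
        System.out.println("Score: " + score);
    }

    // Stand-ins for the per-assignment test classes; real ones would call the student's code.
    public static class BasicFeatureTest {
        @org.junit.Test public void works() { /* would exercise a basic feature */ }
    }
    public static class AdvancedFeatureTest {
        @org.junit.Test public void works() { /* would exercise an advanced feature */ }
    }
}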

Personally, I would also like the ability to specify dependencies between tests (I use JUnit 3; maybe it's in the current version), and to have the IDE show tests based on these dependencies, so that if a core test fails I wouldn't immediately have to see the dependent tests that failed unless I actually drilled down.
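
JUnit doesn't support this directly; purely for illustration, here is how the same idea looks in TestNG (a different framework), where a dependent test is skipped rather than reported as a failure when the test it depends on fails:

import org.testng.annotations.Test;

public class DependentTests {

    @Test
    public void coreConnectionWorks() {
        // if this fails, the dependent test below is skipped, not reported as a failure
    }

    @Test(dependsOnMethods = "coreConnectionWorks")
    public void queryReturnsRows() {
        // only runs when coreConnectionWorks passed
    }
}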

Uri
For test dependencies, take a look at http://smallwiki.unibe.ch/jexample If a "core" test fails, we skip all dependents. Plus, you can pass the fixture from test to test.
Adrian
+4  A: 

As a QAer, my automated tests are generally more at the integration/system level than at the unit test level.

One major benefit that I have come to expect, but which often surprises the developers, is that I am able to detect unexpected changes to the system.

On more than one occasion, I've had to go to the developers and ask who changed something and why. They were often surprised that I knew anything had changed...

Joe Strazzere
+2  A: 

Once I created how-to documents from my functional tests. They looked something like this (a sketch of the underlying test follows the list):

  • create user
    • click link with text "create user"
    • populate text field "e-mail"
    • click button "save"
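
A hedged sketch of the kind of functional test those steps map to; the tool (Selenium WebDriver), URL, and locators are illustrative assumptions rather than the actual setup:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CreateUserTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://example.com/admin");                           // assumed URL
            driver.findElement(By.linkText("create user")).click();           // click link with text "create user"
            driver.findElement(By.name("e-mail")).sendKeys("a@example.com");  // populate text field "e-mail"
            driver.findElement(By.xpath("//button[text()='save']")).click();  // click button "save"
        } finally {
            driver.quit();
        }
    }
}

Each step carries a comment that matches a line of the how-to, which is what makes generating the document from the test straightforward.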
Željko Filipin
+1  A: 

About 9 months prior to release we had no test department. I asked for a test department, I asked again and again and again. Still no test department.

Eventually we created "Wreck-Gar" the one man test department. It was basically just a while(true) script that pressed random buttons (fuzz testing). So we had multiple machines running this random script and found a lot of problems very quickly.
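
A minimal sketch of that kind of "press random buttons forever" script using java.awt.Robot (the original Wreck-Gar script isn't shown, so everything here is an assumption):

import java.awt.Dimension;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;
import java.util.Random;

public class WreckGar {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        Random random = new Random();
        Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();
        while (true) {
            // Click somewhere random on the screen
            robot.mouseMove(random.nextInt(screen.width), random.nextInt(screen.height));
            robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
            robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
            // Press a random letter key
            int key = KeyEvent.VK_A + random.nextInt(26);
            robot.keyPress(key);
            robot.keyRelease(key);
            robot.delay(200);
        }
    }
}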

Obviously it's no replacement for a real QA department, but we (finally) have one now, and Wreck-Gar supplements their efforts (the QA department also writes its own scripts).

Quibblesome
Hmm, that would usually be called "monkey testing", rather than "fuzz testing". You found a lot of problems quickly with this tool?
Joe Strazzere
Yes, in the first hour or so of running it we had about six new bug reports, and it continued to provide bugs here and there. But this is what happens if you develop for 1 year+ without any QA. The unit tests may pass, but certain combinations of features may fail.
Quibblesome
+2  A: 

CodeGenie does "test first, then search the interwebz". CodeGenie uses your test cases as the query for a source-code search. It tries to find and slice parts of other people's code that might satisfy your tests.

See http://sourcerer.ics.uci.edu/codegenie/

Adrian
A: 

I once took a college course in Computer Graphics. We did many labs involving writing related algorithms (we started with line-drawing algorithms and worked all the way up to scene-tree rendering and animation.)

For many of the labs, the professor was able to provide unit tests that introspected our algorithm results - tests passed if algorithms were implemented correctly and failed otherwise.

Of course, this doesn't really solve any problems in the case of dishonest students (one could always write stub code that returns what the tests expect... a quick decompile enables that even if source isn't provided), but for honest students, it gave instant feedback after each code change, as the professor provided high-quality failure messages in the tests.

Jared
+2  A: 

I have some unit tests whose sole purpose is to serve as a tutorial. The nice advantage is that when the API changes, this documentation always stays up to date.
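
For example, a tutorial-style test might read like this (the ShoppingCart API is hypothetical and inlined only to make the sketch self-contained):

import junit.framework.TestCase;

public class ShoppingCartTutorialTest extends TestCase {

    // Reads like documentation: this is how the API is meant to be used,
    // and it breaks as soon as the API changes.
    public void testAddingItemsAndCheckingOut() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(new Item("book", 12.50));
        cart.add(new Item("pen", 1.20));
        assertEquals(13.70, cart.total(), 0.001);
    }

    // Hypothetical API under test, inlined so the sketch compiles on its own.
    static class Item {
        final String name;
        final double price;
        Item(String name, double price) { this.name = name; this.price = price; }
    }

    static class ShoppingCart {
        private double total;
        void add(Item item) { total += item.price; }
        double total() { return total; }
    }
}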

martinus
A: 

In a JUnit test method, you can automatically generate a filename based on the test name. For instance:

// getName() is the test method's name, getSimpleName() the test class's name (JUnit 3 TestCase)
File output = new File("tests/" + getClass().getSimpleName() + "/" + getName() + ".html");
File reference = new File("tests/" + getClass().getSimpleName() + "/" + getName() + ".html.ref");
Wouter Lievens
+2  A: 

If you have some high-level smoke tests that verify basic system functionality, they can be really useful as production monitors - if one of the tests fails, you can have the monitoring system send an alert to your IT guys. This might be tricky to hook up to your monitoring system depending on what automation tool you're using, but if you can do it, it's much more efficient than writing a whole separate set of monitoring tests when your app goes into production.
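
One simple shape for that hook-up (the details below are assumptions, not from the answer): wrap the existing smoke suite in a small command whose exit code the monitoring system checks; a JMX or SNMP wrapper would work the same way.

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class SmokeTestMonitorMain {

    public static void main(String[] args) {
        // Reuse the existing smoke suite instead of writing separate monitoring checks.
        Result result = JUnitCore.runClasses(SmokeTests.class);
        System.out.println(result.getRunCount() + " checks, "
                + result.getFailureCount() + " failures");
        // Exit code 0 means healthy; non-zero lets the monitoring system raise the alert.
        System.exit(result.wasSuccessful() ? 0 : 1);
    }

    // Placeholder smoke suite; the real one would exercise basic system functionality
    // against the live environment.
    public static class SmokeTests {
        @org.junit.Test
        public void homePageResponds() {
            // e.g. fetch the production home page and assert on the status code
        }
    }
}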

gareth_bowles
Agree 100%. Any system that allows the same tests that are run at launch time to be called mid-run easily, via known interfaces (JMX, SNMP, XML-RPC, etc.), by out-of-the-box monitoring suites is going to be a big plus.
David Berger
A: 
  1. One of our tests checks all XML files that create test data for validity against their DTDs. This functions as a meta-test: it checks what other tests are using.

  2. Another test checks the copyright message for expiration and is supposed to fail every year, reminding us to update it (a sketch follows this list).

  3. Yet another test checks the license for expiration, reminding us to update the license too.

  4. Another test checks clock synchronization between the client (running the tests) and the server (running the SUD).
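
A rough sketch of the copyright check from item 2 (the constant and test name are assumptions):

import java.util.Calendar;

import junit.framework.TestCase;

public class CopyrightTest extends TestCase {

    // The year currently printed in the copyright notice
    private static final int COPYRIGHT_YEAR = 2009;

    public void testCopyrightYearIsCurrent() {
        int currentYear = Calendar.getInstance().get(Calendar.YEAR);
        // Fails every January, reminding us to update the notice (and this constant)
        assertEquals("Update the copyright notice", COPYRIGHT_YEAR, currentYear);
    }
}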

grigory
A: 

One thing I've used automated testing for is to make up for a (perceived) lack in the language (Java). I had a class which contained a list of other objects. The class had a getter to expose the contents of the list.

I didn't want to return the internal list from the getter, because then the list could be manipulated directly, so the getter had to return an unmodifiable List. This behaviour had to hold for this class and all of its subclasses, and for all getters in the subclasses.

So the unit test found all subclasses of the class, created an instance of each, called all of the getters that returned lists, and checked that the returned list was unmodifiable.

In Java, you can't express in the type system that a list is unmodifiable, the way you can in Scala. I know there are several other ways to achieve this.
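
A rough reconstruction of the idea (the Order class is invented, and the automatic subclass discovery is replaced by a hard-coded list):

import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

import junit.framework.TestCase;

public class UnmodifiableGettersTest extends TestCase {

    public void testListGettersReturnUnmodifiableLists() throws Exception {
        // The original found subclasses automatically; here they are listed by hand.
        Class<?>[] classesToCheck = { Order.class };

        for (Class<?> clazz : classesToCheck) {
            Object instance = clazz.newInstance();
            for (Method method : clazz.getMethods()) {
                boolean isListGetter = method.getParameterTypes().length == 0
                        && List.class.isAssignableFrom(method.getReturnType())
                        && method.getName().startsWith("get");
                if (!isListGetter) {
                    continue;
                }
                List<?> list = (List<?>) method.invoke(instance);
                try {
                    list.add(null);
                    fail(clazz.getName() + "." + method.getName() + " returned a modifiable list");
                } catch (UnsupportedOperationException expected) {
                    // the getter behaves as required
                }
            }
        }
    }

    // Hypothetical domain class whose getter exposes an unmodifiable view.
    public static class Order {
        private final List<String> items = Arrays.asList("line 1", "line 2");

        public List<String> getItems() {
            return Collections.unmodifiableList(items);
        }
    }
}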

--

Another simple use of unit tests (for integration this time) is to test the configuration and build of a system. We had a ClickOnce VB.NET application deployed daily to our integration test environment. We had a set of tests which checked the manifest for the application, and checked that all of the files specified in the manifest were there, had the correct size, etc. This test was part of our deployment process.

MatthieuF