Unit tests are often deployed with software releases to validate the install: do the install, run the tests, and if they pass, the install is good.

I'm about to embark on a project that will involve delivering prototype software library releases to customers. The unit tests will be delivered as part of each release, and in addition to using them to validate the install, I plan to use the unit tests that exercise the API as a "contract" for how the release should be used. If the customer uses the release in the same way the unit tests do, great; if they use it some other way, all bets are off.
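To make that concrete, here's a rough sketch of the kind of API-level test I have in mind (the Codec library and all of its methods are made up for illustration):

    import static org.junit.Assert.*;
    import org.junit.Test;

    // Hypothetical API surface: a codec library with encode/decode entry points.
    // The test doubles as a usage contract: it shows the supported call sequence
    // (construct, encode, decode) and the guarantees we stand behind.
    public class CodecContractTest {

        @Test
        public void roundTripPreservesPayload() {
            Codec codec = Codec.withDefaults();     // supported construction path
            byte[] payload = "hello".getBytes();

            byte[] encoded = codec.encode(payload); // contract: encode never returns null
            assertNotNull(encoded);

            byte[] decoded = codec.decode(encoded); // contract: decode inverts encode
            assertArrayEquals(payload, decoded);
        }

        @Test(expected = IllegalArgumentException.class)
        public void rejectsNullInput() {
            Codec.withDefaults().encode(null);      // contract: nulls are rejected, not swallowed
        }
    }

If a customer's code makes the same calls, in the same order, as tests like these, they're inside the contract; anything else is unsupported.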

Has anybody tried this before? Any thoughts on whether this is a good/bad idea?

Edit: To highlight a good point raised by ChrisA and Dan in the replies below, the "unit tests that test the API" are better called integration tests: their intent is to exercise the API and the software to demonstrate the functionality of the software from a customer's perspective.

+11  A: 

Sounds like a good idea to me. I (we all?) routinely use unit tests internally to do just that. In using my unit tests to validate that I haven't broken anything, I'm also implicitly verifying that my API contract hasn't changed. Deploying them in the fashion you're describing seems like a natural extension of that usage.

JMD
+5  A: 

Agile methodologies treat tests as specifications, so this is a very good idea.

mouviciel
+1  A: 

It's actually a pretty good idea, and extremely pleasant as an API user.

This technique can also be used the other way round: when you're using a "legacy" API, you can write unit tests to document the way you think the API behaves and to verify that it actually behaves that way.
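This is sometimes called a characterization test. As a rough sketch (the LegacyDateParser and its behaviour here are hypothetical):

    import static org.junit.Assert.*;
    import org.junit.Test;

    // Characterization test: pins down what the legacy API actually does today,
    // so any change in behaviour shows up as a failing test.
    public class LegacyDateParserTest {

        @Test
        public void twoDigitYearsAreTreatedAs19xx() {
            // We *believe* the legacy parser reads "47" as 1947; this test
            // records that assumption and fails loudly if we turn out to be wrong.
            assertEquals(1947, LegacyDateParser.yearOf("01/01/47"));
        }
    }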

Axelle Ziegler
+5  A: 

I fully expect to be flamed for this, but I don't understand how a set of unit tests proves anything at all about the kind of things a customer cares about, namely whether the application meets his business requirements.

Here's an example: I've just finished converting a chunk of code to fix a big mistake we made. It was a classic case of over-engineering, and the changes have touched about a dozen Windows Forms and about as many classes.

It's taken me a couple of days, it's now a lot simpler, we gained some features for free, and we lost a ton of code that did stuff that we now know we never really needed.

Every single one of those forms worked perfectly before. The public methods did exactly what they needed to do, and the underlying data accesses were just fine.

So any unit test would have passed.

Except, sadly, they did the wrong thing - which we didn't realise, except in retrospect. It's as if we'd built a prototype and only after trying to use it, realised that it wasn't right.

So now we have a leaner, meaner, fitter application.

But the things that were wrong were wrong at a level where unit tests could never have revealed them, so I'm just not seeing how shipping a set of unit tests with an install does anything except give a false sense of security.

Maybe I'm not understanding something, but it seems to me that unless the supplied tests exercise the product at the same level the customer actually uses it, they prove nothing.

ChrisA
This is a common problem: low-level unit tests get treated as the be-all and end-all of functional testing.
Torlack
+1  A: 

If you're interested in providing a set of specifications with your code, perhaps you should investigate some of the behavior-driven development tools (nbehave, jbehave, rspec, etc.). These frameworks support describing your tests in given/when/then syntax and outputting formatted results in natural language. See nbehave for an example of a BDD tool for .NET. You can find an excellent description of BDD here.
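Even without one of those frameworks, the shape is easy to see. A hand-rolled sketch in plain JUnit (the Account API is hypothetical; the real frameworks add story files and readable reports on top of this structure):

    import static org.junit.Assert.*;
    import org.junit.Test;

    // Given/when/then spelled out by hand; BDD frameworks formalise this
    // structure and generate natural-language reports from it.
    public class AccountWithdrawalBehaviour {

        @Test
        public void overdrawnWithdrawalIsRefused() {
            // Given an account with a balance of 100
            Account account = new Account(100);

            // When the holder tries to withdraw 150
            boolean accepted = account.withdraw(150);

            // Then the withdrawal is refused and the balance is unchanged
            assertFalse(accepted);
            assertEquals(100, account.balance());
        }
    }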

Another option may be to write tests using an acceptance-testing framework such as fit or fitnesse (or the Java-only concordion) and deliver these acceptance tests with the code. Both fit/fitnesse and concordion allow the tests to be specified in plain HTML or even Word documents.
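As a sketch modelled on fit's classic division example, a column fixture is just a small class whose public fields and methods are mapped onto the columns of a table in the test document (the table is shown in the comment):

    import fit.ColumnFixture;

    // Backs an HTML table along these lines:
    //
    //   | DivisionFixture |             |            |
    //   | numerator       | denominator | quotient() |
    //   | 10              | 2           | 5          |
    //   | 12.6            | 3           | 4.2        |
    //
    // Fit fills the public fields from each row and compares the result of
    // quotient() with the expected value in the last column.
    public class DivisionFixture extends ColumnFixture {
        public double numerator;
        public double denominator;

        public double quotient() {
            return numerator / denominator;
        }
    }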

The benefit of either approach (BDD or acceptance testing frameworks) is that the results the user sees are more human-readable and understandable.

Jeffrey Cameron
Are you thinking of rspec (unit tests for what you're developing), not rubyspec (tests for the entire programming language)?
Andrew Grimm
Indeed I was, thanks!
Jeffrey Cameron
+1  A: 

If you are releasing a code library, this sounds great.

If you are releasing an ordinary software product with which your users will interact only via a GUI, your unit tests may not work at the same level of abstraction, and may not be the most useful tool for assessing the behaviour of your product. A really good user manual (yes, this is possible) might be better for that.

Daniel Daranas
A: 

Tests will check requirements.

Requirements define functionality.

=> Tests will check functionality.

The problem is that only functionality which unit tests can cover gets checked this way; this approach won't substitute for integration or whole-system tests.

That aside, checking functionality via unit tests is the core approach of TDD.

furtelwart
A: 

Meszaros calls this "Tests as Documentation" (in xUnit Test Patterns).

EricSchaefer