I was thinking about the well-known article "Hardware is Cheap, Programmers are Expensive" by Jeff Atwood and Martin Fowler's recommendation to "Keep the Build Fast".

Approaches I've tried:

  1. Tests that hit the database directly. I've suffered from tests that were far too slow, caused by this intensive use of the database. I like that this approach is cheap... at the beginning.
  2. A multi-tiered architecture with persistence-ignorant, easily tested domain objects. I like that it is maintainable and easy to use, but it is expensive in terms of time.

When I have enough time, I usually choose the second approach, but most of the time I have to live with the first one.

How do I find a more cost-effective approach? Is it the first way, just with sufficiently powerful hardware?

+2  A: 

I don't like unit tests that hit anything external, be it a database or the filesystem. Many people would call these integration tests rather than unit tests.

Execution speed is an issue with these kinds of tests - but so is setup speed.

You (or anyone else on your team) should be able to check out the code and tests from source control and run them immediately. They shouldn't have to mess around creating databases. The harder the tests are to run, the less useful they are.

Continuous integration is also much easier if your tests have minimal dependencies.

I don't think fast hardware can overcome the drawbacks of having external dependencies, because disk and network access is so much slower than in-memory access. An old machine running dependency-free unit tests will be quicker than a newer machine that has to hit disk and database.
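
To illustrate, here's a minimal sketch of a dependency-free unit test (all names here are hypothetical): the database-backed repository is swapped for an in-memory stub, so the test touches neither disk nor network.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // The external dependency a real implementation would back with a database.
    interface UserRepository {
        String findNameById(long id);
    }

    // The unit under test; it only knows about the interface.
    class GreetingService {
        private final UserRepository repository;

        GreetingService(UserRepository repository) {
            this.repository = repository;
        }

        String greet(long id) {
            return "Hello, " + repository.findNameById(id) + "!";
        }
    }

    public class GreetingServiceTest {
        @Test
        public void greetsUserByName() {
            // An in-memory stub stands in for the database-backed repository.
            UserRepository stub = id -> "Alice";
            assertEquals("Hello, Alice!", new GreetingService(stub).greet(42L));
        }
    }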

As for the tradeoffs between your two options, I think it's a question of scale. If you have a single programmer on a small project, then I think that approach 1) would suffice.

But the more tests you're running, and the more people who need to run them, the bigger the drawbacks of not separating out your unit and integration tests will be.

It's really a question of fixed costs versus variable costs. Setting up rigorous unit testing might take some time, but if your project is large enough you can amortise the time spent over many months of increased productivity.

As always, it depends on the specifics of the project. Good luck!

ctford
Thanks for the answer. I agree with you, but the question was about finding a balance between these two approaches. Can you elaborate?
Fedyashev Nikita
Chris, thanks so much for the updated answer! It helped me a lot!
Fedyashev Nikita
+3  A: 

Tests that touch the database aren't unit tests.

For example, I use DB-Unit only for setting up test data for integration tests. DB-Unit is a good tool but has a misleading name (it contains 'Unit').
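
A rough sketch of how that looks (the file and table names are made up for illustration): a flat XML dataset is loaded and pushed into the test database before the integration tests run.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.dbunit.database.DatabaseConnection;
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
    import org.dbunit.operation.DatabaseOperation;

    public class TestDataLoader {
        public static void loadFixture() throws Exception {
            // Any JDBC connection works; an in-memory HSQLDB keeps the example self-contained.
            Connection jdbc = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
            IDatabaseConnection connection = new DatabaseConnection(jdbc);
            // users.xml is a flat XML dataset describing the fixture rows.
            IDataSet dataSet = new FlatXmlDataSetBuilder()
                    .build(TestDataLoader.class.getResourceAsStream("/users.xml"));
            // CLEAN_INSERT wipes the listed tables, then inserts the fixture rows.
            DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
        }
    }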

On our continuous integration server, integration tests (which also touch the DB) run in a different phase from the normal unit tests. AFAIK, running checks at different escalation levels like this is called a 'staged build'.
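
One way to make that split is to tag the DB-touching tests and have the build run them in a later stage. A sketch using JUnit 4 categories (the IntegrationTest marker interface is something you define yourself):

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Marker interface for tests that need the database.
    interface IntegrationTest {
    }

    public class UserDaoIT {
        @Category(IntegrationTest.class)
        @Test
        public void persistsAndReloadsUser() {
            // ...talks to the real database; the CI server runs this
            // category only in the later integration-test phase.
        }
    }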

manuel aldana
+1  A: 

It is often debated whether tests that need a database should be called "unit tests". Most often the answer is "no". My opinion is "yes". Why?

Because service methods (units) are tested. These methods have a dependency on the Data Access layer, which in turn depends on a database connection. Now, a purely unit-testing approach would be to mock the dependency - the DAO in this case. That mock would have to be fairly complex in order to behave as expected. So why create a complex mock instead of using an in-memory database (HSQLDB/Derby/JavaDB/..), which can be viewed as a "mock" of the production setup?
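
As a minimal sketch of that idea (the schema and table here are hypothetical), an in-memory HSQLDB needs nothing more than a JDBC URL:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class InMemoryDbDemo {
        public static void main(String[] args) throws Exception {
            // "mem:" keeps the whole database in RAM; it disappears when the JVM exits.
            try (Connection c = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
                 Statement s = c.createStatement()) {
                s.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))");
                s.execute("INSERT INTO users VALUES (1, 'Alice')");
                try (ResultSet rs = s.executeQuery("SELECT name FROM users WHERE id = 1")) {
                    rs.next();
                    System.out.println(rs.getString("name")); // prints: Alice
                }
            }
        }
    }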

Now, on to the question - in-memory databases are (logically enough) very fast. This alone can reduce the time the tests take to run to acceptably low levels.

Also, I believe a Continuous Integration engine should be able to divide the tasks: make the build (and the deliverable) first, and run the tests after that. So in the common, everyday case, you won't have to wait for the tests, while the results will still appear soon enough to fix anything that has gone wrong.

Bozho
Bozho, please check out my comment on your blog regarding this answer.
Fedyashev Nikita