views: 80
answers: 3

Usually when using dependency injection, unit (and other) tests are responsible for creating/mocking dependencies of the system-under-test and injecting them.

However, sometimes the test itself has dependencies, or needs to inject dependencies into the SUT that it can't itself create. For example, when testing classes that interact with a database, the test needs to know connection strings, catalog names, etc., which can't be hard-coded since they aren't necessarily the same for everyone running the test.

So, how would you recommend that a test find out these settings? Do some xUnit-style test frameworks provide a way to give dependencies to a test fixture? Should the test class have static properties you populate before running all the tests? Should the test ignore DI practices and just go and get the dependencies from some global place? Other suggestions?

+3  A: 

There's a principle for fully automated tests: you should be able to pull down all the source code from the source control repository and simply run the tests.

Given that the environment (machine) has the correct installation base (e.g. compiler, test framework, database engine if relevant), the tests are responsible for setting up their Fixture before executing the test cases.

That means that for databases, the tests should

  1. create the database in question
  2. run their test cases
  3. delete the database again after the last test case
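In Python's unittest, that create/run/delete cycle maps naturally onto `setUpClass` and `tearDownClass`. A minimal sketch, using a throwaway SQLite file as a stand-in for a real database engine (the schema and class names are illustrative):

```python
import os
import sqlite3
import tempfile
import unittest

class RepositoryTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # 1. create the database in question (a temp file, so no
        #    machine-specific connection string is ever needed)
        fd, cls.db_path = tempfile.mkstemp(suffix=".db")
        os.close(fd)
        conn = sqlite3.connect(cls.db_path)
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        conn.commit()
        conn.close()

    # 2. run the test cases
    def test_insert_and_read_back(self):
        conn = sqlite3.connect(self.db_path)
        conn.execute("INSERT INTO users (name) VALUES ('alice')")
        rows = conn.execute("SELECT name FROM users").fetchall()
        conn.close()
        self.assertEqual(rows, [("alice",)])

    @classmethod
    def tearDownClass(cls):
        # 3. delete the database again after the last test case
        os.remove(cls.db_path)
```

Because the fixture creates everything it needs, the test runs identically on any machine that has the runtime installed.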

If, for some reason, you can't do that, the only thing you can really do is keep a configuration file in your source control system that contains machine-specific entries for every machine in your testing environment; e.g. for the machine Tst1 the connection string is one value, but for Tst2 it's another.
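A sketch of what such a checked-in, per-machine lookup could look like; the machine names, connection strings, and helper are all made up for illustration, not prescribed by any framework:

```python
import socket

# One connection string per machine in the testing environment;
# in practice this table would live in a file under source control.
CONNECTION_STRINGS = {
    "Tst1": "Server=Tst1;Database=TestDb;Trusted_Connection=True",
    "Tst2": "Server=Tst2\\SQLEXPRESS;Database=TestDb;Trusted_Connection=True",
}

def connection_string_for(machine=None):
    """Look up the entry for the current machine, failing loudly if missing."""
    machine = machine or socket.gethostname()
    try:
        return CONNECTION_STRINGS[machine]
    except KeyError:
        raise RuntimeError(f"No test connection string configured for {machine!r}")
```

Failing loudly on an unknown machine is deliberate: a silent default would reintroduce the hard-coded value the table exists to avoid.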

This can get ugly really quickly, so it's much easier to have the tests be responsible for Fixture Setup and Teardown, because that means that they can simply use hard-coded values, or values generated on the spot.

This really has nothing to do with DI...

Mark Seemann
+1  A: 

DI fights dependency complexity, while your unit tests should be very simple most of the time. A typical unit test examines one isolated aspect of one isolated class. Instead of its real dependencies, you create mocks and (usually) inject them through the CUT's (Class Under Test) constructor. You typically don't need a DI framework here.

But some higher-level tests might still require non-mocked dependencies, obviously. For example, you want to run tests against a large set of data and don't want to create a special fake data source, so you keep it in a real DB (maybe you also run some UI tests against that data). In that case I would still try to keep things as simple as possible, initializing tests in class setup / test setup methods.

You see, you need to be careful here. Whenever you write a large, complicated test, you:

  1. Create additional complicated code that will itself require maintenance effort.
  2. Create a test that doesn't have a clear reason to fail. It might fail due to bad connectivity that day, so you can't rely on its result.
  3. Create a test that can't be run easily and quickly, for example on check-in. The fewer people run it, the more bugs will slip through.

etc...

Yacoder
+1  A: 

When you're using the Unit testing framework to do integration tests, you don't really have a DI or a unit testing problem.

What you have are integration tests that leverage the high-powered unit testing framework.

Since they are integration tests, they're different in kind from unit tests. The "stand-alone-ness" doesn't really count anymore.

The best way to get integration test settings that vary from user to user is to load them the same way the final application will load them. If you're working in Java, you might have a properties file. In Python, we have special Django settings files for integration testing.
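As a sketch of the idea, assuming the application reads simple `key=value` properties files: the integration test calls the same loader the application would, so per-user values never get hard-coded into the test. The file name, keys, and helper below are assumptions for illustration:

```python
import os
import tempfile

def load_settings(path):
    """Parse simple key=value lines, the same way the application would."""
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                settings[key.strip()] = value.strip()
    return settings

# Simulate a per-user settings file; in a real project this file
# would already exist in the user's environment.
with tempfile.NamedTemporaryFile("w", suffix=".properties", delete=False) as f:
    f.write("# per-user integration test settings\n")
    f.write("db.connection = Server=localhost;Database=TestDb\n")
    f.write("db.catalog = TestCatalog\n")
    path = f.name

settings = load_settings(path)
os.remove(path)
```

Sharing the loader means the test exercises the same configuration path the application uses, which is itself a small extra bit of coverage.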

S.Lott