Hey,

I had to start writing some unit tests, using QualityTools.UnitTestFramework, for a web service layer we have developed, but my approach seemed to be incorrect from the beginning.

It seems that unit tests should be able to run in any order and not rely on other tests.

My initial thought was to have something similar to the following tests (a simplified example), which would run as an ordered test in this order.

AddObject1SuccessTest
AddObject2WithSameUniqueCodeTest
(relies on the first test having created object1, then expects a failure)
AddObject2SuccessTest
UpdateObject2WithSameUniqueCodeTest
(relies on the first test having created object1 and the third test having created object2, then expects a failure)
UpdateObject2SuccessTest
GetObjectListTest
DeleteObjectsTest
(using the added IDs)

However, there is no state shared between tests and no apparent way of passing, say, the added IDs to the delete test.

So, is the correct approach for unit testing complex interactions then to test by scenario?

For example

AddObjectSuccessTest
(which creates an object, retrieves it to validate the data, and then deletes it)
AddObjectWithSameUniqueCodeTest
(which creates object 1, then attempts to create object 2 expecting a failure, and then deletes object 1; see the sketch after this list)
UpdateObjectWithSameUniqueCodeTest
(which creates object 1, then creates object 2, then attempts to update object 2 to have the same unique code as object 1 expecting a failure, and then deletes object 1 and object 2)
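For instance, the second scenario might look something like the following in MSTest (a minimal sketch; ObjectService, MyObject and DuplicateCodeException are hypothetical stand-ins for the real service layer types):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ObjectServiceScenarioTests
    {
        [TestMethod]
        public void AddObjectWithSameUniqueCodeTest()
        {
            // Arrange: the scenario creates everything it needs itself.
            var service = new ObjectService();
            var first = service.Add(new MyObject { UniqueCode = "ABC" });

            try
            {
                // Act & Assert: adding a second object with the same code must fail.
                service.Add(new MyObject { UniqueCode = "ABC" });
                Assert.Fail("Expected a failure for the duplicate unique code.");
            }
            catch (DuplicateCodeException)
            {
                // Expected: the duplicate was rejected.
            }
            finally
            {
                // Clean up: leave no state behind for other tests.
                service.Delete(first.Id);
            }
        }
    }

Each test starts from nothing and cleans up after itself, so the order in which the tests run no longer matters.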

Am I coming at this wrong?

Thanks

+4  A: 

It is a tenet of unit testing that each test case should be independent of any other test case. MSTest (like all other unit testing frameworks) enforces this by not guaranteeing the order in which tests are run; some (such as xUnit.NET) even go so far as to randomize the order between test runs.

It is also a recommended best practice to keep units condensed to simple interactions. Although no hard and fast rule can be given, if the interaction is too complex, it's not a unit test. In any case, complex tests are brittle and have a very high maintenance overhead, which is why simple tests are preferred.

It sounds like you have a case of shared state between your tests. This leads to interdependent tests and should be avoided. Instead, you can write reusable code that sets up the pre-condition state for each test, ensuring that this state is always correct.

Such a pre-condition state is called a Fixture. The book xUnit Test Patterns contains lots of information and guidance on how to manage Fixtures in many different scenarios.
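In MSTest, that reusable setup code typically lives in [TestInitialize] and [TestCleanup] methods, which run before and after every test. A minimal sketch (CreateCleanService and DeleteAll are hypothetical stand-ins for your own fixture code):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ObjectServiceTests
    {
        private ObjectService service;

        [TestInitialize]
        public void SetUp()
        {
            // Runs before every test: each test gets a fresh, known fixture.
            service = CreateCleanService();
        }

        [TestCleanup]
        public void TearDown()
        {
            // Runs after every test: tear the fixture down so no state leaks.
            service.DeleteAll();
        }
    }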

Mark Seemann
Sometimes setting up the pre-condition state can be expensive. Do you have any thoughts on how to handle such a situation?
Eric J.
If at all possible, a so-called Immutable Shared Fixture is the best approach. As a parallel, many concurrency best practices also apply to managing Test Fixtures: If at all possible, don't share state. If you must share state, it is easiest if it's read-only, etc. An Immutable Shared Fixture is a Fixture that never changes. You can then build one-off Fixtures on top of the Immutable Shared Fixture, making sure that these Fixtures are torn down after each test case. Often it is the immutable part of the Fixture which is the most expensive (e.g. setting up a DB), so this pattern works well.
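A sketch of that layering in MSTest (the TestDatabase type and its methods are hypothetical stand-ins for the expensive shared setup):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class RepositoryTests
    {
        // Immutable Shared Fixture: built once per class, never modified by tests.
        private static TestDatabase sharedDb;

        [ClassInitialize]
        public static void ClassSetUp(TestContext context)
        {
            sharedDb = TestDatabase.CreateWithSchema(); // the expensive part, done once
        }

        [ClassCleanup]
        public static void ClassTearDown()
        {
            sharedDb.Dispose();
        }

        [TestCleanup]
        public void TearDown()
        {
            // Each test tears down only the one-off data it created on top
            // of the immutable schema, so the shared part stays pristine.
            sharedDb.DeleteRowsCreatedByCurrentTest();
        }
    }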
Mark Seemann
+2  A: 

As a complement to what Mark said: yes, each test should be completely independent of the others, and, to use your terms, each test should be a self-contained scenario that can run independently of the rest.
I assume from what you describe that you are testing persistence, because your steps end with deleting the entities you created in order to clean up the state. Ideally, a unit test runs completely in memory, with no shared state between tests. One way to achieve that is to use Mocks.

I assume you have something like a Repository in place, so that your class calls Repository.Add(myNewObject), which in turn calls something like Repository.ValidateObjectCanBeAdded(myNewObject). Rather than testing against the real repository, which would add objects to the database and require deleting them afterwards to clean up the state, you can extract an interface, IRepository, with the same two methods, and use a Mock to check that when your class calls IRepository, it exercises the right methods, with the right arguments, in the right order. It also gives you the ability to set the "fake" repository to any state you want, in memory, without having to physically add or delete records from real storage.
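As a sketch of the idea (IRepository, FakeRepository, ObjectService and MyObject are illustrative; a mocking framework such as Moq or Rhino Mocks would replace the hand-rolled fake):

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public interface IRepository
    {
        void ValidateObjectCanBeAdded(MyObject obj);
        void Add(MyObject obj);
    }

    // Hand-rolled in-memory fake: records calls instead of touching a database.
    public class FakeRepository : IRepository
    {
        public readonly List<MyObject> Added = new List<MyObject>();
        public bool ValidateCalled;

        public void ValidateObjectCanBeAdded(MyObject obj) { ValidateCalled = true; }
        public void Add(MyObject obj) { Added.Add(obj); }
    }

    [TestClass]
    public class ObjectServiceMockTests
    {
        [TestMethod]
        public void AddObject_ValidatesAndPersistsInMemory()
        {
            var repository = new FakeRepository();
            var service = new ObjectService(repository); // depends only on the interface

            service.Add(new MyObject { UniqueCode = "ABC" });

            Assert.IsTrue(repository.ValidateCalled);   // the right method was exercised...
            Assert.AreEqual(1, repository.Added.Count); // ...entirely in memory, no cleanup needed
        }
    }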
Hope this helps!

Mathias
@Mathias, mocks seem like a good way to test your business logic, since it doesn't care which repository it is dealing with. But then how do you test your actual data access? If I'm using mocks, how can I test database changes, stored procedure or query changes, and data access code? It seems to me that in many LOB applications data access is an important and sizable part of the application.
Tuzo
@Mathias, thanks for the info, but this sounds like building a lot of test infrastructure that simply replicates the existing data layer without actually using it, meaning more code requiring maintenance as things change, when I just want to test that the application performs as expected against requirements. Doesn't it also suggest that tests against the mock repository could pass even when the application itself is not working correctly?
Adam Fox
@Tuzo and Adam: I could probably have been clearer, my apologies. Somewhere in your system, a class is responsible for persisting, and it should be tested. However, that is typically considered integration testing rather than unit testing. It's expensive (complex to set up and slow to run), which is why mocks help: you can run your expensive tests less frequently, but keep fast unit tests that quickly check the system will behave properly, provided the "expensive" class implements the right contract.
Mathias