Is it okay to use many testing frameworks at once?
Some open-source software projects do use several testing frameworks. A common setup is a unit-testing framework paired with a mocking framework, used when the project's developers don't want to roll their own mocks.
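In Python, a minimal sketch of that pairing could look like the following, using the standard library's unittest together with unittest.mock (in Java, the analogous pair would be JUnit plus Mockito). PaymentService and its gateway are hypothetical names invented for this example:

```python
import unittest
from unittest.mock import Mock

# Sketch of pairing a unit-testing framework (unittest) with a mocking
# framework (unittest.mock). All class names are made up for illustration.

class PaymentService:
    def __init__(self, gateway):
        self.gateway = gateway

    def pay(self, amount):
        # Delegates the actual charge to a collaborator.
        return self.gateway.charge(amount)

class PaymentServiceTest(unittest.TestCase):
    def test_pay_charges_the_gateway(self):
        gateway = Mock()                    # stand-in, no real network call
        gateway.charge.return_value = True
        service = PaymentService(gateway)

        self.assertTrue(service.pay(100))
        gateway.charge.assert_called_once_with(100)

if __name__ == "__main__":
    unittest.main()
```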
So when do you reach unit-testing overkill?
You reach unit-testing "overkill" quickly, and you might have reached it already. There are several ways to overdo testing in general that defeat the purpose of TDD, BDD, ADD, and whatever other *-driven approach you use. Here is one of them:
Unit-testing overkill is reached when you start writing other types of tests as if they were unit tests. This is supposed to be addressed by using mocking frameworks (to test interactions isolated to one class only) and specification frameworks (to test features and specified requirements). A lot of developers seem to think it is a good idea to treat all the different types of tests the same way, which leads to some dirty hybrids.
Even though TDD focuses on unit testing, you will still find yourself writing functional, integration, and performance tests. However, you have to remind yourself that their scopes are vastly different from those of unit tests; this is why there are about as many testing tools available as there are types of tests. There is nothing wrong with using many testing frameworks, and most of them are compatible with each other.
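One way to keep those scopes from blurring in practice, sketched below purely as an assumption about your toolchain, is to tag tests by type with pytest's marker mechanism so each kind can be run or excluded separately. The `integration` marker name is invented for this example and would need to be registered in pytest.ini to avoid warnings:

```python
import pytest

def parse_price(text):
    """Toy function under test: "$4.99" -> 499 (cents)."""
    return int(round(float(text.lstrip("$")) * 100))

def test_parse_price_handles_currency_symbol():
    # Unit test: one function, no I/O, repeatable in milliseconds.
    assert parse_price("$4.99") == 499

@pytest.mark.integration  # custom marker; register it in pytest.ini
def test_order_roundtrip():
    # Integration test: would exercise several classes and real I/O,
    # so it is tagged and can be excluded from the fast suite with:
    #   pytest -m "not integration"
    pytest.skip("placeholder: requires a running database")
```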
So when writing unit tests, there are a couple of sweet spots to keep in mind:
unit test                   dirty hybrids               integration test
---------                   -------------               ----------------
* isolated                                              * using many classes
* well defined                                          * tests a larger feature
* repeatable                                            * tests a data set

    |                             |                       |
    |                             |                       |
    v                             v                       v
    O <-------------------------------------------------> O
    ^                             ^                       ^
    |                             |                       |
sweet spot               world full of pain          sweet spot
Unit tests are easy to write and you want to write a lot of them. But if you write a test that has too many dependencies, you'll end up with a lot of work once requirements start to change. When such a test breaks, you have to check through the code of many classes rather than one and only one class to see where the problem is, which defeats the purpose of unit testing in the TDD sense. In a large project this is incredibly time consuming.
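To make the contrast concrete, here is a minimal sketch (all class names hypothetical): a "hybrid" test would wire up several real collaborators, so a failure could sit in any of them, while the isolated test pins the blame on one class alone:

```python
import unittest
from unittest.mock import Mock

class ReportBuilder:
    def __init__(self, source):
        self.source = source

    def build(self):
        return "total: {}".format(sum(self.source.fetch_rows()))

class ReportBuilderTest(unittest.TestCase):
    # def test_hybrid(self):
    #     source = CsvSource(FileSystem(), Parser(), Config())
    #     ...a failure here means digging through four classes.

    def test_isolated(self):
        # Only ReportBuilder's own logic can make this assertion fail.
        source = Mock()
        source.fetch_rows.return_value = [1, 2, 3]
        self.assertEqual(ReportBuilder(source).build(), "total: 6")

if __name__ == "__main__":
    unittest.main()
```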
The moral of this story is: do not mix up unit tests with integration tests, because, simply put, they are different. This is not to say that the other types of tests are bad, but they should be treated more as specifications or sanity checks. A breaking test is not necessarily an indication that the code is wrong. For example:
- If an integration test breaks, there may be a problem with one of your requirements; you may need to revise that requirement and then remove, replace, or modify the test.
- If a performance test breaks, its stochastic nature (depending on how it was implemented) may simply mean it ran slowly on that particular occasion, as the sketch after this list illustrates.
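Here is a deliberately naive performance test to show the point; the workload and the 50 ms threshold are invented for this sketch. Because the assertion depends on wall-clock time, it can fail on a loaded machine even though nothing in the code has regressed:

```python
import time
import unittest

class PerformanceSmokeTest(unittest.TestCase):
    def test_sorting_is_fast_enough(self):
        data = list(range(100_000, 0, -1))
        start = time.perf_counter()
        sorted(data)
        elapsed = time.perf_counter() - start
        # A failure here may just mean "the machine was busy".
        self.assertLess(elapsed, 0.05)

if __name__ == "__main__":
    unittest.main()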
The main thing to keep in mind is to organize the tests in such a way that they are easy to distinguish and find.
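One common convention (just one of several reasonable layouts) is to give each type of test its own directory, so that the fast unit suite can be run on its own:

```
tests/
    unit/           # fast, isolated; run on every commit
    integration/    # slower; may need a database or the network
    performance/    # stochastic; run nightly or on demand
```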
Do you need to write tests all the time?
There are times when it is okay to omit test cases, usually because verification through manual smoke testing is simply easier and doesn't take much time. A manual smoke test in this sense means starting up your application and testing the functionality yourself, or having someone who hasn't written the code do it. Skip the automated test if it is all of the following:
- way too complicated and convoluted
- going to take a lot of your working time to write
- not supported by any ready-made, easy-to-use testing framework
- unlikely to pay off much, e.g. because the code has little chance of regressing
- replaceable by a manual check that takes far less effort than writing the automated test
…then write it down as a manual test case and test it by hand. It's not worth it if the test case will take several days to write when smoke testing it manually takes only a minute.