Should tests be in the same project as application logic?
It depends. There are trade-offs either way.
Keeping the tests in the same project means extra bandwidth to distribute it, extra build time, a larger installation footprint, and a greater risk of accidentally writing production logic that depends on test code.
On the other hand, keeping the tests in a separate project can make it harder to write tests involving private methods/classes (depending on the programming language), and adds minor administrative hassles, such as making it harder to set up a new development environment (e.g. when a new developer joins the project).
How much these different costs matter varies by project, so there's no universal answer.
Should I have test classes to mirror my logic classes or should I have only as many test classes as I feel I need to have?
Neither, as a hard-and-fast rule.
You should have test classes that allow for well-factored test code (i.e. minimal duplication, clear intent, etc.).
The obvious advantage of directly mirroring the logic classes in your test classes is that it makes it easy to find the tests corresponding to a particular piece of code. There are other ways to solve this problem without restricting the flexibility of the test code. Simple naming conventions for test modules and classes are usually enough.
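As a minimal pytest-style sketch (the ShoppingCart class and the test groupings are hypothetical), test classes can be organised around behaviour rather than mirroring production classes one-to-one:

```python
import pytest


class ShoppingCart:
    """Toy production class, used only for illustration."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        if price < 0:
            raise ValueError("price must be non-negative")
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


class TestCartTotals:
    """Everything about totals lives together, even though it touches
    more than one production method."""

    def test_empty_cart_totals_zero(self):
        assert ShoppingCart().total() == 0

    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add("apple", 2)
        cart.add("pear", 3)
        assert cart.total() == 5


class TestCartValidation:
    """Validation rules share setup and intent, so they get their own
    test class, not because production code has a Validation class."""

    def test_negative_price_is_rejected(self):
        with pytest.raises(ValueError):
            ShoppingCart().add("apple", -1)
```

Because the grouping follows behaviour, splitting or merging the production class later usually doesn't force the tests to move.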
How should I name my test classes, methods, and projects (if they go in different projects)?
You should name them so that:
- each test class and test method has a clear purpose, and
- someone looking for a particular test (or for the tests about a particular unit) can find it easily, as the naming sketch below illustrates.
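A hedged sketch of one such convention, assuming pytest-style discovery (the module, class, and method names are made up for illustration, and the test bodies are elided):

```python
# tests/test_invoice_formatting.py  (hypothetical module name)


class TestInvoiceFormatting:
    def test_zero_total_renders_as_0_00(self):
        ...  # body elided; only the names matter here

    def test_negative_total_is_rendered_in_parentheses(self):
        ...


class TestInvoiceDueDates:
    def test_due_date_defaults_to_30_days_after_issue(self):
        ...
```

The file and class names point at the unit under test; each method name states the scenario and the expected outcome, so a failure report reads like a sentence.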
Should private, protected, and internal methods be tested, or just those that are publicly accessible?
Often, yes. It depends on whether you get enough confidence from testing just the public interface, or whether the unit you really want to test is not publicly accessible.
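In Python, for instance, “private” is only a naming convention, so a test can call an underscore-prefixed helper directly when that helper is the unit you actually need confidence in. A minimal sketch (the SlugGenerator class and its _normalize helper are hypothetical):

```python
class SlugGenerator:
    """Hypothetical class whose interesting edge cases live in a
    non-public helper."""

    def make_slug(self, title):
        return self._normalize(title).replace(" ", "-")

    def _normalize(self, title):
        return title.strip().lower()


def test_normalize_strips_and_lowercases():
    # Calling the non-public helper directly, because this is the unit
    # we actually want confidence in.
    assert SlugGenerator()._normalize("  Hello World  ") == "hello world"


def test_make_slug_via_the_public_interface():
    assert SlugGenerator().make_slug("Hello World") == "hello-world"
```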
Should unit and integration tests be separated?
This depends on your choice of testing framework(s). Do whichever works best with your testing framework(s) and makes it so that:
- both the unit and integration tests relating to a piece of code are easy to find,
- it is easy to run just the unit tests,
- it is easy to run just the integration tests,
- it is easy to run all tests (one way to arrange this with test markers is sketched below).
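One arrangement, assuming pytest, is to tag tests with custom markers. The marker names “unit” and “integration” here are a project convention rather than something built into pytest, and they need to be registered in your pytest configuration to avoid warnings:

```python
import pytest


@pytest.mark.unit
def test_discount_is_applied_to_subtotal():
    assert round(100 * 0.9, 2) == 90.0


@pytest.mark.integration
def test_order_is_persisted_to_the_database():
    ...  # would talk to a real (or containerised) database


# Typical invocations:
#   pytest -m unit            run just the unit tests
#   pytest -m integration     run just the integration tests
#   pytest                    run everything
```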
Is there a good reason not to have 100% test coverage?
Yes, there is a good reason. Strictly speaking, “100% test coverage” would mean every possible path and input combination in your code is exercised and tested, which is simply impractical for almost any project.
If you instead take “100% test coverage” to mean that every line of source code is exercised by the test suite at some point, then that is a good goal, but sometimes there are a few lines in awkward places that are hard to reach with automated tests. If the cost of periodically verifying that functionality by hand is less than the cost of going through contortions to reach those last few lines, that is a good reason not to have 100% line coverage.
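For instance, coverage.py (and pytest-cov on top of it) honours a “# pragma: no cover” comment by default, which lets you exclude a genuinely hard-to-reach branch from the measurement instead of writing contorted tests for it. A small sketch (the function and paths are hypothetical):

```python
import os
import sys


def data_dir():
    if sys.platform == "win32":  # pragma: no cover  (only reachable on Windows)
        return os.path.expandvars(r"%APPDATA%\myapp")
    return os.path.expanduser("~/.config/myapp")
```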
Rather than imposing a simple rule that you must have 100% line coverage, encourage your developers to discover gaps in the testing and find ways to close them, whether or not the number of lines “covered” improves. In other words, if you measure lines covered, you will improve your line coverage; but what you actually want is improved quality. So don't forget that line coverage is only a very crude approximation of quality.