Hey all, I've been tasked with re-architecting our build process here at work, and I would like to start adding unit testing (NUnit) to our projects. Through my research I've got a good grasp of the technology I'll be using, but I wanted some advice on best practices for setting up my solution for testing, and to make sure my proposed approach is sound.

Our main VS 2008 solution has about 4 projects. For each project I am going to create a corresponding unit test project and add it to the solution. I would like our developers to start developing off this solution, with all code checked in going back to trunk (using SVN). For our build process, I will use a continuous integration server to build and test the development code in trunk (with the unit tests). As long as that is building, I want a deployment solution that contains my 4 projects (but no unit test projects) and to push that code through QA environments, for example Test, then Staging, then ultimately Production. As I push code to each environment, my goal is to not have the unit test projects pushed along with it.
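
As a rough sketch of what the CI server's build step could look like (the solution, project, and path names below are made up, and the exact wiring depends on the CI tool), it would compile the solution and then run the test assemblies through the NUnit console runner:

    msbuild OurSolution.sln /p:Configuration=Release
    nunit-console.exe ProjectA.Tests\bin\Release\ProjectA.Tests.dll
    nunit-console.exe ProjectB.Tests\bin\Release\ProjectB.Tests.dll

nunit-console returns a non-zero exit code when tests fail, so the CI server can fail the build on it without any extra scripting.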

From my description, does this sound like a typical process? Either way, does anyone here have suggestions for optimizing it?

Thanks.

+1  A: 

Why would you have two different solutions? Just use the one solution which includes the unit tests, and then select only the output of the production projects to ship to QA/Test. (Or if QA/Test receive full source code, let them still build the unit test projects and just ignore them.) Having multiple solutions sounds like extra effort for no gain.

Alternatively, if you really want a build with no unit tests, you could have one solution configuration (like Debug and Release) which just doesn't build the unit test projects. In the menu where you'd normally select Debug or Release, select "Configuration Manager..." and then in the next dialog, click the drop down for "Active solution configuration" and pick "< New >" to create a new one. Pick an appropriate configuration to copy from (probably Release) and then just untick the unit test projects.
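
For what it's worth, once such a configuration exists, the CI server (or a developer) could build it from the command line along these lines (the configuration name ReleaseNoTests is just an example):

    msbuild OurSolution.sln /p:Configuration=ReleaseNoTests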

Personally I'd still just build everything though...

Jon Skeet
Cool, thanks Jon. I figured you could do something like that w/ different solution configurations. My only reasoning to not deploy our code w/ the test assemblies was to make sure we deploy only what we need.
@gb1200: I wasn't suggesting *deploying* the test assemblies... just building them :)
Jon Skeet
+1  A: 

Personally, I don't like splitting tests from the code being tested. If the code is written to be really unit-testable, I find it better to have the unit test code for a particular class in the same file as the class itself. This way it's simpler for developers to keep track of unit tests as they introduce changes or new features into the code.
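
A minimal sketch of that layout, with made-up names and assuming the NUnit framework is referenced, is the production class and its [TestFixture] side by side in one file:

    using NUnit.Framework;

    public class PriceCalculator
    {
        // Production code under test.
        public decimal ApplyDiscount(decimal price, decimal percent)
        {
            return price - (price * percent / 100m);
        }
    }

    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void ApplyDiscount_TenPercent_ReducesPrice()
        {
            var calc = new PriceCalculator();
            Assert.AreEqual(90m, calc.ApplyDiscount(100m, 10m));
        }
    }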

However, separate unit testing projects are almost a necessity when inter-system dependencies need to be taken into account. Separating such tests (integration rather than unit tests) gives you freedom in setting up the test execution environment and avoids introducing unwanted pieces of code into the main code base. On the other hand, you'll have to use [assembly: InternalsVisibleTo("test_assembly_name")] to enable test code to access internal members.
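
A minimal sketch of that attribute, typically placed in the production project's Properties\AssemblyInfo.cs (the test assembly name below is made up; if the assemblies are strong-named, the full public key has to be appended to the name):

    using System.Runtime.CompilerServices;

    [assembly: InternalsVisibleTo("OurProject.Tests")]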

Conditional compilation can be used to keep test code out of release builds. That said, in some special scenarios it might be useful to include test code even in a release build and enable the application to perform a self-test. Example: an interface is declared with a semantic contract requiring the implementor to apply specific attributes to the methods, properties, or class implementing the interface. If the application can load add-in modules (assemblies not known at compile time), a self-testing capability may help ensure the whole system will work.
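
The conditional-compilation side is just a matter of wrapping the in-file fixtures in #if UNIT_TESTS ... #endif (a made-up symbol) and defining that symbol only in the configurations that should compile the tests. The self-test idea could look roughly like the sketch below, where all type and attribute names are invented for illustration: when an add-in assembly is loaded, reflection verifies that every type implementing the add-in interface carries the attribute the contract demands.

    using System;
    using System.Linq;
    using System.Reflection;

    public interface IAddIn { }

    [AttributeUsage(AttributeTargets.Class)]
    public sealed class AddInDescriptionAttribute : Attribute
    {
        public AddInDescriptionAttribute(string name) { Name = name; }
        public string Name { get; private set; }
    }

    public static class AddInSelfTest
    {
        // Throws if any add-in type violates the attribute contract.
        public static void Verify(Assembly addInAssembly)
        {
            var offenders = addInAssembly.GetTypes()
                .Where(t => typeof(IAddIn).IsAssignableFrom(t) && t.IsClass)
                .Where(t => !t.IsDefined(typeof(AddInDescriptionAttribute), false))
                .Select(t => t.FullName)
                .ToArray();

            if (offenders.Length > 0)
                throw new InvalidOperationException(
                    "Add-in types missing [AddInDescription]: " + string.Join(", ", offenders));
        }
    }

The host could call AddInSelfTest.Verify on each assembly right after loading it (e.g. via Assembly.LoadFrom) and refuse to activate the add-in if the check fails.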

Ondrej Tucny