views:

157

answers:

2

One of the boasts I've heard of unit testing is that you can run 2000+ tests in a minute or so, because the only limit is CPU speed and RAM. I, however, like to include external dependency assertions/testing in my test projects (such as: does the user account the application logs on with have insert/update/delete permissions on the appropriate tables, in case the db is ever migrated?)
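
To make that concrete, here is a minimal sketch of the kind of dependency assertion I mean, written as an MSTest test against SQL Server. The connection string and table name are placeholders; HAS_PERMS_BY_NAME reports the effective permission of the current login.

    using System.Data.SqlClient;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class DatabasePermissionTests
    {
        // Hypothetical connection string - it should log on with the
        // same account the application itself uses.
        private const string ConnectionString =
            "Data Source=.;Initial Catalog=AppDb;Integrated Security=True";

        [TestMethod]
        public void AppAccount_CanInsertUpdateDelete_OnOrders()
        {
            foreach (var permission in new[] { "INSERT", "UPDATE", "DELETE" })
            {
                using (var connection = new SqlConnection(ConnectionString))
                using (var command = new SqlCommand(
                    "SELECT HAS_PERMS_BY_NAME('dbo.Orders', 'OBJECT', @p)",
                    connection))
                {
                    command.Parameters.AddWithValue("@p", permission);
                    connection.Open();

                    // HAS_PERMS_BY_NAME returns 1 when the current login
                    // holds the given permission on the object.
                    Assert.AreEqual(1, (int)command.ExecuteScalar(),
                        "Missing " + permission + " permission on dbo.Orders");
                }
            }
        }
    }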

Is there a framework or a supported way to use MSTest such that you can choose to run just the unit tests, just the integration tests, or both at the click of a button?

+5  A: 

Yes. :)

In VS2008, when you create a Test Project, Visual Studio also generates a test metadata file, or vsmdi file. A solution may have only one metadata file; it is a manifest of all tests generated across all Test Projects in the solution. Opening the metadata file opens the Test List Editor - a GUI for editing and executing the file.

From the Test List Editor, you may create Test Lists [e.g. UnitTestList, IntegrationTestList] and assign individual tests to a specific Test List. By default, Test List Editor shows an "All Loaded Tests" list and a "Tests Not in a List" list to help in assigning tests. Use these to find or assign groups of tests to lists. Remember, a test may belong to only one list.

There are two ways to invoke a Test List:

  • From Visual Studio, each list may be invoked individually from Test List Editor.
  • From the command line, MSTest may be invoked with a specific list.

One option is good for developers in their everyday workflow; the other is good for automated build processes.
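
For example, an automated build might target one list at a time. The solution and metadata file names here are hypothetical; the list names are the ones created above:

    MSTest /testmetadata:MySolution.vsmdi /testlist:UnitTestList
    MSTest /testmetadata:MySolution.vsmdi /testlist:IntegrationTestList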

I set up something similar on the last project I worked on.


This feature is very valuable*.

Ideally, we would like to run every conceivable test whenever we modify our code base. This provides us the best response to our changes as we make them.

In practice, however, running every test in a test suite can add minutes or hours to build times [depending on the size of the code base and the build environment] - which is prohibitively expensive for a developer and for a Continuous Integration [CI] environment, both of which require rapid turnaround to provide relevant feedback.

The ability to specify explicit Test Lists allows the developer, the CI environment, and the Final build environment to selectively target bits of functionality without sacrificing quality control or impacting overall productivity.


Case in point: I was working on a distributed application. We wrote our own Windows Services to handle incoming requests, and leveraged Amazon's web services for storage. We did not want to run our suite of Amazon tests on every build because

  1. Amazon was not always up
  2. We were not always connected
  3. Response times could be measured in hundreds of milliseconds, which across a batch of test requests can easily balloon our test suite execution times

We wanted to retain these tests, however, since we needed a suite to verify behaviour. If, as a developer, I had doubts about our integration with Amazon, I could execute these tests from my dev environment on an as-needed basis. When it came time to promote a Final build for QA, Cruise Control could also execute these tests to ensure someone in another functional area had not inadvertently broken Amazon integration.

We placed these Amazon tests into an Integration Test list, which was available to every developer and executed on the build machine when Cruise Control was invoked to promote a build. We maintained another Unit Test list, which was also available to every developer and executed on every single build. Since all of these tests were in-memory [and well written :] and executed in about as long as it took to build the project, they did not impact individual build operations and provided excellent, timely feedback from Cruise Control.

*=valuable == important. "value" is word of the day :)

johnny g
+1, well that seems pretty straightforward. Looking at the Test menu more closely it seems pretty self-explanatory from there.
Maslow
In what scenarios do you picture this feature being important?
Maslow
Do you maintain the "Unit Test" list by hand? Do you always need to add the newly created test to the test list?
Michał Drozdowicz
@Michał Drozdowicz, in short, yes. maintaining explicit test lists is an opt-in process. however, there are command line options to support "blanket" test invocation [i.e. all tests within a test project]. i do not know these options off hand, but they do exist! i swear it! :)
johnny g
@johnny g It is possible to run all tests from a test project dll, but I believe it doesn't allow for any "exclude" option - you can't just say "Run all but the ones from the 'Integration' list"
Michał Drozdowicz
+4  A: 

NUnit

You can group fixtures and tests into categories.

I have my integration tests tagged with a category. Then, using the NUnit GUI, you can specify which category to run.
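
A minimal sketch of what that looks like, assuming NUnit 2.x attributes; the fixture and test names here are made up:

    using NUnit.Framework;

    [TestFixture]
    [Category("Integration")] // tags every test in the fixture
    public class StorageServiceTests
    {
        [Test]
        public void Upload_RoundTrips_Document()
        {
            // ... exercises the live external service ...
        }
    }

    [TestFixture]
    public class OrderCalculatorTests
    {
        [Test]
        [Category("Unit")] // categories may also be applied per test
        public void Total_Sums_LineItems()
        {
            // ... runs entirely in memory, no external dependencies ...
        }
    }

The console runner can filter on these as well, e.g. nunit-console MyTests.dll /include:Integration - or /exclude:Integration to run everything else, which addresses the "exclude" gap mentioned in the comments above.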

When running in Visual Studio, I use TestDriven.NET to run my tests. Tests are separated by type in my test projects, which makes running unit tests or integration tests separately a lot easier.

Finglas