Yes. :)
In VS2008, when you create a Test Project, Visual Studio will also generate a test metadata file, or .vsmdi file. A solution may have only one metadata file; it is a manifest of all tests generated within the solution across all Test Projects. Opening the metadata file opens the Test List Editor, a GUI for editing the lists and executing the tests.
From the Test List Editor, you may create Test Lists [e.g. UnitTestList, IntegrationTestList] and assign individual tests to a specific Test List. By default, the Test List Editor shows all tests in an "All Loaded Tests" list and a "Tests Not in a List" list to help with assignment; use these to find tests or assign groups of them to lists. Remember, a test may belong to only one list.
There are two ways to invoke a Test List:
- From Visual Studio, each list may be invoked individually from the Test List Editor.
- From the command line, MSTest may be invoked with a specific list (see the example below).
The first option is good for developers in their everyday workflow; the second is good for automated build processes.
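As a rough sketch of the command-line form [MySolution.vsmdi and the list names here are placeholders for whatever your solution and lists are actually called]:

```
rem Run only the fast unit tests, using the solution's test metadata file
mstest /testmetadata:MySolution.vsmdi /testlist:UnitTestList /resultsfile:UnitTestResults.trx
```

MSTest exits with a non-zero code when a test fails, so a build script can check the exit code to decide whether the build should be marked as broken.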
I set up something similar on the last project I worked on.
This feature is very valuable*.
Ideally, we would like to run every conceivable test whenever we modify our code base. This gives us the most immediate feedback on our changes as we make them.
In practice, however, running every test in a suite often adds minutes or hours to build times [depending on the size of the code base and the build environment], which is prohibitively expensive for both a developer and a Continuous Integration [CI] environment, each of which requires rapid turnaround to provide relevant feedback.
The ability to specify explicit Test Lists allows the developer, the CI environment, and the Final build environment to selectively target bits of functionality without sacrificing quality control or impacting overall productivity.
Case in point: I was working on a distributed application. We wrote our own Windows Services to handle incoming requests and leveraged Amazon's web services for storage. We did not want to run our suite of Amazon tests on every build because:
- Amazon was not always up
- We were not always connected
- Response times could be measured in hundreds of milliseconds, which across a batch of test requests can easily balloon test suite execution times
We wanted to retain these tests, however, since we needed a suite to verify behaviour. If, as a developer, I had doubts about our integration with Amazon, I could execute these tests from my dev environment on an as-needed basis. When it came time to promote a Final build for QA, Cruise Control could also execute these tests to ensure someone in another functional area had not inadvertently broken Amazon integration.
We placed these Amazon tests into an Integration Test list, which was available to every developer and executed on the build machine whenever Cruise Control was invoked to promote a build. We maintained another Unit Test list, which was also available to every developer and executed on every single build. Since all of those tests were in-memory [and well written :] and executed in about the time it took to build the project, they did not impact individual build operations and provided excellent feedback from Cruise Control in a timely manner.
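For what it's worth, the Cruise Control side can be as simple as a CruiseControl.NET `<exec>` task along these lines [the paths, solution name, and list name below are illustrative placeholders, not our actual configuration]:

```xml
<!-- inside the project's <tasks> element in ccnet.config -->
<exec>
  <!-- MSTest.exe location for VS2008; adjust to the build machine's install path -->
  <executable>C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe</executable>
  <baseDirectory>C:\Builds\MySolution</baseDirectory>
  <!-- run the Integration Test list only when promoting a build -->
  <buildArgs>/testmetadata:MySolution.vsmdi /testlist:IntegrationTestList /resultsfile:IntegrationResults.trx</buildArgs>
  <buildTimeoutSeconds>1800</buildTimeoutSeconds>
</exec>
```

Since MSTest exits non-zero when any test fails, Cruise Control will automatically fail the promotion build if the integration tests do not pass.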
*=valuable == important. "value" is word of the day :)