Thank you Maxim for your suggestion.
I did look at PNUnit briefly but had forgotten about it. My main reservation was the need to enumerate each individual test (not just each test fixture) in the configuration XML files. We could automate generating those files, but I was hoping for a somewhat more streamlined solution.
After a discussion with the team, we really have two issues.
1) The build server should run all UATs against all browsers, reporting the results separately (or at least noting which browser each failure came from), and run them in parallel (assuming RCs are available) so that the build doesn't take forever.
2) Developers would like an easier way of kicking off a test or group of tests against some subset of the browsers, without rebuilding the project and without a lot of other manual changes.
Also, if multiple RCs are available, ideally several tests for the same browser could run at a time to speed things up.
My best idea for problem 1 is a separate cruise build for each browser environment, each running independently. This seems like overkill, but it's the simplest solution I could come up with and it should work.
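To make that concrete, here is a rough sketch of what the per-browser projects might look like in ccnet.config. This is only an illustration under my assumptions — the project names, the SELENIUM_BROWSER variable, and the test assembly name are all made up, and you'd want to verify the &lt;environment&gt; element against the exec-task docs for your CCNet version:

```xml
<cruisecontrol>
  <!-- One project per browser; CCNet runs projects concurrently by default. -->
  <project name="UAT-Firefox">
    <tasks>
      <exec>
        <executable>nunit-console.exe</executable>
        <buildArgs>MyApp.UATs.dll</buildArgs>
        <environment>
          <!-- hypothetical variable the test fixtures would read -->
          <variable name="SELENIUM_BROWSER" value="*firefox" />
        </environment>
      </exec>
    </tasks>
  </project>
  <!-- duplicate the project block with *iexplore, *safari, etc. -->
</cruisecontrol>
```

Since each project reports independently, you get per-browser results for free, which covers the "which errors came from which browser" requirement.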
For problem 2 I don't yet have a solution. I have considered writing a custom ReSharper plugin to allow concurrent execution and/or per-session environment variables, so that each session of tests can be configured to run against a particular environment without affecting another. However, I don't know yet whether that is worth the time it would take to learn and develop the plugin.
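For what it's worth, the environment-variable half of that idea can be prototyped without a plugin: a base fixture can read the variable at setup and fall back to a default for plain in-IDE runs. A minimal sketch, assuming a SELENIUM_BROWSER variable and the standard Selenium RC .NET client (the variable name and base URL are my inventions, not from any existing code):

```csharp
using System;
using NUnit.Framework;
using Selenium; // Selenium RC .NET client

public abstract class BrowserFixtureBase
{
    protected ISelenium Selenium;

    [SetUp]
    public void StartBrowser()
    {
        // SELENIUM_BROWSER is a hypothetical variable name; defaulting to
        // Firefox keeps an unconfigured run working inside the IDE.
        string browser = Environment.GetEnvironmentVariable("SELENIUM_BROWSER")
                         ?? "*firefox";
        Selenium = new DefaultSelenium("localhost", 4444, browser,
                                       "http://localhost/");
        Selenium.Start();
    }

    [TearDown]
    public void StopBrowser()
    {
        if (Selenium != null) Selenium.Stop();
    }
}
```

This doesn't solve concurrent sessions from within ReSharper, but it does let a developer flip browsers per console run (or per CCNet project, as in problem 1) with no rebuild.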