I see developers frequently developing against a solution containing all the projects (27) in a system. This causes problems: long builds (5 minutes), poor Visual Studio performance (such as IntelliSense latency), and it doesn't force developers to think about project dependencies (until they hit a circular reference issue).

Is it a good idea to break down a solution like this into smaller solutions that are compilable and testable independent of the "mother" solution? Are there any potential pitfalls with this approach?

+1  A: 

It certainly has its advantages and disadvantages. Breaking a solution into multiple projects helps you find what you are looking for easily, i.e. if you are looking for something about reporting, you go to the reporting project. It also allows big teams to split the work in such a way that nobody breaks someone else's code.

This raises problems of build duration

You can avoid that by only building the projects that you modified and letting the CI server do the entire build.
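As a concrete sketch (the project and solution names here are hypothetical, and this assumes MSBuild is on the PATH), you build only the project you touched locally and leave the full rebuild to the CI server:

```shell
# Locally: build just the project you changed (incremental, fast).
msbuild src\Reporting\Reporting.csproj /p:Configuration=Debug

# On the CI server: rebuild the whole master solution from scratch.
msbuild MotherSolution.sln /t:Rebuild /p:Configuration=Release
```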


We have a solution of ~250 projects.

It is okay after installing a patch for Visual Studio 2005 that makes it fast to deal with extremely large solutions [TODO add link].

We also have smaller solutions for teams, with a selection of their favorite projects, but every project added has to be added to the master solution too, and many people prefer to work with the master.

We remapped the F7 shortcut (Build) to build the startup project rather than the whole solution. That's better.

Solution folders seem to address the problem of finding things well.

Dependencies are only added to top-level projects (EXEs and DLLs). With static libraries, if A is a dependency of B and B is a dependency of C, A often does not need to be a dependency of C for things to compile and run correctly; this way, circular dependencies are acceptable to the compiler (although very bad for mental health).

I support having fewer libraries, even to the extent of having one library named "library". I see no significant advantage in optimizing the process memory footprint by bringing in "only what it needs", and the linker should do that anyway at the object-file level.

Pavel Radzivilovsky
Does separating a system into logical artifacts not assist with the separation of concerns and decoupling of functionality? In addition, say your solution contains multiple websites; running such a solution might spin up multiple unneeded web server instances, wasting valuable time. I'm just thinking aloud really...
Ben Aston
You should not run a solution. You should run one project.
Pavel Radzivilovsky
Separation is a noble goal, but code reuse is more important. How much code would you need to change in MS Word to turn it into MS Excel? Not that much. If you build distinct products in one company, they almost certainly deal with similar concepts, and even if not, 30% of the code should be reusable infrastructure.
Pavel Radzivilovsky
+3  A: 

Let me restate your questions:

Is it a good idea to break down a solution like this into smaller solutions

The MSDN article you linked makes a quite clear statement:

Important Unless you have very good reasons to use a multi-solution model, you should avoid this and adopt either a single solution model, or in larger systems, a partitioned single solution model. These are simpler to work with and offer a number of significant advantages over the multi-solution model, which are discussed in the following sections.

Moreover, the article recommends that you always have a single "master" solution file in your build process.

Are there any potential pitfalls with this approach?

You will have to deal with the following issues (which can actually be quite hard to address; same source as the quote above):

The multi-solution model suffers from the following disadvantages:

  • You are forced to use file references when you need to reference an assembly generated by a project in a separate solution. These (unlike project references) do not automatically set up build dependencies. This means that you must address the issue of solution build order within the system build script. While this can be managed, it adds extra complexity to the build process.
  • You are also forced to reference a specific configuration build of a DLL (for example, the Release or Debug version). Project references automatically manage this and reference the currently active configuration in Visual Studio .NET.
  • When you work with single solutions, you can get the latest code (perhaps in other projects) developed by other team members to perform local integration testing. You can confirm that nothing breaks before you check your code back into VSS ready for the next system build. In a multi-solution system this is much harder to do, because you can test your solution against other solutions only by using the results of the previous system build.
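To illustrate the first point: with multiple solutions, the build order must be scripted explicitly. A minimal sketch of such a master build script, as an MSBuild project file (the solution names and their ordering here are hypothetical), might look like:

```xml
<!-- Master.proj: builds the solutions in explicit dependency order,
     because cross-solution file references give MSBuild no ordering hints. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="BuildAll">
  <Target Name="BuildAll">
    <!-- Core must be built first so its DLLs exist for the others. -->
    <MSBuild Projects="Core.sln"      Properties="Configuration=Release" />
    <MSBuild Projects="Reporting.sln" Properties="Configuration=Release" />
    <MSBuild Projects="WebSite.sln"   Properties="Configuration=Release" />
  </Target>
</Project>
```

If a solution is later split or renamed, this ordering has to be maintained by hand, which is exactly the extra build-process complexity the article warns about.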
"You are forced to use file references when you need to reference an assembly generated by a project in a separate solution." - Please explain why you cannot add project references to the .csproj files in the usual way (avoiding the file-references issue).
Ben Aston
@Ben Aston: Simply because Visual Studio does not support it. Project references require the project to be contained in the same solution. If performance gets critical, you can still unload the projects that you don't need (right-click the project, then *Unload Project*).
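For reference, here is roughly what the two styles look like in a .csproj file (the paths, names, and GUID are hypothetical). A file reference pins one specific build output; a project reference lets Visual Studio track the active configuration and build order, but only within the same solution:

```xml
<!-- File reference: points at one specific build output (note the
     hard-coded Release path); no automatic build-order dependency. -->
<Reference Include="Core">
  <HintPath>..\..\Core\bin\Release\Core.dll</HintPath>
</Reference>

<!-- Project reference: Visual Studio picks the active configuration
     and sets up the build order, but only within the same solution. -->
<ProjectReference Include="..\Core\Core.csproj">
  <Project>{11111111-2222-3333-4444-555555555555}</Project>
  <Name>Core</Name>
</ProjectReference>
```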
+1  A: 

The only time I really see a need for multiple solutions is functional isolation. The required libs for a Windows service may be different from those for a web site. Each solution should be optimized to produce a single executable or web site, IMO. It enhances separation of concerns and makes it easy to rebuild a functional piece of the application without building everything else along with it.

+1  A: 

IntelliSense performance should be quite a bit better in VS2010 compared to VS2008. Also, why would you need to rebuild the whole solution all the time? That would only happen if you change something near the root of the dependency tree; otherwise you just build the project you're currently working on.

I've always found it helpful to have everything in one solution because I could navigate the whole code base easily.

Alex - Aotea Studios
Well, the company happens to run VS2008, unfortunately. That aside, I agree with your sentiment regarding having all the source to hand. But when the system reaches 25 projects, I would argue that having to have them all loaded at any given time for development indicates a poor separation of concerns?
Ben Aston
Having separate projects already separates concerns, so in my opinion it's only useful to have multiple solutions if you either have multiple unrelated products, or if VS just isn't performing well enough with all the projects in one solution.
Alex - Aotea Studios
+1  A: 

Visual Studio 2010 Ultimate has several tools to help you better understand and manage dependencies in existing code:

  • Dependency graphs and Architecture Explorer
  • Sequence diagrams
  • Layer diagrams and validation

For more info, see Exploring Existing Code. The Visualization and Modeling Feature Pack provides dependency graph support for C++ and C code.

Esther Fan - MSFT

Is it a good idea to break down a solution like this into smaller solutions that are compilable and testable independent of the "mother" solution? Are there any potential pitfalls with this approach?

Yes, it is a good idea. But first make sure to have as few VS projects/assemblies as possible, as explained here: Advices on partitioning code through .NET assemblies

When you have several VS solutions, a good idea is also to use the tool NDepend to make several running VS instances collaborate easily: Developing Application-Wide

Patrick Smacchia - NDepend dev