A: 

Are you able to set up an automated build to run during off hours instead of tying up developers' time?

Geoffrey Chetwood
Thanks Rich, but this is to run and test code during development. We already have automated builds
johnc
+5  A: 

Use distributed compilation. Xoreax IncrediBuild can cut compilation time down to a few minutes.

I've used it on a huge C/C++ solution that usually takes 5-6 hours to compile. IncrediBuild helped to reduce this time to 15 minutes.

aku
+5  A: 

I posted this response originally here: http://stackoverflow.com/questions/8440/visual-studio-optimizations#8473 You can find many other helpful hints on that page.

If you are using Visual Studio 2008, you can compile using the /MP flag to build a single project in parallel. I have read that this is also an undocumented feature in Visual Studio 2005, but I have never tried it myself.

You can build multiple projects in parallel by using the /M flag, though this is usually already set to the number of available cores on the machine. I believe this only applies to VC++.

Ed Swangren
I use /MP in VS2005 and it works great.
Nick
+3  A: 

Perhaps take some common functions and move them into libraries, so that the same sources are not compiled over and over again for multiple projects.

If you are worried about different versions of DLLs getting mixed up, use static libraries.

Adam Pierce
+4  A: 

If this is C or C++, and you're not using precompiled headers, you should be.

Kristopher Johnson
C# actually, but thanks for the answer
johnc
+1  A: 

If this is a web app, setting the batch attribute to true on the compilation element can help, depending on the scenario.

<compilation defaultLanguage="c#" debug="true" batch="true" />

You can find an overview here: http://weblogs.asp.net/bradleyb/archive/2005/12/06/432441.aspx

Daniel Auger
A: 

Kristopher Johnson:
If this is C or C++, and you're not using precompiled headers, you should be.

+1.

We did this on a large (50 or so project) codebase a few years ago and it cut down our build times by a huge amount.

I've never heard of a .NET project taking anywhere near that long to build, so I'm going to assume that you are in fact using C++.

Orion Edwards
I'm afraid C# can bloat, possibly not as badly as C++, but it's looking pretty chubby here
johnc
Oh dear :-( I did notice the compiler take a bit of a speed hit when going to C# 2 with generics, but I haven't run a 'huge' project on anything other than .NET 1.1, unfortunately
Orion Edwards
+2  A: 

Turn off VSS integration. You may not have a choice in using it, but DLLs get "accidentally" renamed all the time...

And definitely check your pre-compiled header settings. Bruce Dawson's guide is a bit old, but still very good - check it out: http://www.cygnus-software.com/papers/precompiledheaders.html

Shog9
Certainly we can turn off integration to VSS and drive it through the SourceSafe UI instead. Nice thought
johnc
+2  A: 

I have a project which has 120 or more exes, libs and dlls and takes a considerable time to build. I use a tree of batch files that call make files from one master batch file. I have had problems with odd things from incremental (or was it temperamental) headers in the past, so I avoid them now. I do a full build infrequently, and usually leave it to the end of the day while I go for a walk for an hour (so I can only guess it takes about half an hour). So I understand why that is unworkable for working and testing.

For working and testing I have another set of batch files for each app (or module or library) which also have all the debugging settings in place -- but these still call the same make files. I may switch DEBUG on or off from time to time, and also decide on builds or makes, or whether I also want to build the libs that the module may depend on, and so on.

The batch file also copies the completed result into one (or several) test folders. Depending on the settings, this completes in several seconds to a minute (as opposed to, say, half an hour).

I use a different IDE (Zeus) as I like to have control over things like .rc files, and actually prefer to compile from the command line, even though I am using MS compilers.

Happy to post an example of this batch file if anyone is interested.

David L Morris
+1  A: 

One cheaper alternative to Xoreax IB is the use of what I call uber-file builds. It's basically a .cpp file that has

#include "file1.cpp"
#include "file2.cpp"
....
#include "fileN.cpp"

Then you compile the uber units instead of the individual modules. We've seen compile times go from 10-15 minutes down to 1-2 minutes. You might have to experiment with how many #includes per uber file make sense; it depends on the project. Maybe you include 10 files, maybe 20.

You pay a cost, so beware:

  1. You can't right-click a file and say "compile...", as you have to exclude the individual cpp files from the build and include only the uber cpp files
  2. You have to be careful of static global variable conflicts (see the sketch below).
  3. When you add new modules, you have to keep the uber files up to date

It's kind of a pain, but for a project that is largely static in terms of new modules, the initial pain might be worth it. I've seen this method beat IB in some cases.
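To make point 2 concrete, here is a minimal sketch (the file and variable names are made up for illustration, not from the original answer) of how two files that compile fine on their own can collide once they are pulled into the same uber translation unit:

// file1.cpp (hypothetical) -- fine as its own translation unit
static int s_instanceCount = 0;
int MakeWidget() { return ++s_instanceCount; }

// file2.cpp (hypothetical) -- also fine on its own
static int s_instanceCount = 0;   // same file-scope name; no clash when compiled separately
int MakeGadget() { return ++s_instanceCount; }

// uber1.cpp -- the uber file merges both into ONE translation unit
#include "file1.cpp"
#include "file2.cpp"   // error: redefinition of 's_instanceCount'

Renaming the statics, or wrapping each file's internals in its own namespace, is the usual workaround.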

Mark
+21  A: 

The Chromium.org team listed several options for accelerating the build (at this point about half-way down the page):

In decreasing order of speedup:

  • Install Microsoft hotfix 935225.
  • Install Microsoft hotfix 947315.
  • Use a true multicore processor (i.e. an Intel Core 2 Duo, not a Pentium 4 HT).
  • Use 3 parallel builds. In Visual Studio 2005, you will find the option in Tools > Options... > Projects and Solutions > Build and Run > maximum number of parallel project builds.
  • Disable your anti-virus software for .ilk, .pdb, .cc, .h files and only check for viruses on modify. Disable scanning the directory where your sources reside. Don't do anything stupid.
  • Store and build the Chromium code on a second hard drive. It won't really speed up the build but at least your computer will stay responsive when you do gclient sync or a build.
  • Defragment your hard drive regularly.
  • Disable virtual memory.
Nate
By "disable virtual memory" I assume you mean disable swap; disabling virtual memory would require a rewrite of the entire OS ;p
Joseph Garvin
+1  A: 

You also may want to check for circular project references. It was an issue for me once.

That is:

Project A references Project B

Project B references Project C

Project C references Project A

Bramha Ghosh
+7  A: 

Turn off your antivirus. It adds ages to the compile time.

jdelator
... for the code/compile folder. Turning off AV protection as a blanket-coverage rule isn't a brilliant idea. :o)
Brett Rigby
You don't really need to turn it off; configuring it properly is usually enough. Add exceptions for the file types the compiler/linker works with. Some antivirus packages have these exceptions added by default, some don't.
Cornelius Scarabeus
+5  A: 

We had 80+ projects in our main solution, which took around 4 to 6 minutes to build depending on what kind of machine a developer was working on. We considered that to be way too long: for every single test it really eats away at your FTEs.

So how do you get faster build times? As you already seem to know, it is the number of projects that really hurts the build time. Of course we did not want to get rid of all our projects and simply throw all source files into one. But we had some projects that we could combine nevertheless: as every "repository project" in the solution had its own unit-test project, we simply combined all the unit-test projects into one global unit-test project. That cut the number of projects down by about 12 and somehow saved 40% of the time needed to build the entire solution.

We are thinking about another solution though.

Have you also tried to set up a new (second) solution with a new project? This second solution should simply incorporate all files using solution folders. You might be surprised to see the build time of that new solution-with-just-one-project.

However, working with two different solutions will take some careful consideration. Developers might be inclined to actually work in the second solution and completely neglect the first. As the first solution with the 70+ projects will be the solution that takes care of your object hierarchy, this should be the solution where your build server runs all your unit tests. So the server for Continuous Integration must use the first project/solution. You have to maintain your object hierarchy, right?

The second solution with just one project (which will build much faster) will then be the project where testing and debugging is done by all developers. You have to make sure they keep an eye on the build server though! If anything breaks, it MUST be fixed.

Hace
A: 

I am running a solution with over 70 projects (around 900 classes) and it compiles in under 1 minute. I have a dual core and 2 GB of RAM... your solution must be horribly big if it takes you 20 minutes... Are you sure it's Visual Studio and not something else taking all the power of your machine?

Daok
Only 900 classes for 70 projects sounds like quite a small solution to me :)
Eyvind
+5  A: 

Make sure your references are Project references, and not directly to the DLLs in the library output directories.

Also, have these set to not copy locally except where absolutely necessary (the master EXE project).

GeekyMonkey
+1  A: 

Disable file system indexing on your source directories (specifically the obj directories if you want your source searchable)

GeekyMonkey
+10  A: 

I had a similar issue on a solution with 21 projects and 1/2 million LOC. The biggest difference was getting faster hard drives. From the performance monitor, the 'Avg. Disk Queue' would jump up significantly on the laptop, indicating the hard drive was the bottleneck.

Here's some data for total rebuild times...

1) Laptop, Core 2 Duo 2 GHz, 5400 RPM drive (not sure of the cache; it was a standard Dell Inspiron).

Rebuild Time = 112 seconds.

2) Desktop (standard issue), Core 2 Duo 2.3 GHz, single 7200 RPM drive, 8 MB cache.

Rebuild Time = 72 seconds.

3) Desktop, Core 2 Duo 3 GHz, single 10,000 RPM WD Raptor.

Rebuild Time = 39 seconds.

The impact of the 10,000 RPM drive cannot be overstated. Builds were significantly quicker, plus everything else, like displaying documentation and using the file explorer, was noticeably quicker. It was a big productivity boost from speeding up the code-build-run cycle.

Given what companies spend on developer salaries, it is insane how much they can waste by equipping them with the same PCs the receptionist uses.

How would an SSD compare to the Raptor? Even faster, I guess.
PoweRoy
Yup. My Laptop with an Intel X25M is faster in all aspects than my desktop with a WD Raptor.
CAD bloke
It might sound surprising, but it currently isn't worth investing in a 10,000 RPM drive. The reason is that the better 7200 RPM drives are faster at the outer rim. So what you should do is create a small first partition: it sits at the outer rim and will be faster than an ordinary 7200 RPM drive, and you still have space for a second, large partition to store things on.
Cornelius Scarabeus
A: 

I'm sure there's a problem with VS2008, because the only thing I've done is install VS2008 to upgrade my project, which was created with VS2005. I've only got 2 projects in my solution; it isn't big. Compilation with VS2005: 30 seconds. Compilation with VS2008: 5 minutes.

There must be another issue there; 2 projects should build fine on a decent machine
johnc
+1  A: 

We have our source code divided into several solutions. This is good for cutting down the compilation time but creates some problems if you happen to break files used in multiple solutions.

The problem I have is that linking is horribly slow in VS2008. Compilation doesn't seem to be a big issue (using IncrediBuild), but oh boy, the linking just takes ages. I mean, what on earth is taking so long when all it's supposed to be doing is copying bits from a few files into one DLL?

Just now I finished a compilation of 170 files. Compiling the code took 6 mins and linking took another 6 mins.

If anyone knows what could be wrong, I would greatly appreciate hearing it.

despaired dev
+1  A: 

If it's a C++ project, then you should be using precompiled headers. This makes a massive difference in compile times. I'm not sure what cl.exe is really doing when precompiled headers are not used; it seems to look for lots of STL headers in all the wrong places before finally going to the correct location. This adds whole seconds to every single .cpp file being compiled. I'm not sure if this is a cl.exe bug or some sort of STL problem in VS2008.
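For anyone who hasn't set this up before, here is a minimal sketch of the conventional MSVC precompiled-header layout (the file and function names are just the usual convention, not taken from this project): the expensive, rarely-changing includes go into one header, which is compiled once with /Yc and then reused by every other .cpp with /Yu.

// stdafx.h -- precompiled header: expensive, rarely-changing includes only
#pragma once
#include <windows.h>
#include <string>
#include <vector>
#include <map>

// stdafx.cpp -- compiled with /Yc"stdafx.h" to create the .pch once
#include "stdafx.h"

// widget.cpp -- every other .cpp is compiled with /Yu"stdafx.h"
#include "stdafx.h"   // must be the very first include
#include "widget.h"

void DrawWidget() { /* ... */ }

The payoff comes from the headers in stdafx.h being parsed once per project instead of once per .cpp file.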

Chris O
+2  A: 

Looking at the machine that you're building on, is it optimally configured?

We just got our build time for our largest C++ enterprise-scale product down from 19 hours to 16 minutes by ensuring the right SATA filter driver was installed.

Subtle.

JBRWilkinson
Drive speed is certainly a contributing factor
johnc
Not optimally configured. 2 GB of RAM is way too little to start with.
TomTom
+1  A: 

I notice this question is ages old, but the topic is still of interest today. The same problem bit me lately, and the two things that improved build performance the most were (1) use a dedicated (and fast) disk for compiling and (2) use the same output folder for all projects, and set CopyLocal to False on project references.

Thomas K
+1  A: 

There's an undocumented /MP switch in Visual Studio 2005 (see http://lahsiv.net/blog/?p=40) which enables parallel compilation on a per-file basis rather than a per-project basis. This may speed up compiling the last project in a build, or help when you compile just one project.

Pavel Radzivilovsky
+1  A: 

Some analysis tools:

Tools -> Options -> VC++ Project Settings -> Build Timing = Yes will tell you the build time for every vcproj.

Add the /Bt switch to the compiler command line to see how long every CPP file took.

Use /showIncludes to catch nested includes (header files that include other header files), and see which files could save a lot of IO by using forward declarations.

This will help you optimize compiler performance by eliminating dependencies and performance hogs.
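As a rough sketch of the forward-declaration idea (the class and file names here are made up for illustration): if a header only ever refers to a type through a pointer or reference, a forward declaration is enough, and the expensive include can move into the .cpp file, so fewer translation units pay for it.

// widget.h -- before, this header did #include "BigExpensiveEngine.h"
class Engine;                        // forward declaration is sufficient here

class Widget {
public:
    explicit Widget(Engine* engine) : engine_(engine) {}
    void Render();
private:
    Engine* engine_;                 // pointers/references don't need the full type
};

// widget.cpp -- only this translation unit pays for the heavy header
#include "widget.h"
#include "BigExpensiveEngine.h"      // full definition needed to call into Engine

void Widget::Render() { engine_->Draw(); }

Running /showIncludes before and after a change like this makes it easy to confirm that the heavy header has actually dropped out of the other translation units.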

Pavel Radzivilovsky
+2  A: 

Before spending money on faster hard drives, try building your project entirely on a RAM disk (assuming you have the RAM to spare). You can find various free RAM disk drivers on the net. You won't find any physical drive, including SSDs, that is faster than a RAM disk.

In my case, a project that took 5 minutes to build on a 6-core i7 on a 7200 RPM SATA drive with IncrediBuild was reduced by only about 15 seconds by using a RAM disk. Considering the need to recopy to permanent storage and the potential for lost work, 15 seconds is not enough incentive to use a RAM disk and probably not much incentive to spend several hundred dollars on a high-RPM or SSD drive.

The small gain may indicate that the build was CPU bound or that Windows file caching was rather effective, but since both tests were done from a state where the files weren't cached, I lean heavily towards CPU-bound compiles.

Depending on the actual code you're compiling your mileage may vary -- so don't hesitate to test.

Jonathan
Nice idea, thanks
johnc
+1  A: 

When choosing a CPU: L1 cache size seems to have a huge impact on compilation time. Also, it is usually better to have 2 fast cores than 4 slow ones. Visual Studio doesn't use the extra cores very effectively. (I base this on my experience with the C++ compiler, but it is probably also true for the C# one.)

Cornelius Scarabeus
+1  A: 

I'm also now convinced there is a problem with VS2008. I'm running it on a dual-core Intel laptop with 3 GB of RAM, with anti-virus switched off. Compiling the solution is often quite slick, but if I have been debugging, a subsequent recompile will often slow down to a crawl. It is clear from the continuously lit main disk light that there is a disk I/O bottleneck (you can hear it, too). If I cancel the build and shut down VS, the disk activity stops. Restart VS, reload the solution and then rebuild, and it is much faster. Until the next time.

My thought is that this is a memory paging issue - VS simply runs out of memory and the OS starts paging to try to make space, but VS is demanding more than paging can deliver, so it slows down to a crawl. I can't think of any other explanation.

VS definitely is not a RAD tool, is it?

I had that problem with VS2005 too - definitely paging
johnc