I am using Visual Studio, and it seems that getting rid of unused references and using statements speeds up my build time on larger projects. Are there other known ways of speeding up build time? What about for other languages and build environments?

What is typically the bottleneck during build/compile? Disk, CPU, Memory?

What are some good references for distributed builds?

+2  A: 

Fixing your compiler warnings should help quite a bit.

Robert Greiner
Only if he uses the "cl" command line tool. Writing stdout into a file is not slow - even on Windows.
Lothar
I'm just curious whether the slowdown is simply from writing the warnings to a log file - so if you have few warnings, or there is no I/O bottleneck, it won't really matter?
esac
Any link to back this up? Our program generates 5000+ compiler warnings and I have been looking for a good excuse to remedy that for quite some time.
Chris Shouts
+1  A: 

Visual Studio supports parallel builds, which can help, but the true bottleneck is disk I/O.

In C, for instance, if you generate LST files your compile will take ages.
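
For example, a minimal sketch assuming the MSVC command-line toolchain (the solution and file names here are hypothetical):

REM Compile several source files concurrently with cl's /MP switch
cl /MP /c a.cpp b.cpp c.cpp

REM Build the projects in a solution in parallel with MSBuild's /m switch
msbuild MySolution.sln /m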

sylvanaar
What's an LST file?
Joseph Garvin
A listing file. It's an output of the assembly instructions generated for the given source code. Often it also includes the source text inline with the assembly, so you can see which instructions correspond to which lines of source code.
sylvanaar
A: 

Don't compile with debug turned on.
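
For MSVC, a rough sketch of what "debug turned on" usually means at the command line; whether skipping it helps depends on the project:

REM /Zi generates debug information, writing a .pdb as part of the build
cl /Zi /c foo.cpp

REM Omitting /Zi skips the debug-info work
cl /c foo.cpp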

Taylor Leese
This is a bad suggestion. Debug builds are often necessary. In fact, most of your builds will probably end up being debug builds because that's the mode you're most likely using as you develop.
Joseph Garvin
I didn't say it isn't useful to compile with debug turned on. The question asked how to improve compile time only.
Taylor Leese
+1  A: 
Lothar
What does using smart pointers have to do with it?
Joseph Garvin
+3  A: 

Buy a faster computer

ParmesanCodice
This isn't always the case. I have a project at work that I just upgraded from last-gen 2.83GHz processors to 3.2GHz processors, both quad core. I doubled the amount of memory from 8GB to 16GB. I switched from RAID0 7200RPM to RAID0 15K SAS, and I still do not see an improvement in build time. There seem to be other factors to take into consideration.
esac
This will only help a bit. Distribution (see my answer) will give you many more cycles. We went from 3GHz to around 900GHz when going distributed. :) Regards, Sebastiaan
Sebastiaan Megens
I guess an upgrade from 1 core to 8 cores would improve performance a lot - maybe disk I/O would become the bottleneck in that case.
lz_prgmr
+1  A: 

At my previous job we had big problems with compilation time, and one of the strategies we used was the Envelope pattern.

Basically, it attempts to minimize the amount of code the preprocessor copies out of headers by keeping header size down. It does this by moving anything that isn't public into a private friend class. Here's an example.

foo.h:

class FooPrivate;
class Foo
{
public:
   Foo();
   virtual ~Foo();
   void bar();
private:
   friend class FooPrivate;
   FooPrivate *foo;
};

foo.cpp:

#include "foo.h"

// FooPrivate must be fully defined before Foo's constructor can new it up.
class FooPrivate
{
    int privData;
    char *morePrivData;
};

Foo::Foo()
{
   foo = new FooPrivate();
}

Foo::~Foo()
{
   delete foo;
}

The more header files you do this with, the more the savings add up. It really does help your compilation time.

It does make things difficult to debug in VC6, though, as I learned the hard way. There's a reason it's a previous job.

ReaperUnreal
If you aren't happy with what this solution did to your maintenance cycle, then why suggest it at all?
Will Bickford
As a warning. I'm suggesting it so that people know it exists, and avoid it at all costs. It does work, it does reduce compile time, but like I said, debugging is nearly impossible on VC6.
ReaperUnreal
+3  A: 

The biggest improvement we made for our large C++ project was from distributing our builds. A couple of years ago, a full build would take about half an hour, while it's now about three minutes, of which one third is link time.

We're using a proprietary build system, but IncrediBuild is working fine for a lot of people (we couldn't get it to work reliably).

Hope this helps.

Regards,

Sebastiaan

Sebastiaan Megens
+1 distributed builds = $$
Will Bickford
+1  A: 

If you're using a lot of files and a lot of templated code (STL / Boost / etc.), then Bulk or Unity builds should cut down on build and link times.

The idea of Bulk builds is to break your project down into subsections and #include all the CPP files in each subsection into a single file. Unity builds take this further: a single CPP file is compiled, and it includes all the other CPP files (see the sketch after the list below).

The reason this is often faster is:

1) Templates are only evaluated once per Bulk File

2) Include files are opened / processed only once per Bulk File (assuming there is a proper #ifndef FILE__FILENAME__H / #define FILE__FILENAME__H / #endif wrapper in the include file). Reducing total I/O is a good thing for compile times.

3) The linker has much less data to work with (Single Unity OBJ file or several Bulk OBJ files) and is less likely to page to virtual memory.
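
As a minimal sketch of the idea (file names are hypothetical), a Unity build file is simply one CPP file that includes all the others, and it is the only file handed to the compiler:

// unity.cpp - the single translation unit that actually gets compiled.
// Headers and templates pulled in here are processed once, rather than
// once per CPP file.
#include "player.cpp"
#include "physics.cpp"
#include "renderer.cpp"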

EDIT: Adding a couple of links here on Stack Overflow about Unity builds.

Adisak
+2  A: 

Be wary of broad-sweeping "consider this directory and all subdirectories for header inclusion" type settings in your project. This forces the compiler to iterate through every directory in the list until it finds each requested header, and that cost is multiplied by however many headers your project includes.
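
A sketch of the alternative, with hypothetical paths: point the compiler at a single include root and qualify the paths in the #include directives, instead of listing every subdirectory in the search path.

// With one include root (e.g. /I src/include), qualify the path:
#include "audio/mixer.h"  // resolved directly, no directory-by-directory scan

// ...rather than #include "mixer.h" plus a search path that lists
// src/include/audio, src/include/video, and so on.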

fbrereto
I agree it matters, but is this really "very expensive"?
lz_prgmr