I recently had cause to work with some Visual Studio C++ projects with the usual Debug and Release configurations, but also 'Release All' and 'Debug All', which I had never seen before.

It turns out the author of the projects maintains a single ALL.cpp which #includes every other .cpp file. The *All configurations build just this one ALL.cpp file; the regular configurations, conversely, build all the individual .cpp files and exclude ALL.cpp.
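For concreteness, here is a minimal sketch of what such an ALL.cpp amounts to (the file names are invented for illustration):

```cpp
// ALL.cpp -- the only file compiled by the *All configurations.
// Each #include pastes an entire implementation file into this
// single translation unit. (Hypothetical file names.)
#include "Widget.cpp"
#include "Renderer.cpp"
#include "Network.cpp"
// ...and so on for every .cpp in the project.
```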

I just wondered if this was a common practice? What benefits does it bring? (My first reaction was that it smelled bad.)

What kinds of pitfalls are you likely to encounter with this? One I can think of: if you have anonymous namespaces in your .cpp files, they're no longer 'private' to that file but are now visible in the other files as well?

All the projects build DLLs, so having data in anonymous namespaces wouldn't be a good idea, right? But functions would be OK?

Cheers.

A: 

It is a very bad smell indeed but at least the author made it a different configuration.

If everything is in a single file, it means everything needs to be compiled for every change. On projects that can take up to an hour to build, I think we can agree this is kind of a no-no.

This can also cause all kinds of problems: template instantiations, supposedly private types and functions, and file-static variables all become visible across files, plus a long list of other nightmarish effects.

At least it's in a different configuration, but that also means you will have to maintain and test this configuration.

EDIT: Didn't read the question correctly and made changes accordingly.

Coincoin
Thanks. Unfortunately I have no authority to change the existing ones. But for new projects we create, I am definitely avoiding the *All configurations.
Steve Folly
It's a fairly large system we're working on with many components, and I think the intent is that you build the *All configurations first off to optimise build time, then for development you use the standard configurations for components you're working on. Still smelly though!
Steve Folly
Haha, why in a day? Does that have to do with SO reputation points or so?
No, as I had no rep on that answer. Despite that, I finally decided to keep the answer here; it was about letting other people access the answer if they were interested.
Coincoin
A: 

Definitely pathological; I can only guess at the reason why anyone might want to do that (if you, on the other hand, can ask them directly, you should). Normally in C++ you want the opposite: keep not only implementation files but also headers well separated. (A common trap in C++ projects is "#include spaghetti", with every header file depending on every other.) Perhaps it was to stress-test the compiler?

Morendil
+12  A: 

It's referred to by some (and google-able) as a "Unity Build". It links insanely fast and compiles reasonably quickly as well. It's great for builds you don't need to iterate on, like a release build from a central server, but it isn't necessarily for incremental building.

And it's a PITA to maintain.

EDIT: here's the first google link for more info: http://rant.blackapache.net/2007/12/10/the-magic-of-unity-builds/

The thing that makes it fast is that the compiler only needs to read everything in once, compile it, then link, rather than doing that for every .cpp file. It's slightly non-obvious until you realize that the compile/link cycle is usually I/O-bound. That's why it's a win.

MSN
It's not just because of I/O. Each single .cpp file often includes many header files. If you compile them separately, then you compile the code in the header files multiple times -- once for each .o file. If you use "Unity Build" then these header files and everything else is compiled only once.
Er... that falls under I/O. But yes, that's more precise.
MSN
+3  A: 

I wonder if that ALL.cpp is attempting to put the entire project within a single compilation unit, to improve the compiler's ability to optimize the program for size?

Normally some optimizations, such as removal of duplicate code and inlining, are only performed within a single compilation unit.

That said, I seem to remember that recent compilers (Microsoft's, Intel's, but I don't think this includes GCC) can do this optimization across multiple compilation units, so I suspect that this 'trick' is unnecessary.

Still, it would be interesting to see whether there is indeed any difference.

Arafangion
Visual C++ can do whole program optimization with the /LTCG switch (link-time code generation)
Roger Lipscombe
+2  A: 

There is a short video presenting the build-time difference of a unity build.

Comptrol
+2  A: 

An introduction on "Unity Builds" along with benefits, disadvantages and a complete CMake integration can be found at cheind.wordpress.com.

hth, Christoph

Christoph Heindl