I have inherited a C/C++ code base, and in a number of .cpp files the #include directives are wrapped in #ifndefs that test the header's own internal include-guard #define.

For example:

#ifndef _INC_WINDOWS
#include <windows.h>
#endif

and windows.h itself looks like:

#ifndef _INC_WINDOWS
#define _INC_WINDOWS
...header file stuff....
#endif // _INC_WINDOWS

I assume this was done to speed up preprocessing/compilation of the code.

I think it's ugly and a premature optimisation, but as the project has a 5-minute build time from clean, I don't want to make things worse.

So does the practice add any value? Does it speed things up much? Or is it kosher to clean them up?

Update: the compiler is MSVC (VS2005) and the platform is Win32/WinCE.

A: 

If a file is #included, the whole file has to be read, and even the overhead of opening and closing the file can be significant. By putting the guard directives around the #include statement, the file never has to be opened at all. As always with these questions, the correct answer is: try taking out the #ifndef/#endif guards around the #include directives and get your stopwatch out...

Daniel Earwicker
That's the guts of my question: has anybody else timed the difference this makes on a modern code base, with a modern toolset?
Simeon Pilgrim
I know of one C++ codebase where doing this took a few minutes off the build time (total build time: two hours). But switching on precompiled headers made far more difference. And "modern" probably doesn't signify anything useful here! Every C++ project's build system is slightly different, and almost all are based on stone-age concepts. No "modern" development system bothers its users with this kind of activity.
Daniel Earwicker
Good point on "modern". I really meant: not stories from 10 years ago about that Pentium 3 PC running VS6.
Simeon Pilgrim
+6  A: 

It's worth knowing that some implementations have #pragma once and/or an include-guard detection optimisation; in both cases the preprocessor automatically skips opening, reading, and processing a header file it has already included.
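As a sketch of what that looks like in practice, a header can combine both mechanisms: #pragma once for compilers that support it, and a classic guard for portability (which also triggers the guard-detection optimisation on MSVC and GCC). The file and macro names here are hypothetical:

```cpp
// myheader.h (hypothetical) -- both idioms together.
// #pragma once is non-standard but widely supported; the #ifndef
// guard is the portable fallback, and compilers that detect the
// guard pattern will skip re-reading this file entirely.
#pragma once
#ifndef MYPROJECT_MYHEADER_H
#define MYPROJECT_MYHEADER_H

void do_something();  // header contents go here

#endif // MYPROJECT_MYHEADER_H
```

With either mechanism in the header itself, callers can write a plain `#include "myheader.h"` with no external guard.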

So on those compilers, including MSVC and GCC, this "optimisation" is pointless, and handling multiple inclusion should be the header file's responsibility. However, it's possible that this is an optimisation for compilers where #include is very inefficient. Is the code pathologically portable, and does <windows.h> refer not to the well-known Win32 header file but to some user-defined header of the same name?

It's also possible that the header files don't have multiple-include guards, and that this check is actually essential. In that case I'd suggest changing the headers. The whole point of headers is to be a substitute for copy-and-pasting code about the place: it shouldn't take three lines to include a header.

Edit:

Since you say you only care about MSVC, I would either:

  • Do a mass edit and time the build, just to make sure the previous programmer didn't know something I don't. Maybe add #pragma once if it helps. Use precompiled headers if all this really is slowing things down.
  • Ignore it, but don't use the guards for new files or for new #includes added to old files.

Which one depends on whether I had more important things to worry about. This is a classic Friday-afternoon job; I wouldn't spend potentially productive time on it ;-)

Steve Jessop
The compiler is MSVC and the platform is Win32/WinCE, so that sounds like a vote to refactor to me.
Simeon Pilgrim