views: 409

answers: 8
I've just started learning Qt, using their tutorial. I'm currently on tutorial 7, where we've made a new LCDRange class. The implementation of LCDRange (the .cpp file) uses the Qt QSlider class, so in the .cpp file is

#include <QSlider>

but in the header is a forward declaration:

class QSlider;

According to Qt,

This is another classic trick, but one that's much less often used. Because we don't need QSlider in the interface of the class, only in the implementation, we use a forward declaration of the class in the header file and include the header file for QSlider in the .cpp file.

This makes the compilation of big projects much faster, because the compiler usually spends most of its time parsing header files, not the actual source code. This trick alone can often speed up compilations by a factor of two or more.

Is this worth doing? It seems to make sense, but it's one more thing to keep track of - I feel it would be much simpler just to include everything in the header file.
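(For concreteness, here is a hedged sketch of how the two pieces from the tutorial fit together. The member name is made up, and a tiny stand-in class replaces the real #include <QSlider> purely so the sketch is self-contained and compilable as one unit.)

```cpp
// lcdrange.h -- the header only stores a pointer, so the compiler
// needs just the *name* QSlider, not its full definition.
class QSlider;                  // forward declaration

class LCDRange {
public:
    LCDRange();
    int value() const;
private:
    QSlider* slider;            // pointer to an incomplete type is fine
};

// lcdrange.cpp -- the implementation calls methods on QSlider, so it
// needs the full definition. This tiny stand-in replaces the real
// #include <QSlider> so the sketch compiles on its own.
class QSlider {
public:
    int value() const { return 0; }   // hypothetical stand-in behavior
};

LCDRange::LCDRange() : slider(new QSlider) {}
int LCDRange::value() const { return slider->value(); }
```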

+2  A: 

Yes, it sure does help. Another thing to add to your repertoire is precompiled headers if you are worried about compilation time.

Look up FAQ 39.12 and 39.13

dirkgently
Isn't that a VS thing?
Skilldrick
http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html -- should help.
dirkgently
Thanks! The only thing up till now that I knew about precompiled headers was that an SDL tutorial I followed started by telling me to turn them off in Visual Studio...
Skilldrick
Precompiled headers are very useful for external library headers (i.e. headers that are stable). I wouldn't turn them off blindly. Still, they can only partially fix the problems of a bad include hierarchy.
peterchen
+1  A: 

The standard library does this for some of the iostream classes in the standard header <iosfwd>. However, it is not a generally applicable technique - notice there are no such headers for the other standard library types, and it should not (IMHO) be your default approach to designing class hierarchies.

Although this seems to be a favourite "optimisation" among programmers, I suspect that, as with most optimisations, few of them have actually timed the build of their projects both with and without such declarations. My limited experiments in this area indicate that the use of pre-compiled headers in modern compilers makes it unnecessary.

anon
+1: for "this should not be the default approach", and "measure".
peterchen
Agreed on the "optimisation" comment. With distributed builds, I find that the (parallel) compile phase tends to be shorter than the (single-threaded) link phase, especially when using whole program optimization. Forward declarations may decrease incremental build time, but that's harder to measure, and doesn't affect my team's nightly build.
bk1e
+6  A: 

Absolutely. The C/C++ build model is ...ahem... an anachronism, to say the least; for large projects it becomes a serious PITA.

As Neil notes correctly, this should not be the default approach for your class design, don't go out of your way unless you really need to.

Breaking Circular include references is the one reason where you have to use forward declarations.

// a.h
#include "b.h"
struct A { B * b; };

// b.h
#include "a.h"  // circular include reference
struct B { A * a; };

// Solution: break the circular reference by forward-declaring B or A
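Applying that solution looks like this; a minimal sketch, written as a single unit so it compiles (in real code the two halves would stay in a.h and b.h):

```cpp
// a.h -- forward-declare B instead of including b.h;
// a pointer member only needs the name B to be known.
struct B;
struct A { B* b; };

// b.h -- may still include a.h; the cycle is broken.
struct B { A* a; };
```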

Reducing rebuild time - Imagine the following code

// foo.h
#include <QSlider>
class Foo
{
   QSlider * someSlider;
};

Now every .cpp file that directly or indirectly pulls in foo.h also pulls in QSlider.h and all of its dependencies. That may be hundreds of .cpp files! (Precompiled headers help a bit - and sometimes a lot - but they turn disk/CPU pressure into memory/disk pressure, and thus soon hit the "next" limit.)

If the header requires only a reference declaration, this dependency can often be limited to a few files, e.g. foo.cpp.

Reducing incremental build time - The effect is even more pronounced, when dealing with your own (rather than stable library) headers. Imagine you have

// bar.h
#include "foo.h"
class Bar 
{
   Foo * kungFoo;
   // ...
};

Now if most of your .cpp's need to pull in bar.h, they also indirectly pull in foo.h. Thus, every change of foo.h triggers build of all these .cpp files (which might not even need to know Foo!). If bar.h uses a forward declaration for Foo instead, the dependency on foo.h is limited to bar.cpp:

// bar.h
class Foo;
class Bar 
{
   Foo * kungFoo;
   // ...
};

// bar.cpp
#include "bar.h"
#include "foo.h"
// ...

It is so common that it is a pattern - the PIMPL pattern. Its use is twofold: it provides true interface/implementation isolation, and it reduces build dependencies. In practice, I'd weigh those two benefits 50:50.
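A minimal PIMPL sketch, written as one unit so it compiles (names are made up; std::unique_ptr is used here as the holder, assuming C++11 - the key point either way is that the destructor is defined where Impl is complete):

```cpp
#include <memory>

// widget.h -- clients see only this. Impl is forward-declared, so
// none of the implementation's headers leak into the interface.
class Widget {
public:
    Widget();
    ~Widget();                 // declared here, defined where Impl is complete
    int value() const;
private:
    struct Impl;               // forward declaration of the private part
    std::unique_ptr<Impl> pimpl;
};

// widget.cpp -- the only translation unit that knows Impl's layout.
struct Widget::Impl {
    int data = 7;              // hypothetical private state
};

Widget::Widget() : pimpl(new Impl) {}
Widget::~Widget() = default;   // Impl is complete here, so deletion is safe
int Widget::value() const { return pimpl->data; }
```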

Since the header can then hold only a pointer or reference to the dependent type, not a direct instantiation, the cases where forward declarations can be applied are limited. If you do it explicitly, it is common to use a utility class (such as boost::scoped_ptr) for that.

Is build time worth it? Definitely, I'd say. In the worst case, build time grows exponentially with the number of files in the project. Other techniques - like faster machines and parallel builds - can provide only percentage gains.

The faster the build, the more often developers test what they did, the more often unit tests run, the faster build breaks can be found and fixed, and the less often developers end up procrastinating.

In practice, managing your build time is essential on a large project (say, hundreds of source files), and it still makes a "comfort difference" on small projects. Also, adding improvements after the fact is often an exercise in patience, as a single fix might shave only seconds (or less) off a 40-minute build.

peterchen
If what you said regarding build times increasing exponentially were true, the project I'm working on (which contains about 100 source files) should take about 2 ^ 100 seconds to compile from scratch - instead it takes about a minute.
anon
@Neil: For each file you add, the build time increases proportional to the size of the project (because to compile the new file, you have to process some number of headers proportional to the size of the project). That sounds exponential to me.
Jay Conrod
clarified: that is of course the worst-case scenario (code-intensive, strongly interdependent headers). In my experience, build speed spirals down once you start hitting the performance limits of the machine, and all you can do is make tradeoffs between bottlenecks. --- I'd expect "passive neglect" will normally lead to (only) polynomial growth, but wrong expectations or unsuitable coding standards can lead you down the exponential drainhole.
peterchen
@jay - that sounds like linear to me
anon
The build time per translation unit becomes O(N), so the build time for the entire program becomes O(N*N) - polynomial, not exponential. Passive neglect versus worst-case is probably a constant factor 0<f<1, so no big-O difference.
MSalters
+6  A: 

I use it all the time. My rule is: if it doesn't need the header, then I put a forward declaration ("use headers if you must, use forward declarations if you can"). The only thing that sucks is that I need to know how the class was declared (struct/class, and if it is a template I need its parameters, ...). But in the vast majority of cases, it just comes down to "class Slider;" or something along those lines. If something requires more hassle to be declared, one can always provide a special forward-declaration header, like the Standard does with <iosfwd>.

Not including the header file will not only reduce compile time but also will avoid polluting the namespace. Files including the header will thank you for including as little as possible so they can keep using a clean environment.

This is the rough plan:

/* --- --- --- Y.hpp */
class X;
class Y {
    X *x;
};

/* --- --- --- Y.cpp */
#include "x.hpp"
#include "y.hpp"

...

There are smart pointers that are specifically designed to work with pointers to incomplete types. One very well known one is boost::shared_ptr.
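A sketch of why that works: shared_ptr captures its deleter at construction, so code that merely holds or destroys the pointer never needs the complete type. (std::shared_ptr is used here; boost::shared_ptr behaves the same way. The names are made up, and both "files" are merged so the sketch compiles on its own.)

```cpp
#include <memory>

// secret.h -- clients get only a forward declaration and a factory.
class Secret;
std::shared_ptr<Secret> makeSecret();

// secret.cpp -- the one place where Secret is complete; the correct
// deleter is baked into the shared_ptr constructed here.
class Secret {
public:
    int answer() const { return 42; }
};

std::shared_ptr<Secret> makeSecret() {
    return std::shared_ptr<Secret>(new Secret);
}
```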

Johannes Schaub - litb
I always followed this strategy, and it does help in reducing the compilation time.
Naveen
+1  A: 

In general, no.

I used to forward declare as much as I could, but no longer.

As far as Qt is concerned, you may notice that there is a <QtGui> include file that will pull in all the GUI Widgets. Also, there is a <QtCore>, <QtWebKit>, <QtNetwork> etc. There's a header file for each module. It seems the Qt team believes this is the preferred method also. They say so in their module documentation.

True, the compilation time may be increased. But in my experience it's just not that much. And if it were, using precompiled headers would be the next step.

Mark Beckwith
I think that's a different thing. It often makes sense to group headers together for convenience, like Boost does ("a few big headers include many small headers") and, as you show, Qt does too. But it does not make much sense to include thousands of lines when all you need is one single line of code that makes a type known as a class-type.
Johannes Schaub - litb
Like, if you look into QtGui/qregion.h, you will see it does not include the whole of qvector.h just because it needs QVector in the return type of one member function. It uses a forward declaration "template <class T> class QVector;" instead.
Johannes Schaub - litb
Ugh. I knew I was going to get voted down for this. It seems SO is all about getting the most popular answer, not the most correct.
Mark Beckwith
@mark - it's strange the notions people get emotionally attached to, isn't it? Motherhood and apple pie I can kind of understand, but forward declarations????
anon
@neil - haha yeah.
Mark Beckwith
A: 

Forward declarations are very useful for breaking circular dependencies, and sometimes they may be OK to use with your own code, but using them with library code may break the program on another platform or with other versions of the library (this can happen even with your own code if you're not careful enough). IMHO not worth it.

cube
+1  A: 

There is a HUGE difference in compile times for larger projects, even ones with carefully managed dependencies. You'd better get into the habit of forward declaring and keeping as much as possible out of header files, because at a lot of software shops that use C++ it's required. The reason you don't see it all that much in the standard header files is that those make heavy use of templates, at which point forward declaring becomes hard. For MSVC you can use /P to look at how the preprocessed file appears before actual compilation. If you haven't done any forward declaration in your project, it would probably be an interesting experience to see how much extra processing needs to be done.

A: 

When you write ...

#include "foo.h"

... you thereby instruct a conventional build system: "Any time there is any change whatsoever in the library file foo.h, discard this compilation unit and rebuild it, even if all that happened to foo.h was the addition of a comment, or the addition of a comment to some file which foo.h includes; even if all that happened was some ultra-fastidious colleague re-balanced the curly braces; even if nothing happened other than a pressured colleague checking in foo.h unchanged and inadvertently changing its timestamp."

Why would you want to issue such a command? Library headers, because in general they have more human readers than application headers, have a special vulnerability to changes that have no impact on the binary, such as improved documentation of functions and arguments or the bump of a version number or copyright date.

The C++ rules allow a namespace to be re-opened at any point in a compilation unit (unlike a struct or class) precisely in order to support forward declaration.
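A minimal sketch of that rule, with made-up names: the namespace is opened once for the forward declaration and re-opened later for the full definition.

```cpp
// Re-opening a namespace to forward-declare a class; note that a
// one-line form like "class gui::Slider;" is not legal C++.
namespace gui {
    class Slider;              // forward declaration only
}

class Panel {
public:
    Panel() : slider(0) {}
    bool hasSlider() const { return slider != 0; }
private:
    gui::Slider* slider;       // pointer to incomplete type
};

// Later (e.g. in another header), the namespace is re-opened
// again to provide the full definition.
namespace gui {
    class Slider {};
}
```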

Thomas L Holaday