One problem in large C++ projects can be build times. There is some class high up in your dependency tree that you need to work on, but you usually avoid doing so because every build takes a very long time. You don't necessarily want to change its public interface, but maybe you want to change its private members (add a cache variable, extract a private method, ...). The problem you are facing is that in C++, even private members are declared in the public header file, so changing them forces your build system to recompile everything that includes it.

What do you do in this situation?

I have sketched two solutions which I know of, but they both have their downsides, and maybe there is a better one I have not yet thought of.
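For illustration (a made-up example): even a change that is invisible to clients still edits the header, so every translation unit that includes it gets rebuilt.

    // expensive.h - included by hundreds of source files
    class Expensive {
    public:
        int compute(int x) const;
    private:
        // Adding this private cache member changes the header,
        // so everything that includes expensive.h is recompiled,
        // even though the public interface is untouched.
        mutable int cache;
    };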

+8  A: 

The pimpl pattern:

In your header file, only declare the public methods and a private pointer (the pimpl pointer, or delegate) to a forward-declared implementation class.

In your source file, define the implementation class, forward every public method of your public class to the delegate, and construct an instance of the pimpl class in every constructor of your public class.
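A minimal sketch (the names Widget and Impl are made up; this assumes C++11's std::unique_ptr, but a raw pointer plus delete in the destructor works the same way):

    // widget.h - clients never see the private members
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();                   // defined in the .cpp, where Impl is complete
        void doWork();
    private:
        class Impl;                  // forward declaration only
        std::unique_ptr<Impl> pimpl;
    };

    // widget.cpp - private details live here and can change freely
    #include "widget.h"

    class Widget::Impl {
    public:
        void doWork() { ++cache; /* real work goes here */ }
        int cache = 0;               // e.g. the cache variable from the question
    };

    Widget::Widget() : pimpl(new Impl) {}
    Widget::~Widget() = default;     // Impl is complete at this point
    void Widget::doWork() { pimpl->doWork(); }

Since only widget.cpp sees Widget::Impl, adding or removing its members never touches widget.h.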

Plus:

  • Allows you to change the implementation of your class without having to recompile everything.
  • Inheritance works well, only the syntax becomes a little different.

Minus:

  • Lots and lots of stupid method bodies to write to do the delegation.
  • Kind of awkward to debug since you have tons of delegates to step through.
  • An extra pointer in every object, which might be an issue if you have lots of small objects.
Tobias
+2  A: 

Using inheritance:

In your header, declare the public methods as pure virtual methods, plus a factory function.

In your source file, derive an implementation class from the interface and implement it. In the implementation of the factory, return an instance of the implementation class.
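A minimal sketch (Shape, Circle, and makeShape are made-up names; assuming C++11 here, though a factory returning a raw pointer works the same way on older compilers):

    // shape.h - clients see only the abstract interface and the factory
    #include <memory>

    class Shape {
    public:
        virtual ~Shape() {}
        virtual double area() const = 0;
    };

    std::unique_ptr<Shape> makeShape();

    // shape.cpp - the concrete class is invisible to client code
    #include "shape.h"

    namespace {
    class Circle : public Shape {
    public:
        double area() const override { return 3.14159 * r * r; }
    private:
        double r = 1.0;              // private members can change freely
    };
    }

    std::unique_ptr<Shape> makeShape() {
        return std::unique_ptr<Shape>(new Circle);
    }

Clients depend only on shape.h, so changes to Circle never trigger recompilation of client code.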

Plus:

  • Allows you to change the implementation of your class without having to recompile everything.
  • Easy and foolproof to implement.

Minus:

  • Really awkward to define a (public) class derived from the public base class when it should inherit some of the method implementations from the (private) implementation of that base.
Tobias
Another plus is that you can easily mock the class for unit testing if you have an interface.
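For example (a made-up sketch, reusing the hypothetical Shape interface from above):

    // mock_shape_test.cpp - a hand-written mock standing in for the real thing
    #include "shape.h"

    class MockShape : public Shape {
    public:
        double area() const override { return 42.0; }  // canned value for the test
    };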
1800 INFORMATION
A: 

You can use a forward declaration for a class A that is referred to only by pointer in another class B. You can then include class A's header file in class B's implementation file rather than in its header file. That way, changes you make to class A will not affect source files that only include class B's header file. Any class that wants to access class A's members will have to include class A's header file.
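A minimal sketch (the member and method names are made up):

    // b.h - no #include "a.h" needed here
    class A;                 // forward declaration suffices for a pointer member

    class B {
    public:
        void useA();
    private:
        A* a;
    };

    // b.cpp - the full definition of A is needed only here
    #include "b.h"
    #include "a.h"           // changes to a.h now only force recompiling b.cpp

    void B::useA() {
        // a can be dereferenced here because A is a complete type
    }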

Remy Lebeau - TeamB
This is correct, and a best practice in any case. Sometimes it doesn't seem enough, though (when you have lots of clients who need to use the class).
Tobias
+4  A: 

John Lakos' Large Scale C++ Software Design is an excellent book that addresses the challenges involved in building large C++ projects. The problems and solutions are all grounded in reality, and certainly the above problem is discussed at length. Highly recommended.

Greg Hewgill
Agreed--a classic, if a bit of a slow read. BTW, I heard that he was going to release a second edition--anybody heard an update on this? +1
Drew Hall
Drew - do you remember where you heard this (about a 2nd edition)? Also, I'd seen that Lakos was going to have a book in 2006 about Scalable C++, but it seems to have been stillborn (or at least back-burnered.) It's a shame because the book (LSCSD) really addresses a need, I was hoping after 13 years an update would be released.
Dan
Dan--I wish I could remember--I thought it was on the AW website but I don't see it anymore. Maybe even Amazon? Maybe just wishful thinking on my part...
Drew Hall
A: 

Refactoring to use the pimpl/handle-body idiom or pure virtual interfaces to hide implementation details seems to be the popular answer. One should consider compile time and developer productivity when designing large systems. But what if you're working on an existing large C++ system with no unit test coverage? Refactoring is usually out of the question there.

What I usually do when I don't want the compiler to compile the world after I touched some common header files is to have a makefile/script to compile only the files I know need recompiling. For example, if I'm adding a non-virtual private function to a class, only the class's cpp file needs to be recompiled even when its header file is included by a hundred other files. Before I leave for the day, I kick off a clean build to rebuild the world.

Shing Yip
Do you have a generic target in your makefile or do you manually compile and link?
Tobias
A: 

None.

I see the point in using one of these patterns, but I think the following arguments mitigate that point in many scenarios:

  1. Clarity comes first. If compromising clarity for runtime speed must be considered twice, what about compromising clarity for compile time speed?
  2. Private members shouldn't change so often.
  3. Usually, it doesn't take that long to rebuild everything.
  4. Faster tools will come in the future, so the compile speed problem will be automatically mitigated. Your code will not become automatically clearer.
  5. You should rebuild frequently anyway.
  6. Have you tried Incredibuild?

Of course in the end this is an economic decision. If point 3 does not hold in your project and for some reason point 6 cannot apply, then go ahead: you will win more from using these patterns than you lose.

Daniel Daranas
Your point is certainly valid, this kind of thing should not be done prematurely as you should not optimize code before profiling. I disagree with point 2 - software is not static, but constantly evolving. If anything, you should try to keep your public interface constant, but even that changes quickly when you refactor stuff. I also disagree with point 4. Faster tools will come, but in the meantime, developers will have added tons of LOC. Point 5 is absolutely correct, and is part of the reason why you would want to use such a pattern.
Tobias