I've worked with a number of C projects during my programming career and the header file structures usually fall into one of these two patterns:

  1. One header file containing all function prototypes
  2. One .h file for each .c file, containing prototypes for the functions defined in that module only.
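To make pattern 2 concrete, here is a minimal sketch under hypothetical names: each foo.c gets a matching foo.h that declares only what foo.c defines.

    /* foo.h - declares only what foo.c defines */
    #ifndef FOO_H
    #define FOO_H

    int foo_twice(int x);

    #endif

    /* foo.c - the module itself; it includes its own header so
       the compiler checks the prototypes against the definitions */
    #include "foo.h"

    int foo_twice(int x)
    {
        return 2 * x;
    }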

The advantages of option 2 are obvious to me - it makes it cheaper to share a module between multiple projects and makes the dependencies between modules easier to see.

But what are the advantages of option 1? It must have some advantages otherwise it would not be so popular.


Related to question 181921.

This question would apply to C++ as well as C, but I have never seen #1 in a C++ project.

Placement of #defines, structs etc. also varies but for this question I would like to focus on function prototypes.

+2  A: 

Option 1 keeps all the declarations in one place, so you have to include/search just one file instead of many. This advantage is more obvious if your system is shipped as a library to a third party - they don't care much about your library structure, they just want to be able to use it.

sharptooth
Even in that case, I think you would be better off using separate includes and creating one header that includes all the others.
Dolphin
If the project is of any size, you should have more, rather than less, headers. One size does not sensibly fit all.
Jonathan Leffler
+7  A: 

I think the prime motivation for #1 is ... laziness. People either find it too hard to manage the dependencies that splitting things into separate files makes more obvious, or think it somehow "overkill" to have separate files for everything.

It can also, of course, often be a case of "historical reasons", where the program or project grew from something small, and no-one took the time to refactor the header files.

unwind
Java can be guilty of this too, e.g., "import java.io.*;"
JustJeff
+1  A: 

When you have a very large project with hundreds or thousands of small header files, dependency checking and compilation can slow down significantly, because lots of small files must be opened and read. This issue can often be solved by using precompiled headers.

Miroslav Bajtoš
Only if you're stuck with a 486 at 25 MHz and a project of 1000 module files... not an issue in this day and age.
jpinto3912
Having lots of small header files actually can *improve* your compile time, during debug/fix cycles, if you've got a proper Makefile. One big header file will lead to recompilation of everything when the header file changes.
Dan Moulding
A: 

In C++ you would definitely want one header file per class, and use precompiled headers as mentioned above.

One header file for an entire project is unworkable unless the project is extremely small - like a school assignment.

Maggie
+3  A: 

There is also, I believe, a 3rd option: each .c has its own .h, but there is also one .h that includes all the other .h files. This brings the best of both worlds at the expense of keeping that extra .h up to date, though that could be done automatically.

With this option, internally you use the individual .h files, but a 3rd party can just include the all-encompassing .h file.
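A minimal sketch of that layout (library and file names are hypothetical): the all-encompassing header does nothing but include the per-module headers.

    /* mylib.h - umbrella header, for 3rd-party convenience only;
       the project's own .c files keep including the individual
       per-module headers instead */
    #ifndef MYLIB_H
    #define MYLIB_H

    #include "mylib/parser.h"
    #include "mylib/lexer.h"
    #include "mylib/codegen.h"

    #endif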

freespace
That does *not* bring the best of both worlds. It'll bring unnecessary compilation dependencies when all files include all headers.
Johann Gerell
Perhaps you did not see the edit, but the all-encompassing .h file should not be used internally - it is only for the convenience of 3rd parties. Apple does something like this. You include, say, Foundation/Foundation.h and it will pull in all the other headers under Foundation/. You can also choose not to do that, and include a specific header under Foundation/ instead. I would not expect the all-encompassing .h to influence compilation at all, since it should only be used by a 3rd party. That is to say, it is provided by the project, but not used by the project.
freespace
+2  A: 
  • 1 is just unnecessary. I can't see a good reason to do it, and plenty of reasons to avoid it.

Three rules for following #2 without problems:

  • start EVERY header file with a

    #ifndef _HEADER_Namefile
    #define _HEADER_Namefile_
    

and end the file with

    #endif

That will allow you to include the same header file multiple times in the same module (which may happen inadvertently) without causing any fuss.

  • you can't have definitions in your header files... and that's something everybody thinks he/she knows about function prototypes, but almost always ignores for global variables. If you want a global variable, which by definition should be visible outside its defining C module, use the extern keyword:

    extern unsigned long G_BEER_COUNTER;
    

which instructs the compiler that the G_BEER_COUNTER symbol is actually an unsigned long (so it works like a declaration) whose proper definition/initialization lives in some other module. (This also allows the linker to keep its resolved/unresolved symbol table.) The actual definition (the same statement without extern) goes in the module's .c file; see the sketch after this list.

  • only out of proven, absolute necessity should you include other headers within a header file. include statements should only appear in .c files (the modules). That allows you to better understand the dependencies, and to find/resolve issues.
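As a sketch of the declaration/definition split from the second rule (the file names are hypothetical; the variable comes from the answer):

    /* counters.h - the declaration; every module that needs the
       variable includes this header */
    extern unsigned long G_BEER_COUNTER;

    /* counters.c - the one and only definition/initialization */
    unsigned long G_BEER_COUNTER = 0;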
jpinto3912
@jpinto3912: The normal convention is to put the #define immediately after the #ifndef. What you've got will redefine _HEADER_Namefile_ every time the file is included, which is legal, but best avoided IMHO.
Steve Melnikoff
@Steve, yeah, point well taken... I fought the editing so hard (and it still looks like crap) that I missed that. Thanks!
jpinto3912
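For reference, a guard written the way the comment suggests, with the same macro name in both the #ifndef and the #define (the name itself is arbitrary; the leading underscore is dropped because identifiers starting with an underscore and a capital letter are reserved in C):

    #ifndef HEADER_NAMEFILE_H
    #define HEADER_NAMEFILE_H

    /* ... declarations ... */

    #endif /* HEADER_NAMEFILE_H */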
+4  A: 

Another reason for using a separate .h for every .c is compile time. If there is just one .h (or there are several, but every .c file includes them all), then every time you change a .h file you have to recompile every .c file. In a large project this wastes a significant amount of time and can also break your workflow.

Artur Soler
This is probably the single biggest reason to *not* do #1. You might as well not even bother with a Makefile and just have a shell script that always recompiles *everything*.
Dan Moulding
A: 

That depends on how much functionality is in one header/source file. If you need to include 10 files just to, say, sort something, it's bad.

For example, if I want to use STL vectors I just include <vector> and I don't care what internals are necessary for vector to be used. GCC's <vector> includes 8 other headers -- allocator, algobase, construct, uninitialized, vector and bvector among them. It would be painful to include all 8 of those just to use vector, would you agree?

But a library's internal headers should be as sparse as possible. Compilers are happier if they don't include unnecessary stuff.

HMage
A: 

I would recommend a hybrid approach: making a separate header for each component of the program which could conceivably be used independently, then making a project header that includes all of them. That way, each source file only needs to include one header (no need to go updating all your source files if you refactor components), but you keep a logical organization to your declarations and make it easy to reuse your code.

R..
I should clarify: only source files which deal with integrating the project as a whole should use the project header that includes all the others. Components which are independent or only depend on a couple other components should just include the headers for what they use. Otherwise you don't really get much reusability benefit.
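A sketch of what that clarification means in practice (file names hypothetical):

    /* parser.c - an independent component includes only the
       headers it actually uses */
    #include "lexer.h"
    #include "parser.h"

    /* main.c - glue code that ties the whole project together
       may use the all-inclusive project header */
    #include "project.h"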
R..