views: 1363
answers: 6

A few years ago, I read the Recursive Make Considered Harmful paper and implemented the idea in my own build process. Recently, I read another article with ideas about how to implement non-recursive make. So I have a few data points that non-recursive make works for at least a few projects.

But I'm curious about the experiences of others. Have you tried non-recursive make? Did it make things better or worse? Was it worth the time?

+2  A: 

I know of at least one large-scale project (ROOT) which advertises [powerpoint link] its use of the mechanism described in Recursive Make Considered Harmful. The framework exceeds a million lines of code and compiles quite quickly.


And, of course, all the largish projects I work with that do use recursive make are painfully slow to compile. ::sigh::

dmckee
+6  A: 

At my previous job I made a half-hearted attempt at making the build system (based on GNU make) completely non-recursive, but I ran into a number of problems:

  • The artifacts (i.e. libraries and executables built) had their sources spread out over a number of directories, relying on vpath to find them
  • Several source files with the same name existed in different directories
  • Several source files were shared between artifacts, often compiled with different compiler flags
  • Different artifacts often had different compiler flags, optimization settings, etc.

One feature of GNU make which simplifies non-recursive use is target-specific variable values:

foo: FOO=banana
bar: FOO=orange

This means that when building target "foo", $(FOO) will expand to "banana", but when building target "bar", $(FOO) will expand to "orange".
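For example, a recipe can then use the variable directly (a minimal sketch; the echo recipe is invented just to show the expansion, and recipe lines must be indented with a tab):

# foo and bar share a recipe, but each target sees its own FOO.
foo bar:
	@echo "building $@ with FOO=$(FOO)"

Running "make foo" prints "building foo with FOO=banana", while "make bar" prints "building bar with FOO=orange".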

One limitation is that it's not possible to have target-specific VPATH definitions: there is no way to define VPATH individually for each target. In our case this was necessary in order to find the correct source files.

The main feature GNU make is missing for supporting non-recursive builds is namespaces. Target-specific variables can, in a limited manner, be used to simulate namespaces, but what you really need is the ability to include a Makefile in a sub-directory using a local scope.

EDIT: Another very useful (and often under-used) feature of GNU make in this context is the macro-expansion facilities (see the eval function, for example). This is very useful when you have several targets which have similar rules/goals, but differ in ways which cannot be expressed using regular GNU make syntax.
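Here is a minimal sketch of that technique (the program and file names are invented for the example):

# A rule template: $(1) is the program name, $(2) its sources.
# The "$$" escapes defer expansion until the rules are eval'ed.
define PROGRAM_template
$(1): $(2:.c=.o)
	$$(CC) $$(LDFLAGS) -o $$@ $$^
endef

# Instantiate the template once per program.
$(eval $(call PROGRAM_template,client,client.c net.c))
$(eval $(call PROGRAM_template,server,server.c net.c))

Each eval turns one call result into real rules, so both programs get proper link rules without duplicating any Make code.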

JesperE
+8  A: 

Let me stress one argument of Miller's paper: when you start to manually resolve dependency relationships between different modules and struggle to ensure the build order, you are effectively reimplementing the logic the build system was made to solve in the first place. Constructing reliable recursive make build systems is very hard. Real-life projects have many interdependent parts whose build order is non-trivial to figure out, so this task should be left to the build system. However, it can only solve that problem if it has global knowledge of the system.

Furthermore, recursive make build systems are prone to fall apart when building concurrently on multiple processors/cores. These build systems may seem to work reliably on a single processor, but many missing dependencies go undetected until you start to build your project in parallel. I've worked with a recursive make build system which worked on up to four processors but suddenly broke on a machine with two quad-cores. I then faced another problem: these concurrency issues are next to impossible to debug, and I ended up drawing a flow chart of the whole system to figure out what went wrong.

To come back to your question: I find it hard to think of good reasons to use recursive make. The runtime performance of non-recursive GNU Make build systems is hard to beat, whereas many recursive make systems have serious performance problems (weak parallel build support is again part of the problem). There is a paper in which I evaluated a specific (recursive) Make build system and compared it to a SCons port. The performance results are not representative because the build system was very non-standard, but in this particular case the SCons port was actually faster.

Bottom line: if you really want to use Make to control your software builds, go for non-recursive Make, because it makes your life far easier in the long run. Personally, I would rather use SCons for usability reasons (or Rake - basically any build system that uses a modern scripting language and has implicit dependency support).

Pankrat
+16  A: 

We use a non-recursive GNU Make system in the company I work for. It's based on Miller's paper and especially the "Implementing non-recursive make" link you gave. We've managed to refine Bergen's code into a system where there's no boilerplate at all in subdirectory makefiles. By and large, it works fine, and it's much better than our previous system (a recursive thing done with GNU Automake).

We support all the "major" operating systems out there (commercially): AIX, HP-UX, Linux, OS X, Solaris, Windows, even the AS/400 mainframe. We compile the same code for all of these systems, with the platform dependent parts isolated into libraries.

There's more than two million lines of C code in our tree, in about 2000 subdirectories and 20000 files. We seriously considered using SCons, but just couldn't make it work fast enough. On the slower systems, Python would spend a couple of dozen seconds just parsing the SCons files, where GNU Make did the same thing in about one second. This was about three years ago, so things may have changed since then. Note that we usually keep the source code on an NFS/CIFS share and build the same code on multiple platforms, which makes it even slower for the build tool to scan the source tree for changes.

Our non-recursive GNU Make system is not without problems. Here are some of the biggest hurdles you can expect to run into:

  • Making it portable, especially to Windows, is a lot of work.
  • While GNU Make is almost a usable functional programming language, it's not suitable for programming in the large. In particular, there are no namespaces, modules, or anything like that to help you isolate pieces from each other. This can cause problems, although not as much as you might think.

The major wins over our old recursive makefile system are:

  • It's fast. It takes about two seconds to check the entire tree (2k directories, 20k files) and either decide it's up to date or start compiling. The old recursive thing would take more than a minute to do nothing.
  • It handles dependencies correctly. Our old system relied on the order in which subdirectories were built, etc. Just like you'd expect from reading Miller's paper, treating the whole tree as a single entity is really the right way to tackle this problem.
  • It's portable to all of our supported systems, after all the hard work we've poured into it. It's pretty cool.
  • The abstraction system allows us to write very concise makefiles. A typical subdirectory makefile that defines just a library is two lines: one gives the name of the library, and the other lists the libraries this one depends on.

Regarding the last item in the above list: we ended up implementing a sort of macro-expansion facility within the build system. Subdirectory makefiles list programs, subdirectories, libraries, and other common things in variables like PROGRAMS, SUBDIRS, and LIBS. Each of these is then expanded into "real" GNU Make rules. This allows us to avoid most of the namespace problems; for example, in our system it's fine to have multiple source files with the same name.
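The details of our expansion machinery are too long to post, but a rough sketch of the general idea (all names here are invented for illustration) looks like this:

# Each subdirectory's module.mk just sets plain variables, e.g.:
#   SOURCES := hash.c list.c
# The framework includes it and immediately rewrites the result
# into a path-prefixed variable, so two directories can both
# have a hash.c without clashing.
define INCLUDE_SUBDIR
include $(1)/module.mk
$(1)_OBJECTS := $$(addprefix $(1)/,$$(SOURCES:.c=.o))
endef

$(foreach d,$(SUBDIRS),$(eval $(call INCLUDE_SUBDIR,$(d))))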

In any case, this ended up being a lot of work. If you can get SCons or something similar working for your code, I'd advise you to look at that first.

Ville Laurikari
I don't think any build-system out there can really match a properly written non-recursive GNU make system.
JesperE
+3  A: 

I agree with the statements in the referenced article, but it took me a long time to find a good template which does all this and is still easy to use.

Currently I'm working on a small research project where I'm experimenting with continuous integration: automatically unit-testing on a PC, and then running a system test on an (embedded) target. This is non-trivial in make, and I've searched for a good solution. Having found that make is still a good choice for portable multi-platform builds, I finally found a good starting point in http://code.google.com/p/nonrec-make

This was a true relief. Now my makefiles are

  • very simple to modify (even with limited make knowledge)
  • fast to compile
  • completely checking (.h) dependencies with no effort (the usual mechanism behind this is sketched below)

I will certainly use it for the next (big) project as well (assuming C/C++).
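For the curious, automatic header dependency tracking with GCC and GNU Make usually works roughly like this (a generic sketch, not nonrec-make's actual code; OBJECTS is assumed to hold your object files):

# -MMD writes a .d makefile fragment next to each object; -MP adds
# dummy targets for headers so deleting one doesn't break the build.
%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c -o $@ $<

# Pull in whatever dependency fragments have been generated so far.
-include $(OBJECTS:.o=.d)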

Adriaan
+4  A: 

After reading the RMCH paper, I set out with the goal of writing a proper non-recursive Makefile for a small project I was working on at the time. After I finished, I realized that it should be possible to create a generic Makefile "framework" which can be used to very simply and concisely tell make what final targets you would like to build, what kind of targets they are (e.g. libraries or executables) and what source files should be compiled to make them.

After a few iterations I eventually created just that: a single boilerplate Makefile of about 150 lines of GNU Make syntax that never needs any modification -- it just works for any kind of project I care to use it on, and is flexible enough to build multiple targets of varying types with enough granularity to specify exact compile flags for each source file (if I want) and precise linker flags for each executable. For each project, all I need to do is supply it with small, separate Makefiles that contain bits similar to this:

TARGET := foo

TGT_LDLIBS := -lbar

SOURCES := foo.c baz.cpp

SRC_CFLAGS   := -std=c99
SRC_CXXFLAGS := -fstrict-aliasing
SRC_INCDIRS  := inc /usr/local/include/bar

A project Makefile such as the above would do exactly what you'd expect: build an executable named "foo", compiling foo.c (with CFLAGS=-std=c99) and baz.cpp (with CXXFLAGS=-fstrict-aliasing) and adding "./inc" and "/usr/local/include/bar" to the #include search path, with final linking including the "libbar" library. It would also notice that there is a C++ source file and know to use the C++ linker instead of the C linker. The framework allows me to specify a lot more than what is shown here in this simple example.

The boilerplate Makefile does all the rule building and automatic dependency generation required to build the specified targets. All build-generated files are placed in a separate output directory hierarchy, so they're not intermingled with source files (and this is done without use of VPATH so there's no problem with having multiple source files that have the same name).
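A rough sketch of the kind of pattern rule that accomplishes this (the variable names here are mine for illustration, not necessarily what the boilerplate uses):

# Mirror the source tree under a separate build directory; each
# object's path encodes its source directory, so same-named source
# files in different directories never collide.
BUILD_DIR := build

$(BUILD_DIR)/%.o: %.c
	@mkdir -p $(dir $@)
	$(CC) $(CFLAGS) -c -o $@ $<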

I've now (re)used this same Makefile on at least two dozen different projects that I've worked on. Some of the things I like best about this system (aside from how easy it is to create a proper Makefile for any new project) are:

  • It's fast. It can tell virtually instantly whether anything is out of date.
  • 100% reliable dependencies. There is zero chance that parallel builds will mysteriously break, and it always builds exactly the minimum required to bring everything back up-to-date.
  • I will never need to rewrite a complete makefile again :D

Finally I'd just mention that, with the problems inherent in recursive make, I don't think it would have been possible for me to pull this off. I'd probably have been doomed to rewriting flawed makefiles over and over again, trying in vain to create one that actually worked properly.

Dan Moulding
And will you show us your 150-line base makefile?
zxcat
@zxcat: See http://github.com/dmoulding/boilermake. To use it, you must supply main.mk (and any other submakefiles that you want to provide). See the README and MANUAL files for more information. The test-app directory has a sample main.mk and other submakefiles that build a simple and silly little test program. Have fun.
Dan Moulding