I think of metaprogramming as "programs that write (or modify) other programs".
(Another answer said "factories that make factories", nice analogy).
People find all sorts of uses for this: customizing applications, generating boilerplate code,
optimizing a program for special circumstances, implementing DSLs, inserting code to handle
orthogonal design issues ("aspects") ...
What's remarkable is how many different mechanisms have been invented to do this piecemeal:
text-templates, macros, preprocessor conditionals, generics, C++-templates, aspects, reflection,...
And usually some of these mechanisms are built into some languages, and other mechanisms
into other languages, and most languages have no metaprogramming support at all.
This scattershot distribution of capabilities means that you might be able to do some
type of metaprogramming in one language, but not in another. That's aggravating :-}
An observation that I have been following to the hilt is that one can build generic
metaprogramming machinery that works with any language in the form of
program transformations.
A program transformation is a parameterized pattern: "if you see this syntax, replace it by that syntax".
One transformation by itself generally isn't impressive, but dozens or hundreds can make
spectacular changes to code. Because (sophisticated) program transformations can in
effect simulate a Turing machine, they can carry out arbitrary code changes, including
all those point-wise techniques you find scatter-shotted about.
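To make the idea concrete, here is a minimal sketch of a single transformation, written against Python's own ast module purely as an illustration (a tool like DMS expresses such rules in a grammar-driven rule language over the target language, not in Python). The rule is simply "if you see `e ** 2`, replace it by `e * e`":

```python
import ast

class SquareToMultiply(ast.NodeTransformer):
    """If you see `e ** 2`, replace it by `e * e`."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # rewrite subexpressions first
        if (isinstance(node.op, ast.Pow)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 2):
            # A real transformation tool would first check that the operand
            # has no side effects before duplicating it.
            return ast.BinOp(left=node.left, op=ast.Mult(), right=node.left)
        return node

tree = ast.parse("y = (a + b) ** 2")
tree = ast.fix_missing_locations(SquareToMultiply().visit(tree))
print(ast.unparse(tree))  # y = (a + b) * (a + b)
```

One rule like this is trivial; the leverage comes from applying large sets of such rules, systematically, across an entire code base.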
A tool that accepts language definitions and language-specific transformations, and generates
another tool to apply those transformations, is a meta-metaprogramming tool:
a program to write "programs that write programs".
The value is that you can apply such a tool to carry out a wide variety of changes
to arbitrary code. And you don't need the language design committee to realize that you
want a particular kind of metaprogramming support, and hurry up to provide it,
so you can get on with your job today.
An interesting lesson is that such machinery needs strong program analysis support (symbol
tables, control and data flow analysis, etc.)
to help it focus on where the problems are in the code, so that the metaprogramming
machinery can do something at that point (a very weak example of this is the
point-cut specification in aspects, which says "make changes at places that look like this").
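As a rough sketch of that idea (in the same illustrative Python style as above, not in DMS's rule language): a point-cut-like rule saying "at every call to a function whose name starts with db_, wrap the call in trace(...)". Here trace and the db_ naming convention are hypothetical; a real tool would rely on symbol tables and flow facts, not a textual name prefix, to decide which places "look like this".

```python
import ast

class TraceDbCalls(ast.NodeTransformer):
    """Wrap every call to a db_* function in a (hypothetical) trace()."""
    def visit_Call(self, node):
        self.generic_visit(node)  # rewrite nested calls first
        if isinstance(node.func, ast.Name) and node.func.id.startswith("db_"):
            # db_xxx(args)  ==>  trace(db_xxx(args))
            return ast.Call(func=ast.Name(id="trace", ctx=ast.Load()),
                            args=[node], keywords=[])
        return node

tree = ast.parse("rows = db_query('SELECT 1')")
tree = ast.fix_missing_locations(TraceDbCalls().visit(tree))
print(ast.unparse(tree))  # rows = trace(db_query('SELECT 1'))
```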
The OP asked for specific examples of where metaprogramming was applied.
We've used our "meta"-metaprogramming tool (DMS Software Reengineering Toolkit) to carry out the following activities on large code bases automatically:
- Language Migration
- Implementing Test Coverage and Profilers
- Implementing Clone Detection
- Massive architecture reengineering
- Code generation for factory control
- SOAization of embedded network controllers
- Architecture extraction for mainframe software
- Generation of vector SIMD instructions from array computations
across many languages, including Java, C#, C++, PHP, ...
The OP also asked, "Why was this better than the alternative?"
The answer has to do with scale, time, and accuracy.
For large applications, the sheer size of the code base means you don't have the resources
or the time to make such analyses or changes by hand.
For code generation or optimization tasks, you might be able to do
it by hand, but the tools can do it much faster and more accurately.
In essence, these tools do what human beings simply cannot.
It is worth noting that the tools have no creativity; you still
need humans to determine what to have them do, e.g., to decide
what the task is (see the list above for examples) and to determine
how to define the analyses/transformations to achieve the effect.
You still need meta-programmers. However, when a meta-programmer
arms such a tool with the right knowledge, the resulting code can
appear to have been built by an incredibly fast, creative, expert coder.