views:

72

answers:

2

I'm writing a compiler, and I'm looking for resources on optimization. I'm compiling to machine code, so anything at runtime is out of the question.

What I've been looking for lately is less code optimization and more semantic/high-level optimization. For example:

free(malloc(400)); // should be completely optimized away

Even if these functions were completely inlined, they could eventually call OS memory functions which can never be inlined. I'd love to be able to eliminate that statement completely without building special-case rules into the compiler (after all, malloc is just another function).

Another example:

string Parenthesize(string str) {
    StringBuilder b; // similar to C#'s class of the same name
    foreach(s : ["(", str, ")"])
        b.Append(s);
    return b.Render();
}

In this situation I'd love to be able to initialize b's capacity to str.Length + 2 (enough to exactly hold the result, without wasting memory).

To be completely honest, I have no idea where to begin in tackling this problem, so I was hoping for somewhere to get started. Has there been any work done in similar areas? Are there any compilers that have implemented anything like this in a general sense?

+1  A: 

The Broadway framework might be in the vein of what you're looking for. Papers on "source-to-source transformation" will probably also be enlightening.

Novelocrat
Thanks for the link. I love the point about catching library misuse at compile time.
zildjohn01
+2  A: 

To do an optimization across two or more operations, you have to understand the algebraic relationship between those operations. If you view operations in their problem domain, they often have such relationships.

Your free(malloc(400)) example is possible because free and malloc are inverses in the storage-allocation domain. Lots of operations have inverses; what is needed is teaching the compiler that they are inverses, and demonstrating that the result of one flows unconditionally into the other. You have to make sure your inverses really are inverses and there isn't a surprise somewhere: a/x*x looks like just the value a, but if x is zero you get a trap. If you don't care about the trap, it is an inverse; if you do care about the trap, the optimization is more complex, (if (x==0) then trap() else a), which is still a good optimization if you think divide is expensive.
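To make that concrete, here is a minimal sketch of the free/malloc case over a toy statement-level IR (all names are hypothetical, not any real compiler's API): once the compiler knows the two calls are an inverse pair, a pass only has to prove that the allocated value flows, otherwise unused, into the matching free, and then it can drop both statements. The flow check below is simplified to "the free immediately follows the malloc"; a real pass would walk the full def-use chain.

#include <iostream>
#include <string>
#include <vector>

enum class Op { Malloc, Free, Other };

struct Stmt {
    Op op;
    std::string temp;   // result of the malloc, or argument of the free
    std::string text;   // original source, kept only for printing
};

// Drop a malloc whose result is consumed, unused, by the very next free.
std::vector<Stmt> eliminateInversePairs(const std::vector<Stmt>& in) {
    std::vector<Stmt> out;
    for (size_t i = 0; i < in.size(); ++i) {
        bool deadPair = in[i].op == Op::Malloc
                     && i + 1 < in.size()
                     && in[i + 1].op == Op::Free
                     && in[i + 1].temp == in[i].temp;
        if (deadPair) { ++i; continue; }   // skip both halves of the inverse pair
        out.push_back(in[i]);
    }
    return out;
}

int main() {
    std::vector<Stmt> program = {
        {Op::Malloc, "t1", "t1 = malloc(400)"},
        {Op::Free,   "t1", "free(t1)"},
        {Op::Other,  "",   "puts(\"hello\")"},
    };
    for (const Stmt& s : eliminateInversePairs(program))
        std::cout << s.text << "\n";       // only the puts survives
}

Note that nothing here knows what malloc does internally; the pass only uses the declared inverse relationship, which is exactly why no special-case rule for the allocator is needed.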

Other "algebraic" relationships are possible. For instance, there are may idempotent operations: zeroing a variable (setting anything to the same value repeatedly), etc. There are operations where one operand acts like an identity element; X+0 ==> X for any 0. If X and 0 are matrices, this is still true and a big time savings.

Other optimizations can occur when you can reason abstractly about what the code is doing. "Abstract interpretation" is a set of techniques for reasoning about values by classifying results into various interesting bins (e.g., this integer is unknown, zero, negative, or positive). To do this you need to decide what bins are helpful, and then compute the abstract value at each point. This is useful when there are tests on categories (e.g., "if (x<0) { ...") and you know abstractly that x is less than zero; you can then optimize away the conditional.
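A minimal sketch of such an analysis (all names hypothetical), using a four-element sign domain: if the analysis proves a value is positive at some program point, a later test against zero can be folded and the dead branch removed.

#include <iostream>

enum class Sign { Negative, Zero, Positive, Unknown };

// Abstract addition: combine two sign facts into the best fact we can keep.
Sign absAdd(Sign a, Sign b) {
    if (a == Sign::Zero) return b;
    if (b == Sign::Zero) return a;
    if (a == b) return a;                    // pos+pos = pos, neg+neg = neg
    return Sign::Unknown;                    // pos+neg could be anything
}

// Abstract evaluation of the test "v < 0": 1 (always true), 0 (always false),
// or -1 (can't tell), using only the abstract value.
int absLessThanZero(Sign v) {
    switch (v) {
        case Sign::Negative: return 1;
        case Sign::Zero:
        case Sign::Positive: return 0;
        default:             return -1;
    }
}

int main() {
    // x is known positive (say, a length), k is zero: x + k is still positive,
    // so a guard "if (x + k < 0)" is statically false and its branch is dead.
    Sign guard = absAdd(Sign::Positive, Sign::Zero);
    std::cout << "branch taken? " << absLessThanZero(guard) << "\n";  // prints 0
}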

Another way is to define what a computation is doing symbolically, and simulate the computation to see the outcome. That is how you would compute the effective size of the required buffer: compute the buffer size symbolically before the loop starts, and simulate the effect of executing the loop for all iterations. For this you need to be able to construct symbolic formulas representing program properties, compose such formulas, and often simplify such formulas when they get unusably complex (this kind of fades into the abstract-interpretation scheme). You also want such symbolic computation to take into account the algebraic properties I described above. Tools that do this well are good at constructing formulas, and program transformation systems are often good foundations for this. One source-to-source program transformation system that can be used to do this is the DMS Software Reengineering Toolkit.
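For your Parenthesize example, a minimal sketch of that symbolic simulation might look like the following (hypothetical names, and only linear length formulas): each string's length is a formula "constant + sum of unknowns", the three Append calls are simulated before the loop ever runs, and the result is exactly the len(str) + 2 you would pre-allocate for the builder.

#include <iostream>
#include <map>
#include <string>

// Linear symbolic formula: constant + sum(coefficient * variable).
struct SymLen {
    long constant = 0;
    std::map<std::string, long> vars;
};

SymLen literal(long n)               { SymLen s; s.constant = n; return s; }
SymLen unknown(const std::string& v) { SymLen s; s.vars[v] = 1; return s; }

SymLen add(const SymLen& a, const SymLen& b) {
    SymLen r = a;
    r.constant += b.constant;
    for (const auto& [v, c] : b.vars) r.vars[v] += c;   // fold like terms
    return r;
}

void print(const SymLen& s) {
    for (const auto& [v, c] : s.vars)
        std::cout << (c == 1 ? "" : std::to_string(c) + "*") << "len(" << v << ") + ";
    std::cout << s.constant << "\n";
}

int main() {
    // Simulate: foreach piece in ["(", str, ")"]  b.Append(piece)
    SymLen pieces[] = { literal(1), unknown("str"), literal(1) };
    SymLen capacity = literal(0);
    for (const SymLen& p : pieces) capacity = add(capacity, p);
    print(capacity);   // len(str) + 2, the exact capacity for the builder
}

The interesting part is that the simulation only needs the loop body's effect on the length formula, not the strings themselves, which is why it can run entirely at compile time.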

What's hard is deciding which optimizations are worth doing, because you can end up keeping track of vast amounts of stuff that may never pay off. Computer cycles are getting cheaper, though, so it makes sense to track more properties of the code in the compiler.

Ira Baxter
That will definitely keep me busy for a while... thanks!
zildjohn01