Main question: Why aren't general or even specialized whole program optimizers part of our daily lives?
I started thinking about this after reading SuperCompilers, LLC's white paper, which describes their method of "supercompiling" or metacompiling a program's source to (generally) produce a faster version with the same functionality as the original program. Essentially, they step through the program's execution and recompile it back to the same target language. Natural optimizations fall out of this process; for instance, a general binary search function might be specialized to search an array of 100 items if the input program uses arrays of 100 items frequently.
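To illustrate what I mean, here is a rough hand-written sketch in Python (this is my own illustration of the idea, not output from SuperCompilers' tool, and the function names are made up for the example):

```python
def binary_search(arr, target):
    """General binary search over any sorted list."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def binary_search_100(arr, target):
    """The kind of code a specializer might emit when it knows len(arr) == 100:
    the bounds are constants, so the first midpoint (49) is folded in at
    compile time, and in principle the whole loop could be unrolled into a
    fixed tree of at most 7 comparisons."""
    if arr[49] == target:
        return 49
    elif arr[49] < target:
        lo, hi = 50, 99   # constants folded in
    else:
        lo, hi = 0, 48
    while lo <= hi:       # remaining levels left as a loop for brevity
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```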
Partial evaluation is perhaps a narrower kind of whole-program optimization, where the program's source is reduced/evaluated with respect to some fixed set of inputs, while the unknown inputs are left to be supplied at runtime. For instance, a general function x ^ y, given that y = 5, can be reduced to x ^ 5, or perhaps something like (x * x) * (x * x) * x.
(I apologize for my crude descriptions of these two techniques)
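To make the x ^ y example concrete, here is a toy partial evaluator in Python (again my own sketch; `specialize_power` is a made-up helper, not part of any real tool):

```python
def power(x, y):
    """General power function: y is only known at runtime."""
    result = 1
    for _ in range(y):
        result *= x
    return result

def specialize_power(y):
    """Toy partial evaluator: given y at 'compile time', build a specialized
    function with the loop unrolled via repeated squaring."""
    def build(y):
        if y == 0:
            return "1"
        if y == 1:
            return "x"
        half = build(y // 2)            # reuse the squared subexpression
        squared = f"({half} * {half})"
        return f"({squared} * x)" if y % 2 else squared
    expr = build(y)
    return eval(f"lambda x: {expr}"), expr

pow5, expr = specialize_power(5)
print(expr)       # (((x * x) * (x * x)) * x) -- matches the example above
print(pow5(3))    # 243, same as power(3, 5), but with no runtime loop
```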
Historically, whole-program optimizations such as the two above would have been too memory-intensive to perform, but now that our machines have gigabytes of memory (or can offload to something like the cloud), why haven't we seen lots of open-source partial evaluators and the like spring up? I have seen a few, but I would have thought this would be a regular part of our toolchain.
- Is it fear (programmers fearing that transforming their code will introduce bugs)?
- Is it just not worth it (e.g., for web apps the bottleneck is I/O, and this kind of optimization mainly saves CPU time)?
- Is this kind of software just that difficult to write?
- Or is my perception of this just wrong?