views:

1010

answers:

6

Is anyone using JIT tricks to improve the runtime performance of statically compiled languages such as C++? It seems like hotspot analysis and branch prediction based on observations made during runtime could improve the performance of any code, but maybe there's some fundamental strategic reason why making such observations and implementing changes during runtime are only possible in virtual machines. I distinctly recall overhearing C++ compiler writers mutter "you can do that for programs written in C++ too" while listening to dynamic language enthusiasts talk about collecting statistics and rearranging code, but my web searches for evidence to support this memory have come up dry.

+4  A: 

Visual Studio has an option for runtime profiling that can then be used to optimize code.

"Profile Guided Optimization"

Keith Nicholas
+4  A: 

Microsoft Visual Studio calls this "profile-guided optimization"; you can learn more about it at MSDN. Basically, you run the program a number of times with a profiler attached to record its hotspots and other performance characteristics, and then you feed the profiler's output into the compiler to get appropriate optimizations.

Crashworks
+12  A: 

Profile-guided optimization is different from runtime optimization. The optimization is still done offline, based on profiling information, but once the binary is shipped there is no ongoing optimization. So if the usage patterns exercised during the profiling phase don't accurately reflect real-world usage, the results will be imperfect, and the program won't adapt to different usage patterns.

You may be interested in looking for information on HP's Dynamo. That system focused on native-binary-to-native-binary translation, but since C++ is almost exclusively compiled to native code, I suppose that's exactly what you are looking for.

You may also want to take a look at LLVM, which is a compiler framework and intermediate representation that supports JIT compilation and runtime optimization, although I'm not sure whether there are actually any LLVM-based runtimes that can compile C++, execute it, and optimize it at runtime yet.

Whatever
Thanks for the link to Dynamo.
Thomas L Holaday
+2  A: 

I did that kind of optimization quite a lot in recent years, for a graphics rendering API that I implemented. Since the API defined several thousand different drawing modes, a general-purpose function was way too slow.

I ended up writing my own little JIT compiler for a domain-specific language (very close to asm, but with some high-level control structures and local variables thrown in).

The performance improvement I got was between a factor of 10 and 60 (depending on the complexity of the compiled code), so the extra work paid off big time.

On the PC I would not start writing my own JIT compiler; I would use either LIBJIT or LLVM for the JIT compilation. That wasn't possible in my case because I was working on a non-mainstream embedded processor that neither LIBJIT nor LLVM supports, so I had to invent my own.

Nils Pipenbrinck
++ Another way I've seen this done is crude but effective: allocate a block of stack, generate a specialized machine-language routine there on the fly, and call it. I'm a big fan of code generation.
Mike Dunlavey
+1  A: 

Reasonable question - but with a doubtful premise.

As in Nils' answer, sometimes "optimization" means "low-level optimization", which is a nice subject in its own right.

However, it is based on the concept of a "hot-spot", which has nowhere near the relevance it is commonly given.

Definition: a hot-spot is a small region of code where a process's program counter spends a large percentage of its time.

If there is a hot-spot, such as a tight inner loop occupying a lot of time, it is worth trying to optimize at the low level, if it is in code that you control (i.e. not in a third-party library).

Now suppose that inner loop contains a call to a function, any function. Now the program counter is not likely to be found there, because it is more likely to be in the function. So while the code may be wasteful, it is no longer a hot-spot.

There are many common ways to make software slow, of which hot-spots are one. However, in my experience, that is the only one of which most programmers are aware, and the only one to which low-level optimization applies.

See this.

Mike Dunlavey
Do you give any credence to the notion that (some) JIT techniques (broadly speaking) make Java, C#, etc. programs faster? If so, do you know of any efforts to apply those techniques to statically compiled languages, or, alternatively, a reason why statically compiled languages cannot benefit from these techniques?
Thomas L Holaday
@Thomas: Faster than what? Interpreted bytecode? Of course. Other statically compiled code? Marginally, maybe, if use can be made of run-time knowledge. Regardless, it's a case of looking for keys under the street lamp: the keys are higher performance, and the street lamp is low-level optimization.
Mike Dunlavey
+1  A: 

I believe LLVM attempts some of this: it aims to optimize across the whole lifetime of the program (compile time, link time, and run time).

Zifre