views: 383

answers: 7

I am not aware of any self-improving compiler, but then again I am not much of a compiler-guy.

Is there ANY self-improving compiler out there?

Please note that I am talking about a compiler that improves itself - not a compiler that improves the code it compiles.

Any pointers appreciated!

Side-note: in case you're wondering why I am asking have a look at this post. Even if I agree with most of the arguments I am not too sure about the following:

We have programs that can improve their code without human input now — they’re called compilers.

... hence my question.

+7  A: 

While it is true that compilers can improve code without human interference, the claim that "compilers are self-improving" is rather dubious. The "improvements" a compiler makes are based entirely on a set of rules written by humans (cyborgs, anyone?). So the answer to your question is: no.

On a side note, if there were anything like a self-improving compiler, we'd know... first the thing would improve the language, then its own code, and finally it would modify its code to become a virus and make all developers use it... and then we'd have one of those classic computers-versus-humans, last-hope-for-humanity scenarios... so... no.

Aviral Dasgupta
Ah -- the irony ... your post is even tagged "singularity"
Aviral Dasgupta
With due respect: bullshit. See my answer.
MaD70
"then it would become a virus to make all devs use it"... so Java, then?
jrockway
Ah! Thanks jrockway for the chuckle.
MaD70
I don't know about you, but *my* compilers attend night classes to learn underwater basket-weaving!
Donal Fellows
+3  A: 

25 years of programming and I have never heard of such a thing (unless you're talking about compilers that auto-download software updates!).

dicroce
A: 

I'm not sure if it qualifies, but the Java HotSpot compiler improves code at runtime using statistics.
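For concreteness, the runtime-statistics idea can be sketched as tiered execution: count invocations and, once a function is "hot", swap in an optimized version. The toy Python below is not how HotSpot actually works (HotSpot operates on JVM bytecode and emits machine code); the threshold and the memoizing stand-in for "compilation" are invented for illustration:

```python
# Toy sketch of HotSpot-style tiered execution: run the plain version,
# count calls, and past a threshold swap in a "compiled" (here: memoized)
# implementation. All names and numbers are invented.

HOT_THRESHOLD = 3

def hotspot(compile_fn):
    """Decorator: replace the wrapped function via compile_fn once hot."""
    def wrap(fn):
        state = {"calls": 0, "impl": fn}
        def dispatch(*args):
            state["calls"] += 1
            if state["calls"] == HOT_THRESHOLD:
                state["impl"] = compile_fn(fn)   # "JIT-compile" the hot path
            return state["impl"](*args)
        return dispatch
    return wrap

def memoize(fn):
    """Our stand-in for 'optimizing compilation': cache results."""
    cache = {}
    def fast(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return fast

@hotspot(compile_fn=memoize)
def square(x):
    return x * x

for _ in range(5):
    assert square(4) == 16   # same answer before and after "compilation"
```

Note that this improves how the *compiled program* runs, not the compiler itself — which is exactly the distinction the question is drawing.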

But at compile time? How will that compiler know what's deficient and what's not? What's the measure of goodness?

There are plenty of examples of people using genetic techniques to pit algorithms against each other to come up with "better" solutions. But these are usually well-understood problems that have a metric.

So what metrics could we apply at compile time? Minimum size of compiled code, cyclomatic complexity, or something else? Which of these is meaningful at runtime?
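One way to picture such a metric-driven contest is a minimal hill-climbing search: apply candidate rewrites and keep a variant only if it improves a concrete measure. The rewrite rules and the size metric below are invented for illustration:

```python
# Tiny "pit variants against a metric" sketch: expressions are nested
# tuples like ("+", "x", "0"); the fitness function is expression size.
# The rules (x+0 -> x, x*1 -> x) and the metric are toy examples.

def size(expr):
    """Metric: number of nodes, a crude proxy for code size."""
    return 1 if isinstance(expr, str) else 1 + sum(size(a) for a in expr[1:])

def rewrite(expr):
    """Apply one algebraic simplification where it matches."""
    if isinstance(expr, tuple):
        op, *args = expr
        args = [rewrite(a) for a in args]
        if op == "+" and "0" in args:          # x + 0 -> x
            return next(a for a in args if a != "0")
        if op == "*" and "1" in args:          # x * 1 -> x
            return next(a for a in args if a != "1")
        return (op, *args)
    return expr

expr = ("+", ("*", "x", "1"), "0")             # (x * 1) + 0
best = expr
for _ in range(5):                              # keep only improving rewrites
    cand = rewrite(best)
    if size(cand) < size(best):
        best = cand
assert best == "x"
```

A genetic approach would keep a whole population of variants and recombine them, but the fitness-function problem raised above is the same: you need a metric that is actually meaningful at runtime.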

duffymo
Firstly: that's not self-improving, is it? Secondly: if someone thinks that is an application of AI, it is not. It simply checks for performance bottlenecks and tries to improve them by compromising on other things.
Aviral Dasgupta
Thanks, I see your point but I am talking about a compiler that improves itself - not the code that it compiles.
JohnIdol
+2  A: 

A self-improving compiler would, by definition, have to have self-modifying code. If you look around, you can find examples of people doing this (self-modifying code). However, it's very uncommon to see - especially on projects as large and complex as a compiler. And it's uncommon for the very good reason that it's ridiculously hard (i.e., close to impossible) to guarantee correct functionality. A lot of coders who think they're smart (especially assembly coders) play around with this at one point or another. The ones who actually are smart mostly move out of this phase. ;)
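For the curious, here is about the smallest self-modifying sketch one can write in a high-level language: the program edits its own source text and recompiles part of itself at runtime. It is illustrative only, and already hints at why reasoning about such code is hard:

```python
# Minimal self-modifying sketch: the program holds the source of one of
# its own functions, rewrites that source, and recompiles it at runtime.
# Purely illustrative; nothing here proves the rewrite is an improvement.

source = """
def step(x):
    return x + 1
"""
namespace = {}
exec(source, namespace)
step = namespace["step"]
assert step(10) == 11

# The program now "improves" itself: it patches its own source text
# (a trivial change here) and recompiles the function.
improved_source = source.replace("x + 1", "x + 2")
exec(improved_source, namespace)
step = namespace["step"]
assert step(10) == 12
```

Scaling this from a one-line function to a whole compiler, while guaranteeing each rewrite is still correct, is precisely the "ridiculously hard" part.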

Russell Newquist
+3  A: 

Not yet practically implemented, to my knowledge, but yes, the theory is there:

  • Goedel machines: self-referential universal problem solvers making provably optimal self-improvements.
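To make the idea concrete, here is a deliberately crude caricature of the Goedel-machine loop in Python. The real machine only accepts a self-rewrite accompanied by a *proof* of improvement; this sketch cheats by substituting an empirical equivalence check and a benchmark, which forfeits exactly the optimality guarantee that makes Goedel machines interesting:

```python
# Caricature of the Goedel-machine loop: alternate between doing the task
# and searching for a self-rewrite. The candidate list and the benchmark
# stand in for the machine's systematic proof search.
import timeit

def current_solver(n):                 # initial, naive task program
    return sum(range(n + 1))

candidates = [
    lambda n: sum(range(n + 1)),       # a rewrite that is no better
    lambda n: n * (n + 1) // 2,        # closed form: genuinely faster
]

def benchmark(fn):
    """Empirical stand-in for a proof that a rewrite is faster."""
    return timeit.timeit(lambda: fn(10_000), number=50)

solver = current_solver
for cand in candidates:                # "search" phase
    agrees = all(cand(n) == solver(n) for n in range(100))
    if agrees and benchmark(cand) < benchmark(solver):
        solver = cand                  # accept the self-rewrite

assert solver(10) == 55
```

The interesting (and hard) part the toy omits is that a real Goedel machine proves, within its own axiomatized model of itself and its hardware, that the rewrite is an improvement before adopting it.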
MaD70
so the answer is: NO - there's no such thing out there. Time travel is possible in theory, but I don't see anyone from the future :) - All jokes aside, thanks for the link. Any other examples you can think of, or are Goedel machines the only theorized example of such a thing?
JohnIdol
They are state-of-the-art to my knowledge; previous work was done by Marcus Hutter (http://www.hutter1.net/), a student of Schmidhuber. But I need to be clear: Goedel machines are not "theorized" in the sense of vague statements that they are possible; no, in Schmidhuber's work there is a clear plan to build (program) one. What is left is engineering, which is not necessarily easy: in particular, it requires an axiomatic model of the performance of the hardware it runs on. With current crappy CPUs a deterministic performance model is unrealistic, so I was thinking to let the program...
MaD70
... monitor itself (via hardware performance counters) and incrementally learn and refine an axiomatic probabilistic performance model. But this is pure speculation.
MaD70
From the linked page: "The searcher systematically and efficiently tests computable proof techniques (programs whose outputs are proofs) until it finds a provably useful, computable self-rewrite. We show that such a self-rewrite is globally optimal - no local maxima! - since the code first had to prove that it is not useful to continue the proof search for alternative self-rewrites." How long does that typically take to run? It sounds a lot like a race against the heat death of the universe, especially since existing compiler theory is pretty advanced already.
Adam Luchjenbroers
No: time is equally divided between doing the actual task and searching for better solutions. Imagine a robot executing a plan: if there is a stringent time limit, at worst it will complete the task executing a less efficient plan. So, if there is a time limit, the actual task will be completed anyway (provided the plan is both computable and doable within that time limit, of course). Also, if one has a good starting point, there is nothing to prevent starting from there - possibly a rewrite will discover a better version, not (yet) computable by traditional means.
MaD70
@MaD70: Didn't find any reference to Goedel machines on Hutter's page (I may not have known exactly what to look for), but it looks impossible to me. It isn't possible to algorithmically find the simplest program equivalent to program X. If it were, we could, say, write a program that would test Goldbach's conjecture and either print a counterexample or never stop, and reduce it to either a simple infinite loop or a print statement. Any sort of exhaustive search would race against proton decay and lose.
David Thornley
@David Thornley: the link points directly to the Goedel machine home page on Schmidhuber's site: http://www.idsia.ch/~juergen/goedelmachine.html Read a paper on *that* page: he does not write that a Goedel machine will find "the simplest program equivalent to program X". A Goedel machine rewrites itself incrementally, and it only assures, by proof, that each rewrite is better than the preceding version. Eventually it may find by chance, in the time allotted, a globally optimal rewrite, but this is not guaranteed, of course, and you can never be sure that the current rewrite is optimal.
MaD70
@MaD70: Thanks, found a preprint there. It appears to me that the machine will limit itself to provably correct improvements, which has its own issues. Note that a deterministic CPU performance model is almost certainly slower than a real model, which means that it's got a performance deficit to begin with. It doesn't look like it'll be practical to me, although we're likely to learn stuff by trying it.
David Thornley
@David Thornley: you are welcome. On an *axiomatic* deterministic CPU performance model: you need to understand it NOT as "emulation" but as instruction-cycle counting, as in the past. So, given a rewrite, you can calculate its performance by simply adding up the cycle counts of each instruction in the rewritten program. But see the next comment.
MaD70
@youngsters: in the past, CPU datasheets declared, for each instruction, how many cycles were needed to execute it (1 cycle = 1 tick of the clock). CPU performance was much more predictable back then. Of course, nowadays we have a completely different picture. For that reason, in the second comment on this answer I alluded to an axiomatic **probabilistic** performance model, automatically constructed (machine learning again).
MaD70
+4  A: 

MilepostGCC is a machine-learning compiler, which improves itself over time in the sense that it is able to change itself in order to become "better". A simpler iterative-compilation approach can improve pretty much any compiler.
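Iterative compilation itself is easy to sketch: enumerate optimization configurations, measure each, keep the best. The flags below are real GCC flags, but the cost model is simulated so the example is self-contained; in practice you would invoke the compiler with each flag set and time the produced binary:

```python
# Sketch of iterative compilation: exhaustively try flag combinations
# against a cost model and keep the cheapest. The numbers and the flag
# interaction are invented purely for illustration.
import itertools

FLAGS = ["-funroll-loops", "-finline-functions", "-ftree-vectorize"]

def simulated_runtime(flags):
    """Stand-in cost model: each flag helps, but one pair interacts badly."""
    t = 100.0
    if "-funroll-loops" in flags:      t -= 12
    if "-finline-functions" in flags:  t -= 8
    if "-ftree-vectorize" in flags:    t -= 15
    if "-funroll-loops" in flags and "-ftree-vectorize" in flags:
        t += 25                         # pretend unrolling defeats vectorization
    return t

best = min(
    (tuple(c) for r in range(len(FLAGS) + 1)
     for c in itertools.combinations(FLAGS, r)),
    key=simulated_runtime,
)
# Brute force over 2^3 configurations; Milepost's contribution is
# replacing this search with a learned model that predicts good flags.
```

Note that here, too, what improves over time is the compiler's *choice of optimizations* for a given program, driven by measurements, rather than the compiler's own source code.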

Davide
A: 

Well, there are JIT (just-in-time) techniques. One could argue that a compiler with some JIT optimizations might re-adjust itself to be more efficient with the program it is compiling?
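A JIT technique that "re-adjusts" to the program being run can be sketched as an inline cache: remember the argument type observed at a call site and dispatch straight to a specialized handler while that guess keeps holding. Everything below is an invented miniature, not any real JIT's API:

```python
# Toy inline cache: a call site remembers the last argument type seen
# and jumps straight to a specialized handler until the guard fails.

def add_ints(a, b): return a + b       # specialized fast paths
def add_strs(a, b): return a + b

SPECIALIZED = {int: add_ints, str: add_strs}

def make_call_site():
    cache = {"type": None, "impl": None}
    def call(a, b):
        t = type(a)
        if cache["type"] is not t:      # guard failed: re-specialize
            cache["type"] = t
            cache["impl"] = SPECIALIZED.get(t, lambda a, b: a + b)
        return cache["impl"](a, b)
    return call

site = make_call_site()
assert site(2, 3) == 5                  # specializes for int
assert site(2, 4) == 6                  # cache hit, stays on the fast path
assert site("a", "b") == "ab"           # type changed: re-specializes
```

Again, though, the adaptation lives in the compiled program's execution, not in the compiler's own source - which is the distinction the question keeps coming back to.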

Francisco Garcia
I am talking more about improvements made to the compiler's source code by the compiler itself.
JohnIdol