views: 391

answers: 2
I was reading the transcription of Steve Yegge's Dynamic Languages Strike Back presentation when I noticed this comment as he begins to discuss trace trees:

I'll be honest with you, I actually have two optimizations that couldn't go into this talk that are even cooler than this because they haven't published yet. And I didn't want to let the cat out of the bag before they published. So this is actually just the tip of the iceberg.

What are the optimizations he was referring to?

Update

Several days ago, I asked this question in a comment on the article. However, comment moderation is turned on (for good reasons), so it hasn't appeared yet.

Update

It has been a couple of weeks since I first tried to reach the author. Does anyone else know another way to contact him?

+1  A: 

You can watch that video on YouTube under the StanfordUniversity channel (http://www.youtube.com/watch?v=tz-Bb-D6teE) and add comments there too. Maybe someone will come to your rescue.

jase21
+2  A: 

Take a look at this: http://blog.stackoverflow.com/2009/04/podcast-50/

EDIT: It is difficult to find specific (confirmed) references. However, this paper may give some information on the subject: http://people.mozilla.org/~dmandelin/tracemonkey-pldi-09.pdf, as may this blog post, which appears related: http://andreasgal.wordpress.com/2008/08/22/tracing-the-web/

This one might not be related, as it is a Microsoft Research paper from March 2010: http://research.microsoft.com/pubs/121449/techreport2.pdf

This is purely speculative on my part, but it appears (at least to me) that there are two major forms of performance: that at the developer level (the IDE) and that at the compiler level, which is what trace trees address, hence the "continuous optimization" during execution to get the trace inlined for the hot spots. That quickly leads me to areas of optimization related to multi-cores and how the trace tree might be used in multi-core environments. Interesting stuff, considering the still largely theoretical speed claims for non-statically-typed languages compared with the speed winners that use static typing, such as current C, and the performance potential to be gained.

I recall a discussion I had with a hardware engineer years ago (1979) in which we speculated that if we could just capture the 'hot' execution paths, we could get a huge gain in performance by keeping them "ready to run" in situ somehow. This was well before the work at HP in this regard (1999?), and unfortunately we did not get further than the discussion stage due to other commitments. (I am rambling here, I think... :)
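
To make the "hot execution path" idea concrete, here is a minimal, purely illustrative sketch (Python, with invented opcode names and an arbitrary threshold; this is not any real engine's design) of a toy interpreter that counts loop back-edges and records a linear trace of the loop body once a loop turns hot:

    # Illustrative only: a toy bytecode interpreter that counts how often each
    # loop back-edge fires and, once a back-edge is "hot", records one linear
    # trace of the loop body. Opcode names and the threshold are invented;
    # real tracing JITs (e.g. TraceMonkey) are far more involved.

    HOT_THRESHOLD = 10

    def run(program, env):
        counters = {}     # back-edge pc -> number of times it was taken
        traces = {}       # loop-header pc -> recorded instruction list
        recording = None  # (loop_header_pc, recorded_instructions) while tracing
        pc = 0
        while pc < len(program):
            op, arg = program[pc]

            if recording is not None:
                recording[1].append((op, arg))   # record everything we execute

            if op == "dec":                      # arg = variable name
                env[arg] -= 1
                pc += 1
            elif op == "jump_back_if":           # arg = (variable, loop_header_pc)
                var, header = arg
                if env[var] > 0:                 # back-edge taken: loop continues
                    counters[pc] = counters.get(pc, 0) + 1
                    if counters[pc] == HOT_THRESHOLD and recording is None:
                        recording = (header, [])  # loop is hot: start recording
                    elif recording is not None and header == recording[0] and recording[1]:
                        traces[header] = recording[1]  # back at the header: trace complete
                        recording = None
                    pc = header
                else:                            # loop exits
                    pc += 1
            else:
                pc += 1
        return traces

    # A loop that decrements x 100 times; after 10 back-edges a trace is recorded.
    program = [("dec", "x"), ("jump_back_if", ("x", 0))]
    print(run(program, {"x": 100}))

A real engine would then compile the recorded trace to native code, with guards for the assumptions it made while recording (types, branch directions), and jump straight into that compiled trace the next time the loop header is reached, which is essentially the "keep the hot path ready to run" idea.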

Or was this just related to the Go language? Hard to tell in some respects.

Mark Schultheiss
I haven't listened to the entire thing, but the transcript does not appear to make any reference to this subject. :(
Adam Paynter