tags:
views: 454
answers: 6

Linus Torvalds used to work for a processor company called Transmeta. The processor they made had a RISC-based core. If I remember correctly, the idea was that the core ran an arbitrary, upgradable "processor emulation layer" (it could be x86, PowerPC, etc.), which translated the high-level opcodes into the RISC core's instruction set.

What happened to this idea, and what, in your opinion, were the pros, cons, and situations where such an approach could have had an advantage (in terms of programming)?

+2  A: 

Obvious pros:

  • Ability to run any OS (just switch the processor emulation layer to whatever is needed)
  • Possibility (with kernel support, of course) of running binaries for different architectures on the same processor/OS without software emulation.

Obvious con:

  • Extra emulation layer == more overhead == a faster processor is needed to get equivalent performance for EVERYTHING.
Matthew Scharley
+2  A: 

For one thing, most CISC processors internally translate their opcodes into micro-ops (µops), which are similar to RISC ops. Pipelining and multiple cores have narrowed the gap with RISC processors to the point where the difference between them is very small, if any. If you need cross-compatibility from C source or another assembly front end, you can use LLVM. http://llvm.org/
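
To illustrate that last point: the same C source can be lowered by clang/LLVM to IR or to native assembly for different targets. A minimal sketch (the file name and target triples are only examples; which triples actually work depends on how your clang/LLVM build was configured):

    /* add.c -- one portable C source, several back ends.
     *
     * Example invocations:
     *   clang -O2 -S -emit-llvm add.c -o add.ll        (LLVM IR)
     *   clang -O2 --target=x86_64-linux-gnu  -S add.c  (x86-64 assembly)
     *   clang -O2 --target=aarch64-linux-gnu -S add.c  (64-bit ARM assembly)
     */
    int add(int a, int b) {
        return a + b;
    }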

+4  A: 

The company did not do as well as expected and was eventually acquired by Novafora for its power-saving technology. ( http://www.novafora.com/pr01-28-09.html )

From all accounts I am aware of, the technology simply did not compete with existing systems. It fell far short of its performance targets. Also, while it may have been possible to put another translator on top of their VLIW design, I'm not aware of any product they shipped that did. I don't remember the Crusoe chip being able to accept an alternative "translation" microcode download.

I personally owned a device that used a Crusoe processor, and while it certainly delivered on battery life, the performance of the device was dismal. Some of the blame could probably be laid on the special version of Windows it used, but it was still slow.

At best, it was good for portable remote desktop.

IMHO, the technology has the same benefits as software VMs like .NET and the JVM:

  • The upside is that you can probably accelerate the code more with a hardware solution (as IBM does with its Java accelerator processors) than with a pure software JIT.
  • The downside is that you never get the raw performance that processors executing native code get.

From some perspectives you can think of modern x86 chips as doing code morphing themselves, albeit of a very specialized kind. They translate the x86 architecture into a more efficient RISC-like sub-instruction set and then execute that.

Another example of this sort of technology is FPGAs, which can be programmed to emulate, at the circuit level, various kinds of processors or raw circuits. I believe that some Cray systems can come with "accelerator nodes" of this sort.

Christopher
+1  A: 

Most modern processors actually implement their instruction sets using microcode. There are many reasons for this; compatibility is one of them.

The distinction between what is "hardware" and what is "software" is actually hard to draw. Modern virtual machines such as the JVM or the .NET CLR (which executes CIL) might as well be implemented in hardware, but that would probably just be done with microcode anyway.

One of the reasons for having several layers of abstraction in a system is that the programmers/engineers do not have to think about irrelevant details when they are working at a higher level.

The operating system and system libraries also provide additional abstraction layers. But these layers only make the system "slower" if one does not need the features they provide (e.g. the thread scheduling done by the OS). It is no easy task to write a program-specific scheduler that beats the one in the Linux kernel.

Jørgen Fogh
+1  A: 

I would say that cost reductions come with volume, so something like the Transmeta chip would have had to sell in large quantities before it could compete on price with existing high-volume x86 chips.

If I recall correctly, the point of the Transmeta chip was that it was low power. Having fewer silicon gates to flip back and forth every clock cycle saves energy. The code morphing was there so you could run a complex instruction set (CISC) on a low-power RISC chip.

Transmeta's first processor, the Crusoe, didn't do very well due to problems even running benchmark software. Their second processor, the Efficeon, did manage to use less power than the Intel Atom (in the same performance category) and to perform better than the Centrino in the same power envelope.

Now, looking at it from the software and flexibility standpoint as you are, code morphing is just a form of just-in-time compilation, with all the benefits and drawbacks of that technology. Your x86 code is essentially running on a virtual machine and being emulated by another processor. The biggest benefit of virtualization right now is the ability to share a single processor among many virtual machines so that you have fewer idle CPU cycles, which is more efficient (in hardware cost and energy cost).
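
As a very rough sketch of the JIT idea (an illustration of JIT in general, not Transmeta's actual translator), the following C program writes a few bytes of x86-64 machine code into an executable buffer at run time and then calls them as a function. It assumes an x86-64 POSIX system (Linux or similar) that permits writable and executable mappings:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86-64 encoding of:  mov eax, 42 ; ret */
        unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

        /* Allocate a page we may both write to and execute from. */
        void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(buf, code, sizeof code);

        /* Treat the buffer as a function returning int and call it. */
        int (*fn)(void) = (int (*)(void))buf;
        printf("JIT-generated function returned %d\n", fn());

        munmap(buf, 4096);
        return 0;
    }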

So it seems to me that code morphing, just like any form of virtualization, is all about being more efficient with resources.

Scott Whitlock
A: 

For another approach to hardware-assisted x86 ISA virtualization, you may want to read about the Loongson 3 CPU.

stormsoul