views:

103

answers:

2

I am curious to know if there exists an architecture that is quicker at handling array-intensive calculations. For example, will some arbitrary C code run on a MIPS workstation complete faster than when run on an x86 workstation? I suppose a better question is: do some architectures have a faster FPU?

I am aware that one of the newer options for heavy array and matrix operations is GPU computing and using tools such as Nvidia's CUDA, but I'm more interested in what exists at the traditional CPU level. Thanks!

+2  A: 

Back in the day, many supercomputers used vector processors which essentially operate on multiple array elements simultaneously. They're still used in a few niche fields (the wiki article mentions video games), but are not available for a typical desktop. Probably the closest thing you'll find in a standard desktop will involve using the GPU for non-graphics work.

rmeador
Also, the SIMD instructions of modern processors (SSE/MMX) are inspired by the vector processors of yore.
Ranieri
+1  A: 

Sure there are architectural strengths and weaknesses. There was a while in there when PowerPC chips consistently beat their Intel counterparts in a range of floating point intensive benchmarks. And that difference was reflected in how fast they ran some of the nuclear physics codes I was working on at the time.

However, they ran Word and Excel like dogs, and then their clock speeds really started to slip, and it was all over for PowerPC chips on the desktop. I imagine Apple only stuck with them for as long as they did so that they could get the multi-architecture execution layer working smoothly.

Also note that many architectures have SIMD (Single Instruction Multiple Data, i.e. vectorized) floating point and integer arithmetic units these days (e.g. AltiVec, MMX, SSE, etc). Though these are only mildly vectorized in the general purpose chips when compared to special purpose processors like GPUs.

dmckee