(i) If a program is optimised for one CPU class (e.g. a multi-core Core i7) by compiling the code on that same CPU, will its performance be sub-optimal on CPUs from older generations (e.g. a Pentium 4)? In other words, can optimising for one CPU harm performance on others?

(ii) For optimization, compilers may use x86 extensions (like SSE4) which are not available on older CPUs. Is there a fallback to some non-extension-based routine on older CPUs?

(iii) Does the Intel C++ Compiler optimize better than the Visual C++ Compiler or GCC?

(iv) Will a truly multi-core, threaded application perform efficiently on older CPUs (like a Pentium III or 4)?

+2  A: 

Hi

i) It is probably true that optimising code for execution on CPU X will make that code less optimal on CPU Y than the same code optimised for execution on CPU Y. Probably.

ii) Probably not.

iii) Impossible to generalise. You have to test your code and come to your own conclusions.

iv) Probably not.

For every argument about why X should be faster than Y under some set of conditions (choice of compiler, choice of CPU, choice of optimisation flags for compilation), some clever SOer will find a counter-argument; for every example, a counter-example. When the rubber meets the road, the only recourse you have is to test and measure. If you want to know whether compiler X is 'better' than compiler Y, first define what you mean by 'better', then run a lot of experiments, then analyse the results.
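For what it's worth, here is a minimal timing sketch for that kind of experiment (candidate() is just a placeholder, not code from the question): build the same source with each compiler and flag combination you care about and compare the best-of-N times on each target CPU.

    // Minimal timing sketch; candidate() stands in for the code under test.
    #include <chrono>
    #include <cstdio>

    volatile long sink;                       // keeps the loop from being optimised away

    static void candidate() {
        long s = 0;
        for (int i = 0; i < 1000000; ++i) s += i;
        sink = s;
    }

    int main() {
        using Clock = std::chrono::steady_clock;
        double best_ms = 1e300;
        for (int run = 0; run < 10; ++run) {  // repeat to reduce measurement noise
            auto t0 = Clock::now();
            candidate();
            auto t1 = Clock::now();
            double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
            if (ms < best_ms) best_ms = ms;
        }
        std::printf("best of 10 runs: %.3f ms\n", best_ms);
        return 0;
    }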

Regards

Mark

High Performance Mark
Microsoft and Intel are collaborating on software for multi-core CPUs. I wonder why Intel is interested in developing compilers themselves (the Intel Pro compilers) when they are helping Microsoft develop the Visual C++ compiler; hence my question about whether there is any difference between the Visual C++ and Intel C++ compilers. Thanks.
Traditionally, hardware manufacturers have always had an interest in developing compilers for their own hardware, as a necessary part of marketing. And the manufacturers have often been in the best position to tailor compiler back-ends to new hardware features ready for product launch. Even today, not everyone who uses Intel CPUs uses MS VS for software development.
High Performance Mark
Remember that there are a whole lot of Intel chips out there that don't run Windows. Macintoshes use Intel processors now, and there's Linux and other x86 Unixes. In particular, there's a whole lot of appliances (e.g., routers) that use Intel chips and some variant of Linux, which probably swamps all other uses of Linux in number.
David Thornley
+2  A: 

Compiling on a platform does not mean optimizing for that platform (maybe it's just bad wording in your question).

In all compilers I've used, optimizing for platform X does not affect the instruction set, only how it is used, e.g. optimizing for i7 does not enable SSE2 instructions.
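In GCC/Clang terms this is the difference between `-mtune=` (tuning only) and `-march=` or `-msse4.2` (which do allow new instructions). A minimal sketch to check what a given set of flags permits, assuming GCC or Clang:

    // Minimal sketch (assumes GCC or Clang): these macros are predefined only when
    // the compiler is allowed to *emit* the corresponding instructions
    // (e.g. -march=corei7 or -msse4.2), not when it merely tunes for a CPU
    // with -mtune=corei7.
    #include <cstdio>

    int main() {
    #ifdef __SSE4_2__
        std::puts("SSE4.2 code generation enabled");
    #else
        std::puts("SSE4.2 code generation NOT enabled");
    #endif
    #ifdef __AVX__
        std::puts("AVX code generation enabled");
    #else
        std::puts("AVX code generation NOT enabled");
    #endif
        return 0;
    }

Compiled with `g++ -O2 -mtune=corei7`, both lines report NOT enabled; with `-march=native`, the output reflects whatever the build machine supports.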

Also, optimizers in most cases avoid "pessimizing" non-optimized platforms; e.g. when optimizing for i7, a small improvement on i7 will typically not be chosen if it means a major hit for another common platform.

It also depends on the performance differences between the instruction sets; my impression is that they've become much smaller in the last decade (but I haven't delved too deep lately, so I might be wrong for the latest generations). Also consider that such optimizations make a notable difference only in a few places.

To illustrate possible options for an optimizer, consider the following methods to implement a switch statement:

  • sequence if (x==c) goto label
  • range check and jump table
  • binary search
  • combination of the above

the "best" algorithm depends on the relative cost of comparisons, jumps by fixed offsets and jumps to an address read from memory. They don't differ much on modern platforms, but even small differences can create a preference for one or other implementation.

peterchen
For "Optimisation via Compilation" I'm referring to those optimizations that are performed by "Optimizing Compilers" automatically while converting C++ source to Assembly like Intel C++ (hence I said optimization by compiling) and also I'm not referring to ones that we can do manually or those which are algorithm specific..... Thanks...
Hmm, I'm not seeing a way to implement a switch statement with binary search that isn't slower, bigger, and more complex than a plain jump table. C's switch statement is pretty clearly designed to be implemented as a jump table.
Ken
@Ken: A switch statement using an `int` with a very large but sparse range would be a bad choice for a jump table, could be a bad fit for sequential `if`s, and might do well with a binary search. I don't see that a binary search would be useful for any switch statement with reasonably contiguous cases.
David Thornley
I agree that it's a rather artificial option, but it would make sense if comparisons are expensive (e.g. 32-bit values on a 24-bit platform). The point was to illustrate that there are often different options.
peterchen
A: 

I) If you did not tell the compiler which CPU type to favor, the odds are that it will be slightly sub-optimal on all CPUs. On the other hand, if you let the compiler know to optimize for your specific type of CPU, then it can definitely be sub-optimal on other CPU types.

II) No (for Intel and MS at least). If you tell the compiler to compile with SSE4, it will feel safe using SSE4 anywhere in the code without testing. It becomes your responsibility to ensure that your platform is capable of executing SSE4 instructions, otherwise your program will crash. You might want to compile two libraries and load the proper one. An alternative to compiling for SSE4 (or any other instruction set) is to use intrinsics; these will check internally for the best-performing set of instructions (at the cost of a slight overhead). Note that I am not talking about instruction intrinsics here (those are specific to an instruction set), but intrinsic functions.
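A minimal sketch of doing that dispatch by hand, assuming GCC or Clang (where __builtin_cpu_supports() is available); the function names here are illustrative, and the "SSE4" path is only a stand-in:

    // Runtime CPU dispatch sketch (GCC/Clang): pick a code path once, based on
    // what the CPU actually supports, so the same binary runs on older CPUs.
    #include <cstdio>

    static int sum_scalar(const int* p, int n) {      // plain fallback path
        int s = 0;
        for (int i = 0; i < n; ++i) s += p[i];
        return s;
    }

    // In a real build this would live in a separate translation unit compiled
    // with -msse4.2 and use SSE intrinsics; here it just stands in for that path.
    static int sum_sse4(const int* p, int n) {
        return sum_scalar(p, n);
    }

    int sum(const int* p, int n) {
        if (__builtin_cpu_supports("sse4.2"))          // check the running CPU
            return sum_sse4(p, n);
        return sum_scalar(p, n);                       // safe fallback
    }

    int main() {
        int v[4] = {1, 2, 3, 4};
        std::printf("%d\n", sum(v, 4));
        return 0;
    }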

III) That is a whole other discussion in itself. It changes with every version and may be different for different programs, so the only solution here is to test. Just a note though: Intel compilers are known not to compile well for running on anything other than Intel (e.g. intrinsic functions may not recognize the instruction set of an AMD or VIA CPU).

IV) If we ignore the on-die efficiencies of newer CPUs and the obvious architecture differences, then yes, it may perform as well on an older CPU. Multi-core processing is not dependent per se on the CPU type. But the performance is VERY dependent on the machine architecture (e.g. memory bandwidth, NUMA, chip-to-chip bus) and on differences in the multi-core communication (e.g. cache coherency, bus locking mechanism, shared cache). All this makes it impossible to compare newer and older CPU efficiencies in MP, but that is not what you are asking, I believe. So on the whole, an MP program made for newer CPUs should not use the MP aspects of older CPUs any less efficiently. In other words, just tweaking the MP aspects of a program specifically for an older CPU will not do much. Obviously you could rewrite your algorithm to use a specific CPU more efficiently (e.g. a shared cache may permit an algorithm that exchanges more data between working threads, but that algorithm will die on a system with no shared cache, full bus locking and poor memory latency/bandwidth), but it involves a lot more than just MP-related tweaks.
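One small MP-level tweak that does carry across CPU generations (purely illustrative, not something prescribed above) is sizing the worker pool at runtime instead of hard-coding a core count, so the same binary does not oversubscribe an older single- or dual-core CPU:

    // Sketch: pick the thread count from the machine rather than hard-coding it.
    #include <cstdio>
    #include <thread>
    #include <vector>

    static void worker(int id) {
        std::printf("worker %d running\n", id);   // placeholder for real per-thread work
    }

    int main() {
        unsigned n = std::thread::hardware_concurrency();
        if (n == 0) n = 1;                        // the call may return 0 if unknown
        std::vector<std::thread> pool;
        for (unsigned i = 0; i < n; ++i)
            pool.emplace_back(worker, static_cast<int>(i));
        for (auto& t : pool)
            t.join();
        return 0;
    }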

Juice
A: 

(1) Not only is it possible, but it has been documented on pretty much every generation of x86 processor. Go back to the 8088 and work your way forward, every generation: clock for clock, the newer processor was slower for the current mainstream applications and operating systems (including Linux). The 32- to 64-bit transition is not helping, and more cores at lower clock speeds make it even worse. And this is true going backward as well, for the same reason.

(2) Bank on your binaries failing or crashing. Sometimes you get lucky; most of the time you don't. There are new instructions, yes, and to support them the older CPU would probably have to trap the undefined instruction and fall back to a software emulation of it, which would be horribly slow, and the lack of demand means such emulation is probably not well done or simply not there. Optimization can use new instructions, but more than that, the bulk of the optimization I am guessing you are talking about has to do with reordering instructions so that the various pipelines do not stall. Arrange them to be fast on one generation of processor and they will be slower on another, because within the x86 family the cores change too much. AMD had a good run there for a while, as they would make the same code just run faster instead of trying to invent new processors that would eventually be faster once the software caught up. That is no longer true; both AMD and Intel are struggling just to keep chips running without crashing.

(3) Generally, yes. For example, gcc is a horrible compiler: one size fits all fits no one well, and it can never and will never be any good at optimizing. For example, gcc 4.x code is slower than gcc 3.x code for the same processor (yes, all of this is subjective; it all depends on the specific application being compiled). The in-house compilers I have used were leaps and bounds ahead of the cheap or free ones (and I am not limiting myself to x86 here). Are they worth the price, though? That is the question.
In general, because of the horrible new programming languages and gobs of memory, storage and layers of caching, software engineering skills are at an all-time low. Which means the pool of engineers capable of making a good compiler, much less a good optimizing compiler, decreases with time; this has been going on for at least 10 years. So even the in-house compilers are degrading with time, or the companies simply have their employees work on and contribute to the open-source tools instead of having an in-house tool. The tools the hardware engineers use are degrading for the same reason, so we now have processors that we hope will just run without crashing rather than ones we try to optimize for. There are so many bugs and chip variations that most of the compiler work is avoiding the bugs. Bottom line: gcc has single-handedly destroyed the compiler world.

(4) See (2) above. Don't bank on it. The operating system you want to run this on will likely not install on the older processor anyway, saving you the pain. For the same reason that binaries optimized for your Pentium III ran slower on your Pentium 4 and vice versa, code written to work well on multi-core processors will run slower on single-core processors than if you had optimized the same application for a single-core processor.

The root of the problem is that the x86 instruction set is dreadful. So many far superior instruction sets have come along that do not require hardware tricks to make them faster every generation, but the Wintel machine created two monopolies and the others couldn't penetrate the market. My friends keep reminding me that these x86 machines are microcoded, such that you don't really see the instruction set inside, which angers me even more: the horrible ISA is just an interpretation layer. It is kind of like using Java. The problems you have outlined in your questions will continue so long as Intel stays on top; if the replacement does not become the monopoly, then we will be stuck forever in the Java model, where you are on one side or the other of a common denominator: either you emulate the common platform on your specific hardware, or you write apps and compile to the common platform.

dwelch