Hello,

I was just wondering what other CPU architectures are available besides those from Intel & AMD. So I found the List of CPU architectures article on Wikipedia.

It categorizes notable CPU architectures into the following categories:

  1. Embedded CPU architectures
  2. Microcomputer CPU architectures
  3. Workstation/Server CPU architectures
  4. Mini/Mainframe CPU architectures
  5. Mixed core CPU architectures

I was analyzing their purposes and have a few doubts. I'm taking the microcomputer CPU (PC) architecture as a reference and comparing the others to it.

Embedded CPU architectures

  • They are a completely new world.
  • Embedded systems are small and do a very specific task, mostly in real time and with low power consumption, so we do not need as many (or as wide) registers as are available in a microcomputer CPU (a typical PC). In other words, we need a new, small and tiny architecture. Hence a new architecture and a new instruction set: RISC.
  • The above point also clarifies why we need a separate operating system (an RTOS).

Workstation/Server CPU architectures

  • I don't know what a workstation is. Could someone clarify?
  • As for the server: it is dedicated to running specific software (server software like httpd, mysql, etc.). Even if other processes run, we need to give the server process priority, so there is a need for a new scheduling scheme, and thus for an operating system different from a general-purpose one (see the sketch after this list). If you have any more points on the need for a server OS, please mention them.
  • But I don't get why we need a new CPU architecture. Why can't the microcomputer CPU architecture do the job? Can someone please clarify?
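
For example, a general-purpose OS already exposes a simple knob for this; here is a minimal sketch using the real POSIX setpriority() call (the -10 value is an arbitrary example, and raising priority needs sufficient privileges):

    #include <stdio.h>
    #include <sys/resource.h>

    /* Raise this process's scheduling priority so the scheduler
     * favors the server process over everything else. */
    int main(void) {
        if (setpriority(PRIO_PROCESS, 0, -10) != 0)
            perror("setpriority");
        else
            printf("server process priority raised\n");
        return 0;
    }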

Mini/Mainframe CPU architectures

  • Again, I don't know what these are or what minicomputers and mainframes are used for. I just know that they are very big and can occupy a complete floor, but I have never read about the real-world problems they are trying to solve. If anyone is working on one of these, please share your knowledge.
  • Can someone clarify their purpose, and why the microcomputer CPU architecture is not suitable for them?
  • Is there a new kind of operating system for these too? Why?

Mixed core CPU architectures

  • Never heard of these.

If possible, please keep your answer in this format:

XYZ CPU architectures

  • Purpose of XYZ.
  • Need for a new architecture: why can't the current microcomputer CPU architecture do the job? They go up to 3 GHz and have up to 8 cores.
  • Need for a new operating system: why do we need a new kind of operating system for this kind of architecture?

EDIT:

Guys, this is not a homework problem, and I can't do anything more to make you believe that. I don't know whether the question is unclear or it's something else, but I'm only interested in specific technical details.

Let me put part of this question another way. You are in an interview, and the interviewer asks you: "Tell me, microcomputer processors are fast and very capable, and our PC operating systems are good. Why do we need a different architecture like SPARC or Itanium, and a different OS like Windows Server, for servers?" What would you answer? I hope you get my point.

+1  A: 

Mainframe

  • It processes massive amounts of information, with many instructions executing at the same time.
  • A home (PC/desktop) computer can't cope with running that much code at the same time, nor with processing that much data.
  • An operating system specific to the particular architecture makes it more efficient on that specific hardware.

HW Architecture Example

A weather mainframe processing real-time information from sensors in different states.

OS Architecture Example

Let's say the normal command to draw something on a normal PC is DRAW "text". Now suppose you have a lot of screens and want to draw the same thing on each of them; with this PC, you would have to call DRAW "text" once per screen. However, you might build hardware with a DRAWS command that automatically draws the same text on every screen: DRAWS "text".
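
A minimal C sketch of the same idea (draw_on() and draw_broadcast() are invented names for illustration, not a real API):

    /* Hypothetical display API -- both declarations are invented. */
    void draw_on(int screen, const char *text);  /* draw on one screen */
    void draw_broadcast(const char *text);       /* hardware fans out to all screens */

    /* On a normal PC: one call (and one round trip) per screen. */
    void draw_all_pc(const char *text, int nscreens) {
        for (int i = 0; i < nscreens; i++)
            draw_on(i, text);
    }

    /* With a broadcast command: the loop moves into the hardware. */
    void draw_all_broadcast(const char *text) {
        draw_broadcast(text);
    }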

Christian Sciberras
A: 

In a nutshell: any design must satisfy some set of requirements. In satisfying any complex set of requirements, compromises will have to be made: satisfying requirement X to the n-th degree may make it impossible to satisfy requirement Y. So, whether you are talking about CPUs or washing machines, there will be a variety of designs to meet a variety of requirements.

The situation is made more complex, but not essentially changed, by the evolution of both technologies and of requirements over time.

High Performance Mark
A: 

Could you, for example, solve all the transportation problems in the world if the only vehicle were an automatic-transmission Toyota pickup (the old small ones, not the newer full-size)?

Why would you ever need something else?

Well, not everyone can drive a stick, and not everyone fits in a Toyota (I am thinking height more than width). You can't carry the family. You cannot haul large objects, certainly not efficiently. How do you get the trucks to the dealer to sell? Drive them one at a time?

If we used a server-class processor in our television remote control, we would need an extension cord and a cooling fan, or we would need to replace the batteries on every button press and wait for it to boot first.

RTOSes and operating systems: same answer as above. You don't normally use an RTOS in a low-powered microcontroller; you often have ROM measured in hundreds of bytes and RAM measured in tens of bytes. There is no room for bloatware. Purpose-built software on purpose-built hardware.

Look at the ARM vs. Intel thing going on now. Intel is horrible at hardware design; its success comes from conference rooms and telecons, not from the hardware on a motherboard. You can get the same performance using alternate instruction sets from alternate vendors at a fraction of the initial and operating cost. Why settle on one ancient solution?

Few operating systems are reliable, and the same goes for compilers and hardware, for that matter. Some software and hardware is designed for performance or reliability, but not necessarily for user-friendliness. I don't want the landing-gear lever to make the pilot reach over to a mouse, click OK in an "Are you sure you want to deploy the landing gear?" window, and then watch the hourglass spin while it decides whether to do it or not.

For the same reason you need a pickup truck for some jobs and a tractor-trailer for others, you need one class of machine (and software) for the home desktop, another for small- to medium-business servers, and another for large corporations. You cannot just make a pickup arbitrarily smaller or bigger depending on the job; you sometimes need more wheels, an enclosure or not, more or fewer seats, power takeoffs, hydraulics or not, etc., depending on the task the vehicle is designed for.

Where would we be if we had stopped at the 8-bit processor running CP/M? If that solved all the world's problems, why would we ever need to develop an alternative? 100% of the innovations, cost savings and performance increases are the result of questioning the current solution and trying something different.

One size fits all fits no one well.

dwelch
-1 I thank you for your time and the effort to make me understand this stuff, but I'm not too dumb to understand why we need different solutions: they solve different problems. That is just an overview, though. I'm looking for specific technical details, not philosophy about why we need different things. That's why, instead of just asking for what I need, I wrote down the technical details of what I know, and that's also why I requested an answer format.
claws
+4  A: 

Workstations are a now almost-extinct form of computer. Basically, they used to be high-end computers that looked like desktops but had some important differences, such as RISC processors, SCSI drives instead of IDE, and UNIX or (later) the NT line of Windows operating systems. The Mac Pro can be seen as a present-day form of workstation.

Mainframes are big computers (though they do not necessarily occupy a whole floor). They provide very high availability (most parts of a mainframe, including processors and memory, can be replaced without the system going down) and backwards compatibility (many modern mainframes can run unmodified software written for 1970s mainframes).

The biggest advantage of the x86 architecture is compatibility with the x86 architecture. CISC is usually considered obsolete, which is why most modern architectures are RISC-based. Even new Intel & AMD processors are RISC under the hood.

In the past, the gap between home computers and "professional" hardware was much bigger than it is today, so "microcomputer" hardware was inadequate for servers. When most of the RISC "server" architectures (SPARC, PowerPC, MIPS, Alpha) were created, most microcomputer chips were still 16-bit. The first 64-bit PC chip (the AMD Opteron) shipped over 10 years after the MIPS R4000. The same went for operating systems: PC operating systems (DOS and non-NT Windows) were simply inadequate for servers.

In embedded systems, x86 chips are simply not power-efficient enough. ARM processors provide comparable processing power while using much less energy.

el.pescado
claws
http://en.wikipedia.org/wiki/IA-32#Current_implementations
el.pescado
But, perhaps, I should say "are somehow RISC-like under the hood".
el.pescado
+1 for "The biggest advantage of x86 architecture is compatibility with x86 architecture".
Tomasz Łazarowicz
+3  A: 

I don't know what a workstation is. Could someone clarify?

Workstations used to be a class of systems intended to be used by a single (or alternating) user for tasks that demanded more computing power than a PC offered. They basically died out in the 1990s as economies of scale in R&D allowed standard PC hardware to offer the same (and eventually more) performance for a much lower price.

Workstations were made by companies such as Sun, SGI and HP. They usually ran a proprietary Unix variant and often had specialized hardware as well. Typical applications were scientific computing, CAD and high-end graphics.

"Workstation architectures" were characterized by the goal to deliver high performance for single-user applications with price as a very secondary consideration.

Michael Borgwardt
A: 

What type of architecture does a mainframe server have? Why do we use mainframe servers instead of others?

neha
@neha: This is not a regular forum. If you need to ask something, you need to comment on the question or on the relevant answer.
claws
+1  A: 

It will probably help to consider what the world was like twenty years ago.

Back then, it wasn't as expensive to design and build world-class CPUs, so many more companies had their own. What has happened since is largely explained by the increasing price of CPU design and fabs, which meant that what sold in very large quantities survived a lot better than what didn't.

There were mainframes, mostly from IBM. These specialized in high throughput and reliability. You wouldn't do anything fancy with them, it being much more cost-effective to use lower-cost machines, but they were, and are, great for high-volume business-type transactions of the sort programmed in COBOL. Banks use a lot of these. These are specialized systems. Also, they run programs from way back, so compatibility with early IBM 360s, in architecture and OS, is much more important than compatibility with x86.

Back then there were also minicomputers, which were smaller than mainframes, generally easier to use, and larger than anything personal. These had their own CPUs and tended to run their own special operating systems. I believe they were already dying at the time, and they're mostly dead now. The premier minicomputer company, Digital Equipment Corporation, was eventually bought by Compaq, a PC maker.

There were also workstations, which were primarily intended as personal computers for people who needed a lot of computational power. They generally had considerably cleaner-designed CPUs than Intel's, which at that time meant they could run a lot faster. Another form of workstation was the Lisp machine, available at least into the late '80s from Symbolics and Texas Instruments; these had CPUs designed to run Lisp efficiently. Some of these architectures remain, but as time went on it became much less cost-effective to keep them up. With the exception of Lisp machines, workstations tended to run versions of Unix.

The standard IBM-compatible personal computer of the time wasn't all that powerful, and the complexity of the Intel architecture held it back considerably. This has changed. The Macintoshes of the time ran on Motorola's 680x0 architecture, which offered significant advantages in computational power. Later, they moved to the PowerPC architecture pioneered by IBM workstations.

Embedded CPUs, as we know them now, date from the late 1970s. They were characterized by being complete low-end systems with a low chip count, preferably using little power. The Intel 8080, when it came out, was essentially a three-chip CPU and required additional chips for ROM and RAM. The 8048 was one chip with CPU, ROM, and RAM on board; it was correspondingly less powerful, but suitable for a great many applications.

Supercomputers had hand-designed CPUs and were notable for making parallel computing as easy as possible, as well as for optimizing the CPU for (mostly) floating-point multiplication.

Since then, mainframes have stayed in their niche, very successfully, while minicomputers and workstations have been squeezed badly. Some workstation CPUs stay around, partly for historical reasons. Macintoshes eventually moved from PowerPC to Intel, although IIRC the PowerPC lives on in the Xbox 360 and some IBM machines. The expense of keeping a good OS up to date grew, and modern non-mainframe systems tend to run either Microsoft Windows or Linux.

Embedded computers have also gotten better. There are still small and cheap chips, but the ARM architecture has become increasingly important. It was in some early netbooks, and it is in the iPhone, iPad, and many comparable devices. It has the virtue of being reasonably powerful with low power consumption, which makes it very well suited to portable devices.

The other sort of CPU you'll run into on common systems is the GPU, which is designed to do high-speed, specialized parallel processing. There are software platforms that allow programming GPUs to do other things as well, taking advantage of their strengths.

The difference between desktop and server versions of operating systems is no longer fundamental. Usually, both will have the same underlying OS, but the interface level will be far different. A desktop or laptop is designed to be easily usable by one user, while a server needs to be administered by one person who's also administering a whole lot of other servers.

I'll take a stab at mixed core, but I might not be accurate (corrections welcome). The Sony PlayStation 3 has an unusual processor, with different cores specialized for different purposes. Theoretically, this is very efficient. More practically, it's very hard to program a mixed-core system, and such systems are rather specialized. I don't think this concept has a particularly bright future, but it's doing nice things for Sony's sales in the present.

David Thornley
+1  A: 

It seems like your question and your goal are really about understanding the history of computer architecture. If that is true, then you need this book. It should help you gain the understanding you are looking for:

http://www.amazon.com/Computer-Architecture-Concepts-Evolution-2/dp/0201105578

Dr. Brooks covers the history of computer architecture and the initial appearance of new ideas, and traces the development of those ideas through different machines over time.

James Branigan
+1  A: 

One addition on embedded CPU architectures: they usually have to be cheaper than mainstream processors, so that they do not raise the product's price considerably.

Mixed core CPU architectures

  • They are usually used where there is a need for high throughput, speed, and/or lower power requirements - embedded applications, DSPs, cryptography, gaming, high-performance computing.

  • Mixed-core architectures offer one or more specialized cores that fit a specific problem domain, in addition to the general-purpose (GP) core. The specialized cores can be used as accelerators for a specific part of the application that is considered the bottleneck. Although one could achieve the same performance by adding more GP cores, this may be impractical because of the technology used, die size, power constraints, dissipated heat, or programmability - the specialized cores do one thing, or at least a couple of things, faster and more efficiently than a GP core. They exist for the same reasons that graphics cards use a different architecture in their GPUs.

  • Mainstream OSes are written and optimized for mainstream CPUs and are compiled targeting a mainstream processor architecture. Moreover, the specialized cores are usually not generic enough to run the OS themselves. So we don't strictly need a new OS, just modifications that allow the system to recognize and use the specialized cores - either through a library or through a driver (see the sketch after this list). Using the specialized core requires partial recompilation so that the executable code targets it.
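
As a minimal C sketch of that library/driver path (accel_available() and accel_dot() are hypothetical names standing in for a vendor-supplied interface, not a real API):

    #include <stddef.h>

    /* Hypothetical vendor interface to the specialized core. */
    int   accel_available(void);
    float accel_dot(const float *a, const float *b, size_t n);

    /* Portable fallback that runs on the general-purpose core. */
    static float cpu_dot(const float *a, const float *b, size_t n) {
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }

    /* Only this hot spot is retargeted at the specialized core; the OS
     * and the rest of the application stay unchanged. */
    float dot(const float *a, const float *b, size_t n) {
        return accel_available() ? accel_dot(a, b, n) : cpu_dot(a, b, n);
    }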

Some notes:

  • Mainstream chips are effectively mixed-core. They have MMX, SSE, SSE2 and SSE3 instructions, floating-point instructions, and sometimes cryptographic extensions, which effectively makes them a "mixed-core" architecture; however, they are so popular that they are included in the microcomputer processor category. Think of AMD's Fusion and Intel's Larrabee. (A concrete example follows these notes.)

  • x86 is so popular because a lot of research, effort and investment has gone into making good tools (compilers, debuggers, etc.) for it. Moreover, the majority of programs are closed-source and compiled for x86, so you cannot run them on any other architecture. Finally, a lot of code contains hand-written optimizations, or assumptions that it will be compiled and executed on x86; compiling for a different architecture would require a partial application rewrite.

  • Another good reason for different architectures is control and tight integration of the different subsystems. IBM has its own CPUs (PowerPC), OS (AIX) and libraries, offering an optimally tuned package that is difficult to move away from once you have bought it. The same goes for Sun (now Oracle) with SPARC and Solaris, and, a few years back, for HP with PA-RISC and HP-UX. It is not evil or anything like that: they offer a package that fits your application exactly, and if something goes wrong they can reproduce it easily because they are familiar with all aspects of the system, both hardware and software.
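
To make the first note concrete, here is a small real-world C example of the SIMD "core" inside an ordinary x86 chip, using the standard SSE intrinsics from <xmmintrin.h>:

    #include <xmmintrin.h>  /* SSE intrinsics shipped with x86 compilers */

    /* Adds four packed floats in a single SSE instruction -- the kind of
     * specialized unit that makes a mainstream chip "mixed-core".
     * Each array must hold at least 4 floats. */
    void add4(const float *a, const float *b, float *out) {
        __m128 va = _mm_loadu_ps(a);             /* load 4 floats (unaligned OK) */
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(out, _mm_add_ps(va, vb));  /* 4 additions at once */
    }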

ipapadop