While HDDs keep evolving and offering more and more space in less physical room, why are we "sticking with" 32-bit or 64-bit?

Why can't there be, for example, a 128-bit processor?

(This is not homework; I'm just a student interested in things beyond what they teach us in informatics.)

+3  A: 

There's very little need for this; when do you deal with numbers that large? The addressable memory space available to a 64-bit machine is well beyond what any machine will actually hold for at least a few years, and beyond that it's probably more than any desktop will hold for quite a while.

Yes, desktop memory will continue to increase, but 4 billion times what it is now? That's going to take a while. Sure, we'll get to 128-bit eventually, if the whole current model isn't thrown out before then, which I see as equally likely.

Also, it's worth noting that upgrading something from 32-bit to 64-bit immediately puts you in a performance hole in most scenarios (this is a major reason Visual Studio 2010 remains 32-bit only). The more small objects you have, the more pointers you have, and each pointer is now twice as large; that's more data to pass around to do the same thing, especially if you don't actually need that much addressable memory space. The same will happen going from 64-bit to 128-bit.
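
To put a rough number on that pointer-doubling effect (a minimal sketch of my own, not from the answer): compile the same C snippet as a 32-bit and a 64-bit binary and compare the sizes.

    #include <stdio.h>

    /* A typical pointer-heavy node: two links plus a small payload. */
    struct node {
        struct node *prev;
        struct node *next;
        int value;
    };

    int main(void) {
        /* 32-bit build: 4-byte pointers, sizeof(struct node) == 12.
           64-bit build: 8-byte pointers, sizeof(struct node) is typically
           24 (two 8-byte pointers + a 4-byte int + 4 bytes of padding). */
        printf("pointer size: %zu bytes\n", sizeof(void *));
        printf("node size:    %zu bytes\n", sizeof(struct node));
        return 0;
    }

The node roughly doubles in size even though the payload hasn't changed; a hypothetical 128-bit pointer would double it again.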

Nick Craver
The *the more small objects you have, the more pointers, which are now twice as large* argument is only really relevant if you haven't upgraded that 1GB of memory you've been using since you first installed XP :)
slugster
@slugster - My Visual Studio 2k8 routinely reaches over 2GB of RAM...and gets very sluggish. More memory used means more memory moved around, accessed, and processed, and the problem is that bandwidth isn't getting *that* much better. When VS eats another several hundred megs for no reason, that's a big performance hit. I say this from a quad-core machine running 16GB on Win7 64 :)
Nick Craver
+4  A: 

Because the difference between 32-bit and 64-bit is astronomical - it's really the difference between 2^32 (a ten-digit number in the billions) and 2^64 (a twenty-digit number in the squillions :-).

64 bits will be more than enough for decades to come.
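
To put actual digits on that (a quick sketch I'm adding for illustration, not part of the original answer):

    #include <stdio.h>
    #include <inttypes.h>

    int main(void) {
        /* Largest unsigned values representable in 32 and 64 bits. */
        printf("2^32 - 1 = %" PRIu32 "\n", UINT32_MAX);  /* 4,294,967,295 */
        printf("2^64 - 1 = %" PRIu64 "\n", UINT64_MAX);
        /* 18,446,744,073,709,551,615 -- about 4.3 billion times larger */
        return 0;
    }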

RichieHindle
... and no one needs more than 640 KB anyway.
dbemerlin
@dbemerlin: It took us a couple of decades to go from 640K to around 10,000 times that. The jump from 2^32 to 2^64 paves the way for a 4,000,000,000-fold increase. OK, so progress is accelerating, but I still reckon that's going to last us a couple of decades. :-)
RichieHindle
A: 

The next big thing in processor architecture will be quantum computing. Instead of being just 0 or 1, a qubit can be in a superposition of both.

This will lead to huge improvements in the performance of some algorithms (for instance, it would make it easy to crack any RSA public/private key).

Check http://en.wikipedia.org/wiki/Quantum_computer for more information and see you in 15 years ;-)

Jerome WAGNER
+2  A: 

Cost. Also, what do you think a 128-bit architecture will get you? Memory addressing and such, but to handle it effectively you need higher-bandwidth buses and basically a new instruction set to go with it. 64-bit is more than enough for addressing (2^64 = 18,446,744,073,709,551,616 bytes).

HDDs still have a fair bit of ground to cover to catch up to RAM; I think they're still going to be the I/O bottleneck. Plus, newer chips are just adding more cores rather than making a massive change to the architecture.

SB
+1  A: 

When we talk about an n-bit architecture we are often conflating two rather different things:

(1) n-bit addressing, e.g. a CPU with 32-bit address registers and a 32-bit address bus can address 4 GB of physical memory

(2) size of CPU internal data paths and general purpose registers, e.g. a CPU with 32-bit internal architecture has 32-bit registers, 32-bit integer ALUs, 32-bit internal data paths, etc

In many cases (1) and (2) are the same, but there are plenty of exceptions and this may become increasingly the case, e.g. we may not need more than 64-bit addressing for the foreseeable future, but we may want more than 64 bits for registers and data paths (this is already the case with many CPUs with SIMD support - see the sketch below).

So, in short, you need to be careful when you talk about, e.g. a "64-bit CPU" - it can mean different things in different contexts.
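
To make (2) without (1) concrete, here's a minimal sketch of my own (assuming an x86-64 compiler, where SSE2 is part of the baseline): a "64-bit" CPU already has 128-bit XMM registers, and a single SSE2 instruction operates on all 128 bits at once.

    #include <stdio.h>
    #include <emmintrin.h>  /* SSE2 intrinsics */

    int main(void) {
        /* __m128i maps to a 128-bit XMM register holding four 32-bit ints. */
        __m128i a = _mm_set_epi32(4, 3, 2, 1);
        __m128i b = _mm_set_epi32(40, 30, 20, 10);
        __m128i sum = _mm_add_epi32(a, b);   /* four adds, one instruction */

        int out[4];
        _mm_storeu_si128((__m128i *)out, sum);
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* 11 22 33 44 */
        return 0;
    }

Pointers in that program are still 64 bits; only the data registers are wider.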

Paul R
A: 

The main need for a 64-bit processor is to address more memory, and that is the driving force behind the switch to 64-bit. On 32-bit systems you can really only address 4 GB of RAM, at least per process, and 4 GB is not much.

64 bits give you an address space of about 16 exabytes (though a lot of current 64-bit hardware can address "only" 48 bits - that's still enough to support 256 terabytes of RAM).
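
The arithmetic behind those figures, as a small sketch of my own:

    #include <stdio.h>

    int main(void) {
        /* Bytes addressable with various address widths. */
        printf("32 bits: %llu bytes (4 GiB)\n",   1ULL << 32);
        printf("48 bits: %llu bytes (256 TiB)\n", 1ULL << 48);
        /* 2^64 itself overflows a 64-bit integer, so print the top address. */
        printf("64 bits: %llu is the top address (a 16 EiB space)\n", ~0ULL);
        return 0;
    }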

Upping a processor's natural integer size does not automatically make it "better", though. There are tradeoffs. With 128 bits you'd need twice as much storage (registers/RAM/caches/etc.) as with 64 bits for common data types - with all the drawbacks that might have: more RAM needed to store the same data, more data to transmit (= slower), and wider buses that might require more physical space and perhaps more power, etc.

nos