When 64-bit processors came out, it wasn't too big of a deal. Sure, C++ programmers had to deal with the fact that their 32-bit pointer math didn't work on 64-bit machines, but that's what you get for not using sizeof (or for not using Java!). In most cases our languages already had 64-bit primitives, so all the compiler needed to do was use the new 64-bit instructions.

What about the future? What is going to happen if Intel decides it would be a great idea to come out with 128-bit processors? How are languages going to adapt?

I see a few outcomes:

  1. We add new 128-bit primitives (and hence keywords) and possibly break existing code.
  2. We silently widen our existing primitives to 64 and 128 bits, the way C++ already leaves their sizes implementation-defined (this won't work for Java, since int is defined as exactly 32 bits; see the sketch after this list).
  3. We stay with 64-bit forever.
  4. There is a paradigm shift in new languages where primitives no longer have fixed, built-in sizes.
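
As an aside on option 2: C and C++ only guarantee minimum sizes for their integer types, so their widths already differ between platforms. A quick check like the following (purely illustrative) prints different numbers on different targets:

    #include <cstdio>

    int main() {
        // Nothing in the standard pins these down beyond minimum sizes.  For example,
        // long is 4 bytes on 64-bit Windows but 8 bytes on 64-bit Linux, while int
        // has stayed at 4 bytes on both so far.
        std::printf("int: %zu  long: %zu  long long: %zu  void*: %zu\n",
                    sizeof(int), sizeof(long), sizeof(long long), sizeof(void*));
    }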

I am leaning towards 3, but hope for 4.

An example of outcome #4 would be a language where integers are true mathematical integers, with no bounds at all, and floating-point types let you ask for however many bits of precision you want. The compiler or runtime would then pick the right instructions for whatever hardware is available.

How would you store such an integer? It would be variable length, like a String. How would you write out fixed-length bytes? You would probably need a byte primitive that ranges from 0 to 255, so that you can convert these boundless integers into fixed-size byte arrays.
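
Purely as a sketch of that idea, here is one way such a boundless integer could be stored and written out in C++; the BigNat alias and toFixedBytes helper are made up for this example and not part of any existing library:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <stdexcept>
    #include <vector>

    // Hypothetical unbounded non-negative integer: variable-length, little-endian bytes.
    using BigNat = std::vector<std::uint8_t>;

    // Serialize into exactly `width` bytes (zero-padded), or throw if the value won't fit.
    std::vector<std::uint8_t> toFixedBytes(const BigNat& n, std::size_t width) {
        if (n.size() > width)
            throw std::overflow_error("value does not fit in the requested width");
        std::vector<std::uint8_t> out(width, 0);
        std::copy(n.begin(), n.end(), out.begin());
        return out;
    }

    int main() {
        BigNat n = {0x2A};                // the value 42
        auto bytes = toFixedBytes(n, 8);  // fixed 8-byte little-endian output
        return bytes[0] == 0x2A ? 0 : 1;
    }

A real implementation would also need arithmetic and a sign, but the point is that a fixed width becomes an explicit conversion rather than a property of the type.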

What do you think is going to happen?

+7  A: 

You say:

"When 64-bit processors came out, it wasn't too big of a deal"

Actually, that's not at all true. Probably the first well-known 64-bit systems were DEC's Alpha boxes, introduced around 1992.

Back then many programmers like myself spent a lot of time fixing open source code that assumed that sizeof(int) == sizeof(void*).

That porting code to x64 is considered easy these days is only because much of the real pain was suffered 15+ years ago.
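
For anyone who never saw that era, the classic breakage looked something like this contrived example - a pointer stuffed into an int - and the modern fix is to use an integer type actually defined to round-trip a pointer:

    #include <cstdint>
    #include <cstdio>

    int main() {
        int value = 42;
        int* p = &value;

        // The old assumption: a pointer fits in an int.  Once sizeof(void*) became 8
        // while sizeof(int) stayed 4, this truncated the address (or refused to compile):
        // int bad = (int)p;

        // The fix: uintptr_t is guaranteed wide enough to hold a pointer.
        std::uintptr_t a = reinterpret_cast<std::uintptr_t>(p);
        std::printf("round-tripped: %d\n", *reinterpret_cast<int*>(a));
    }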

Alnitak
Keep reading to the second sentence; I mention this.
Pyrolistical
Yes, you do, but back then that lesson really hadn't been learnt.
Alnitak
hop
hop, there you go discussing with people again. quit it.
Karl
Pain wasn't suffered 15 years ago; it was suffered 30 years ago. People, disbelieving that IBM made 32-bit machines, assumed all the world's a PDP-11 with 16-bit ints. The VAX arrived with 32 bits and the suffering was monumental. The lesson was learned, so the transition from 8086 to 80386 went smoothly.
Windows programmer
+10  A: 

The porting problems only really occur when you change the size of a pointer and break old assumptions about how much memory is needed to represent an address. If a chip has instructions to do arithmetic on 128-bit integers, the CLR will simply gain a 128-bit integer type, and it would be sensible for C++ to do the same. But if pointers became 128-bit, that would create a whole new round of pain for C++ programs, everywhere someone has assumed that 64 bits would be enough.
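
That has already half-happened for integers: GCC and Clang expose a 128-bit integer type today as a compiler extension on 64-bit targets, synthesised from pairs of 64-bit instructions, with no new keyword in standard C++:

    #include <cstdio>

    int main() {
    #if defined(__SIZEOF_INT128__)
        // __int128 is a GCC/Clang extension, compiled down to pairs of 64-bit instructions.
        unsigned __int128 x = (unsigned __int128)1 << 100;   // needs more than 64 bits
        std::printf("sizeof = %zu, top half = %llu\n",
                    sizeof(x), (unsigned long long)(x >> 64));
    #else
        std::printf("no 128-bit integer extension on this compiler\n");
    #endif
    }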

So the question becomes: do we need 128-bit memory addressing?

Bill Gates is famous for supposedly saying that no one would ever need to address more than 640 KB - probably not what he actually said. But I think this is different. 64 bits is enough to address about 16 exabytes - roughly 18 billion gigabytes. Think about how much HD video that is. It's an enormous number. Some have argued (e.g. Knuth) that 64-bit addressing itself is overkill.

So I don't think we will need to go to 128 bits for addressing memory, at least in our lifetimes, or in the lifetimes of today's languages, tools and platforms. Memory and computing power may have been doubling at regular intervals, but that is not the same as doubling the number of bits needed to address it: every time memory doubles, you only need one more address bit. If the memory available doubles every two years, that means you only need to add one bit every two years on average. So if 32-bit processors really took off in the mid-nineties, we won't need 128 bits until around 2187 - assuming memory capacities continue to grow in the same way, which economically depends on lots of people needing a flat memory space big enough to address every atom in a mountain of raw silicon.
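
To make that arithmetic concrete, here is the projection as a few lines of C++; the 1995 starting point and the two-year doubling period are this answer's assumptions, not measured data:

    #include <cstdio>

    int main() {
        const int startYear   = 1995;  // assumption: 32-bit addressing mainstream by then
        const int startBits   = 32;
        const int yearsPerBit = 2;     // assumption: memory doubles every two years

        const int targets[] = {48, 64, 128};
        for (int bits : targets) {
            std::printf("%3d address bits needed around %d\n",
                        bits, startYear + (bits - startBits) * yearsPerBit);
        }
        // Prints 2027 for 48 bits, 2059 for 64 bits, and 2187 for 128 bits.
    }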

And the added cost of lugging around an extra 8 bytes for every pointer in every structure would make computers more power-hungry, for no real purpose in almost all imaginable applications.

So I'm predicting YES to your 3rd possibility.

Daniel Earwicker
A nit: if address space needed doubles every 2 years, starting from 32 bits around 2000, then it exceeds 64 bits around 2064, not 2187. The figure you gave is for the transition *after* the 128-bit transition.
Darius Bacon
Sounds like you're assuming we'd have to jump to 128 as soon as we exceed 64 - not so. Lots of architectures in the past had bitcounts that were not themselves powers of 2. So (as I said) I was figuring when we'd need 128, not when we'd need more than 64.
Daniel Earwicker
The next figure would definitely be 128 bits. It's so much work to increase the bit count past 64 bits that they might as well double it. See geometric expansion: http://en.wikipedia.org/wiki/Dynamic_array
Pyrolistical
There are some reasons why you might want larger addresses besides exhausting memory. A good example of this is Single Address Space OSes, which simplify shared memory concurrency by giving every resource its own, globally unique address. This exhausts address space far more quickly than regular memory allocation.
TokenMacGuy
A: 

To phrase what @Earwicker said a different way:

If we assume that Moore's Law maps to address space, then we will want 1 additional bit in our addresses every 18 months. By that math, the move from 32- to 64-bit gives us 48 years until we need larger addresses.

That's long enough for many of the other things we take for granted about computers today to change, which makes this discussion unlikely to produce much value right now.

Jay Bazuzi
The last 7 * 18 months have seen about 10 bits, not just 7, added to the size of a memory address or disk sector address in a common desktop PC.
Windows programmer
A: 

As a follow-up to the slow growth in real address space, I learned this year that a brand new Intel quad-core chip, while supposedly a 64-bit chip, actually supports only a 48-bit address space. The most significant 16 bits are required to be equal to bit 47 (this is called 'canonical form' in Intel's documentation). So if we double memory every 2 years, we still have 32 years left before we have to worry about outgrowing what we have now. And oh yes, if we have only 16 GB on the motherboard we are really only using 34 bits' worth of physical addresses, so we have some room to grow to get to 48 yet :-)
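
To illustrate that canonical-form rule with a made-up helper (newer chips with five-level paging widen it to 57 bits, but the idea is the same), bits 63 down to 48 must all be copies of bit 47:

    #include <cstdint>
    #include <cstdio>

    // True if bits 63..47 are all equal, i.e. the address is sign-extended from bit 47.
    bool isCanonical(const void* p) {
        std::uint64_t a = reinterpret_cast<std::uint64_t>(p);
        std::uint64_t top = a >> 47;          // the 17 bits from bit 47 upward
        return top == 0 || top == 0x1FFFF;    // all zeros or all ones
    }

    int main() {
        int x = 0;
        std::printf("stack address canonical? %d\n", isCanonical(&x) ? 1 : 0);
    }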

At least Intel is doing the smart thing and not letting programmers mess with those high bits. I remember the original IBM 370 had a 32-bit word, but only 24 bits of the address were actually used by the hardware. (We had to get special permission to submit jobs that would use all sixteen megabytes!!) Anyway, when the day of reckoning came, IBM had to transition from 370 to 370-XA, and it turned out clever programmers had stored all sorts of useful information in those most significant 8 bits. It was a real headache for those who had to port CP and VM/CMS.
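
The same temptation exists today: a 48-bit canonical address appears to leave 16 'free' bits at the top of every 64-bit pointer, and code that hides a tag in them (sketched below with made-up helper names, not a recommendation) is setting up exactly the same day of reckoning for when the hardware or OS starts using those bits:

    #include <cassert>
    #include <cstdint>

    const std::uint64_t ADDR_MASK = 0x0000FFFFFFFFFFFFULL;  // the low 48 bits

    // Hide a 16-bit tag in the top bits of a user-space (zero-extended) pointer.
    void* tagPointer(void* p, std::uint16_t tag) {
        std::uint64_t a = reinterpret_cast<std::uint64_t>(p) & ADDR_MASK;
        return reinterpret_cast<void*>(a | (static_cast<std::uint64_t>(tag) << 48));
    }

    // Strip the tag again before dereferencing.
    void* untagPointer(void* p) {
        return reinterpret_cast<void*>(reinterpret_cast<std::uint64_t>(p) & ADDR_MASK);
    }

    int main() {
        int x = 7;
        void* tagged = tagPointer(&x, 0xBEEF);
        assert(*static_cast<int*>(untagPointer(tagged)) == 7);
    }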

Anyway, 64-bit pointers should outlast us :-)

Norman Ramsey
It wasn't just clever programming; some of it was required. All functions had to support varargs even if they didn't use varargs, and even if C wasn't going to be invented until 5 years later. The last argument had to have its most significant bit set to 1.
Windows programmer
+1  A: 

How about a 5th option: the word size of processors will begin to shrink.

One of the biggest performance bottlenecks right now isn't the processor but the RAM. Doubling the size of pointers and memory addresses makes the data in RAM bigger, which means it takes longer to read into the CPU, it fills up more cache, and binaries get bigger to load from disk, etc. Most of those extra bits will never serve a useful purpose (unless you're planning on keeping your current CPU for 50 years while upgrading the rest...)

That problem's been around for a long time, and it's been solved for a long time too - ARM CPUs are designed to switch between 32-bit and 16-bit code quickly to minimise the number of wasted bits. x86-64 and PPC are similar but nowhere near as fine-grained, and there are probably many other architectures that can do the same thing. I think by the time we've got a real need for "128-bit" processors, there'll be barely any code using 64 bits apart from the likes of malloc().

Ant P.