I am new to programming and come from a non-CS background (no formal degree). I mostly program WinForms using C#.

I am confused about 32-bit and 64-bit... I mean, I have heard about 32-bit OSes and 32-bit processors, and that they determine the maximum memory a program can have. And how does this affect the speed of a program? There are a lot more questions which keep coming to mind.

I tried to go through some Computer Organization and Architecture books. But either I am too dumb to understand what is written in there, or the writers assume that the reader has some CS background.

Can someone explain these things to me in plain, simple English, or point me to something which does that?

EDIT: I have read things like "In 32-bit mode, they can access up to 4GB memory; in 64-bit mode, they can access much much more"... I want to know WHY for all such things.

BOUNTY: The answers below are really good... esp. the one by Martin. But I am looking for a thorough explanation, in plain simple English.

+22  A: 

Many modern processors can run in two modes: 32-bit mode, and 64-bit mode. In 32-bit mode, they can access up to 4GB memory; in 64-bit mode, they can access much much more. Older processors only support 32-bit mode.

Operating systems choose to use the processor in one of these modes: at installation time, a choice is made whether to operate the processor in 32-bit mode or in 64-bit mode. Even though the processor could also run in the other mode, switching from 32-bit to 64-bit requires a reinstallation of the system. Older systems only support 32-bit mode.

Applications can also be written in (or compiled for) 32-bit or 64-bit mode. Compatibility here is trickier, as the processor, when run in 64-bit mode, can still support 32-bit applications as an emulation feature. So on a 64-bit operating system, you can run either 32-bit applications or 64-bit applications. On a 32-bit operating system, you can run only 32-bit applications.

Again, choosing the size is primarily a matter of how much main memory you want to access. 32-bit applications are often restricted to 2GB on many systems, since the system needs some of the address space for itself.

From a performance (speed) point of view, there is no significant difference. 64-bit applications may be a bit slower because they use 64-bit pointers, so they need more memory accesses for a given operation. At the same time, they may also be a bit faster, since they can perform 64-bit integer operations as one instruction, whereas 32-bit processors need to emulate them with multiple instructions. However, those 64-bit integer operations are fairly uncommon.

One also may wonder what the cost is of running a 32-bit application on a 64-bit processor: on AMD64 and Intel64 processors, this emulation mode is mostly in hardware, so there is no real performance loss over running the 32-bit application natively. This is significantly different on Itanium, where 32-bit (x86) applications are emulated very poorly.
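Since the question mentions C#: you can see which mode your own process ended up in by checking the size of a native pointer. A minimal sketch (on .NET 4 and later there is also Environment.Is64BitProcess, but IntPtr.Size works on any version):

    using System;

    class BitnessCheck
    {
        static void Main()
        {
            // IntPtr is the platform's native pointer type, so its size tells
            // you which mode *this process* runs in -- not what the OS or the
            // hardware is capable of.
            int bits = IntPtr.Size * 8;   // 4 bytes -> 32-bit, 8 bytes -> 64-bit
            Console.WriteLine("This process is running in {0}-bit mode.", bits);
        }
    }

Run the same AnyCPU executable on a 32-bit and on a 64-bit OS and you will see both answers.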

Martin v. Löwis
very good summary.
Preet Sangha
Sir, the question is WHY for all such explanations as "...32-bit mode, they can access up to 4GB memory"
Sandbox
Why does using 64b pointers make a 64b application slower? The whole point of 64b processors is that they can access and use 64b at a time instead of 32b. Slightly larger executable size I could understand, but slower?
Matthew Scharley
The answer to that particular question is in what the largest 32b number is: roughly, 4 billion. This means that a 32 bit pointer has 4 billion different states it can be in, which means that I can point at 4 billion different bytes in memory, which translates to 4 GB.
Matthew Scharley
Re 4GB: simply related to 2^32. If you only have 32-bits to store addresses, you are limited to this. Re making it slower - .NET deals with a **lot** of references (addresses). All work involving references suddenly has twice as much to do... well, it isn't actually linear, but certainly "more" to do.
Marc Gravell
On the 32 bit Intel architecture it's actually possible to access more than 4GB, but it's rarely used. Probably since it's much less practical to code against :)
Thorarin
@Matthew: it's Marc's explanation. Using 64-bit pointers requires more memory IO, thus increasing pressure on the cache. As a consequence, you have more cache misses than in 32-bit mode (on the same hardware - you can of course put larger caches into the CPUs to compensate).
Martin v. Löwis
@Sandbox: Maybe you didn't realize in Marc's response that 2^32 *is* 4GB: 2^32 bytes = 2^22 KB = 2^12 MB = 2^2 GB = 4GB. If a register carrying an address has only 32 bits, you cannot address more than 2^32 memory cells. If you then also want byte addressing (which is common today), you end up with the 4GB limit.
Martin v. Löwis
In 64-bit mode, many applications consume in many cases up to double the cache size. The L1 cache is critical because a miss costs around 20 cycles on a Core2Duo CPU; that is about 60 instructions that can't be executed in the meantime. An L2 cache miss is even worse and can cost up to 1000 cycles, or 3000 unexecuted instructions. That's why many 64-bit applications are slower than 32-bit ones.
Lothar
+1  A: 

Martin's answer is excellent. Just to add some additional points... since you mention .NET, you should note that the CLI/JIT has some differences between x86 and x64, with different optimisations (tail-call, for example), and some subtly different behaviour of advanced things like volatile. This can all have an impact on your code.

Additionally, not all code works on x64. Anything that uses DirectX or certain COM features may struggle. Not really a performance feature, but important to know.

(I removed "DirectX" - I might be talking rubbish there... but simply: you need to check that anything you depend upon is stable on your target platform)

Marc Gravell
Microsoft doesn't have a 64bit version of DirectX yet?
Matthew Scharley
Anything that uses DirectX... so, if I am not wrong, WPF uses DirectX APIs... so a WPF program will have problems running on x64?
Sandbox
I'm not hugely "up" on the DirectX issue - it could be that it is only an issue on XP64, but OK on Vista-64/ Win7-64. Also, WPF can always use the CPU instead of the GPU at a push...
Marc Gravell
I miss software graphics emulation in games... for those of us with beefy computers but onboard graphics cards...
Matthew Scharley
DirectShow (which is related to directx) is actually relevant, quite a few directshow filters are compiled/distributed in 32bit mode only, so to interoperate via dll/com imports you need a 32 bit process.
Sam Saffron
Thanks Sam - glad I wasn't talking *complete* tosh ;-p
Marc Gravell
+2  A: 

Martin's answer is mostly correct and detailed.

I thought I would just mention that all the memory limits are per-application virtual memory limits, not limits on the actual physical memory in the computer. In fact, it's possible to work with more than 4GB of memory in a single application even on 32-bit systems; it just requires more work, since it can't all be accessible through pointers at one time. link text

Another thing that was not mentioned is that the difference between a traditional x86 processor and x86-64 is not only in the pointer size, but also in the instruction set. While the pointers are larger and consume more memory (8 bytes instead of 4), this is compensated for by a larger register set (16 general-purpose registers instead of 8), so performance can actually be better for code that does computational work.
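If you would rather see the per-process limit than read about it, here is a rough sketch of my own (not from the linked article) that allocates until the address space runs out. Compiled as x86 it typically gives up somewhere below 2 GB even on a machine with plenty of physical RAM; be careful running it as a 64-bit process, since it will happily eat memory until the OS pushes back:

    using System;
    using System.Collections.Generic;

    class AddressSpaceProbe
    {
        static void Main()
        {
            var blocks = new List<byte[]>();
            long allocatedMB = 0;
            try
            {
                while (true)
                {
                    // Grab 64 MB at a time; the list keeps every block
                    // reachable so the GC cannot quietly free them.
                    blocks.Add(new byte[64 * 1024 * 1024]);
                    allocatedMB += 64;
                }
            }
            catch (OutOfMemoryException)
            {
                // In a 32-bit process this fires when the *address space* is
                // exhausted (or fragmented), not when physical RAM runs out.
                Console.WriteLine("Gave up after about {0} MB.", allocatedMB);
            }
        }
    }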

Filip Navara
+1 for mentioning the virtual memory limits and the link. Do you have more such links that explain stuff like this in plain, simple English?
Sandbox
Since you mentioned C#, you may want to read this: http://blogs.msdn.com/rmbyers/archive/2009/06/08/anycpu-exes-are-usually-more-trouble-then-they-re-worth.aspx ... There's also more interesting stuff on the Old New Thing blog, but I don't have any links at the moment.
Filip Navara
A: 

It's worth noting that certain applications (e.g. multimedia encoding/decoding and rendering) gain a significant (up to 2x) performance boost when written to fully utilize 64-bit.

See 32-bit vs. 64-bit benchmarks for Ubuntu and Windows Vista

Diaa Sami
Some of this is probably related to changes in the instruction set as well..
Brendan Long
+15  A: 

It really all comes down to wires.

In digital circuits, only 0's and 1's (usually low voltage and high voltage) can be transmitted from one element (CPU) to another element (memory chip). If I have only 1 wire, I can only send either a 1 or a 0 over the wire per clock cycle. This means I can only address 2 bytes (assuming byte addressing, and that entire addresses are transmitted in just 1 cycle for speed!).

If I have 2 wires, I can address 4 bytes. Because I can send: (0, 0), (0, 1), (1, 0), or (1, 1) over the two wires. So basically it's 2 to the power of # of wires.

So if I have 32 wires, I can address 4 GB, and if I have 64 wires, I can address a lot more.

There are other tricks that engineers can do to address a larger address space than the wires allow for. E.g. splitting up the address into two parts and sending one half in the first cycle and the second half on the next cycle. But that means that your memory interface will be half as fast.

Edited my comments into here (unedited) ;) And making it a wiki if anyone has anything interesting to add as well.

Like other comments have mentioned, 2^32 (2 to the power of 32) = 4,294,967,296, which is 4 GB, and 2^64 is 18,446,744,073,709,551,616. To dig in further (and you have probably read this in Hennessy & Patterson), processors contain registers that they use as "scratch space" for storing the results of their computations. A CPU only knows how to do simple arithmetic and how to move data around. Naturally, the size of these registers is the same width in bits as the "#-bits" of the architecture, so a 32-bit CPU's registers will be 32 bits wide, and a 64-bit CPU's registers will be 64 bits wide.

There will be exceptions to this when it comes to floating point (to handle double precision) or other SIMD instructions (single-instruction, multiple-data commands). The CPU loads and saves data to and from the main memory (the RAM). Since the CPU also uses these registers to compute memory addresses (physical and virtual), the amount of memory that it can address is the same as the width of its registers. There are some CPUs that handle address computation with special extended registers, but those I would call "afterthoughts" added after engineers realized they needed them.

At the moment, 64 bits is quite a lot for addressing real physical memory. Most 64-bit CPUs will omit quite a few wires when it comes to wiring up the CPU to the memory, for practical reasons. It wouldn't make sense to use up precious motherboard real estate running wires that will always be 0. Not to mention that having the maximum amount of RAM at today's DIMM density would require 4 billion DIMM slots :)

Other than the increased amount of memory, 64-bit processors offer faster computation for integer numbers larger than 2^32. Previously, programmers (or compilers, which are also programmed by programmers ;) would have to simulate a 64-bit register by taking up two 32-bit registers and handling any overflow situations. On 64-bit CPUs it is handled by the CPU itself.

The drawback is that a 64-bit CPU (with everything else equal) would consume more power than a 32-bit CPU, just due to the (roughly) twice as much circuitry needed. However, in reality you will never get an equal comparison, because newer CPUs are manufactured in newer silicon processes that have less power leakage, let you cram more circuitry into the same die size, etc. But 64-bit pointers and addresses do consume twice as much memory. What was once considered "ugly" about x86's variable instruction length is actually an advantage now compared to architectures that use a fixed instruction size.
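To illustrate the point above about simulating 64-bit arithmetic with two 32-bit registers, here is a rough C# sketch of what a compiler effectively had to arrange on a 32-bit CPU to add two 64-bit numbers (illustrative only; real code generation happens in assembly):

    using System;

    class Emulated64BitAdd
    {
        // Add two 64-bit values given as 32-bit halves, the way a 32-bit CPU
        // has to: add the low words, detect the carry, then add the high words.
        static void Add64(uint aLo, uint aHi, uint bLo, uint bHi,
                          out uint sumLo, out uint sumHi)
        {
            sumLo = unchecked(aLo + bLo);
            uint carry = (sumLo < aLo) ? 1u : 0u;  // unsigned wraparound => carry
            sumHi = unchecked(aHi + bHi + carry);
        }

        static void Main()
        {
            ulong a = 0xFFFFFFFFul, b = 1;  // this sum crosses the 32-bit boundary
            uint lo, hi;
            Add64((uint)a, (uint)(a >> 32), (uint)b, (uint)(b >> 32), out lo, out hi);
            ulong sum = ((ulong)hi << 32) | lo;
            Console.WriteLine("{0} == {1}?  {2}", sum, a + b, sum == a + b);
        }
    }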

This is more or less the kind of answer I was looking for. Can you please elaborate a bit?
Sandbox
You should really edit your comments into the answer, this is comment abuse :)
Kobi
+1  A: 

To explain WHY 32-bit mode can only access 4GB of RAM:

Maximum accessible memory space = 2^n bytes, where n is the word length of the architecture. So in a 32-bit architecture, the maximum accessible memory space is 2^32 = 4,294,967,296 bytes = 4GB of RAM.

A 64-bit architecture would be able to access 2^64 = LOTS of memory.

Just noticed Tchen's comments going over this. Anyway, without a CS background, yes, computer organization and architecture books are going to be difficult to understand at best.

wallacer
+1  A: 

Think of a generic computer's memory as a massive bingo card with billions of squares. To address any individual square on the card, there is a scheme to label each row and column: B-5, I-12, O-52, etc.

If there are enough squares on the card, eventually you will run out of letters, so you will need to start reusing letters and writing larger numbers to continue to be able to uniquely address each square.

Before you know it the announcer is spouting annoyingly huge numbers and letter combinations to let you know which square to mark on your 10 billion square card. BAZC500000, IAAA12000000, OAAAAAA523111221

The bit count of the computer specifies the limit on the complexity of the letters and numbers used to address any specific square.

32 bits means that if the card is any bigger than 2^32 squares, the computer does not have enough wires and transistors to uniquely, physically address any specific square in order to read a value from, or write a new value to, the specified memory location.

64-bit computers can individually address a massive 2^64 squares.. but to do so each square needs a lot more letters and numbers to make sure each square has its own unique address. This is why 64-bit computers need more memory.

Other common examples of addressing limits are local telephone numbers. They are usually 7 digits, 111-2222, or reformatted as a number, 1,112,222... what happens when there are more than 9,999,999 people who want their own telephone numbers? You add area codes and country codes, and your phone number goes from 7 digits to 10 or 11, taking up more space.

If you are familiar with the impending IPv4 shortage, it's the same problem: IPv4 addresses are 32 bits, meaning there are only 2^32 (~4 billion) unique IP addresses possible, and there are many more people than that alive today.

There is overhead in all schemes I mentioned (computers, phone numbers, IPv4 addresses) where certain portions are reserved for organizational purposes so the usable space is much less.

The performance promise of the 64-bit world is that instead of sending 4 bytes at a time (ABCD), a 64-bit computer can send 8 bytes at a time (ABCDEFGH), so the alphabet is transferred between different areas of memory up to twice as fast as on a 32-bit computer. There is also a benefit for some applications that simply run faster when they have more memory to use.

In the real world, 64-bit desktop processors by Intel et al. are not really true 64-bit processors and are still limited to 32 bits for several types of operations, so in the real world the performance difference between 32-bit and 64-bit applications is marginal. 64-bit mode gives you more hardware registers to work with, which does improve performance, but addressing more memory on a "fake" 64-bit processor can also hurt performance in some areas, so it's usually a wash. In the future we will see more performance improvements as desktop processors become fully 64-bit.

Einstein
A: 

For the non-CS person: 64-bit will work better for calculations (of all kinds), and it will also allow you to have more RAM.

Also, if you have limited RAM (on a VPS, for example, or a small-RAM dedicated server), choose 32-bit; services there will eat less RAM.

does this really answer the question?
Sandbox
A: 
  • The processor uses base 2 to store numbers. Base 2 was probably chosen because it's the "simplest" of all bases: for example, the base-2 multiplication table has only 4 cells, while the base-10 multiplication table has 100 cells.
  • Before 2003, common PC processors were only "32-bit-capable".
    • That means that the processor's native numerical operations were for 32-bit numbers.
    • You can still do numerical operations on larger numbers, but those would have to be performed by programs executed by the processor, rather than being among the "primitive actions" (commands in machine language) supported by the processor, like those for 32-bit integers (at the time)
    • 32 bits were chosen because CPU engineers are fond of powers of 2, and 16-bits weren't enough
  • Why weren't 16 bits enough? With 16 bits you can represent integers in the range of 0-65535
    • 65535 = 1111111111111111 in binary (= 2^0 + 2^1 + 2^2 + ... + 2^15 = 2^16 - 1)
    • 65535 is not enough because for example, a Hospital management software needs to be able to count more than 65535 patients
    • Usually people consider the size of the computer's memory when discussing how big its integers should be. 65535 is definitely not enough. Computers have way more RAM than that, and it doesn't matter if you count in "Bytes" or bits
  • 32 bits was considered enough for a while. In 2003, AMD introduced the first 64-bit-capable "x86" processor; Intel soon followed.
  • Actually, 16 bits was considered enough a long while ago.
  • It is common practice for lots of hardware and software to be backward-compatible. In this case it means the 64-bit-capable CPUs can also run every software the 32-bit-capable CPUs can.
    • Backward compatibility is strived for as a business strategy. More users will want to upgrade to the better processor if it can also do everything the previous one could.
    • In CPUs, backward compatibility means that the new actions the CPU supports are added to the previous machine language. For example, the previous machine language may have had a specification like "all opcodes starting in 1111 are reserved for future use"
    • In theory this kind of CPU backward compatibility wouldn't have been necessary, as all software could just have been recompiled for the new, incompatible machine language. However, that's not the case, because of corporate strategies and political or economical systems. In a utopian "open source" world, backward compatibility of machine languages would probably not be a concern.
  • The backward compatibility of x86-64 (the common 64-bit CPUs' machine language) comes in the form of a "compatibility mode". This means that any program wishing to make use of the new CPU capabilities needs to notify the CPU (through the OS) that it should run in "64-bit mode". Then it can use the great new 64-bit CPU capabilities.
  • Therefore, for a program to use the CPU's 64-bit capabilities: The CPU, the OS, and the program, all have to "support 64-bits".
  • 64 bits is enough to give every person in the world billions of unique numbers each. It's probably big enough for most current computing endeavors, so it's unlikely that future CPUs will shift further, to 128 bits. But if they do, that's definitely enough for everything I can imagine, and therefore a 256-bit transition won't be necessary.
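To make those ranges concrete in C# (nothing here is specific to the CPU's mode; a 32-bit CPU still handles ulong, it just emulates it with pairs of 32-bit operations):

    using System;

    class IntegerRanges
    {
        static void Main()
        {
            // 16 bits: the hospital admitting patient number 65,536 is in trouble.
            Console.WriteLine("16-bit max (ushort): {0:N0}", ushort.MaxValue);

            // 32 bits: one address per byte is exactly the famous 4 GB limit.
            Console.WriteLine("32-bit max (uint):   {0:N0}", uint.MaxValue);

            // 64 bits: billions of unique numbers for every person on Earth.
            Console.WriteLine("64-bit max (ulong):  {0:N0}", ulong.MaxValue);
        }
    }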

I hope this helps.

yairchu
+21  A: 

Let's try to answer this question by looking at people versus computers; hopefully this will shed some light on things for you:

Things to Keep In Mind

  • As amazing as they are, computers are very, very dumb.

Memory

  • People have memory (with the exception, arguably, of husbands and politicians.) People store information in their memory for later use.
    • With a question (e.g., "What is your phone number?") a person is able to retrieve information to give an answer (e.g., "867-5309")
  • All modern computers have memory, and store information in their memory for later use.
    • Because computers are dumb, they can only be asked a very specific question to retrieve information: "What is the value at X in your memory?"
      • In the question above, X is known as an address, which can also be called a pointer.

So here we have a fundamental difference between people and computers: To recall information from memory, computers need to be given an address, whereas people do not. (Well in a sense one could say "your phone number" is an address because it gives different information than "your birthday", but that's another conversation.)

Numbers

  • People use the decimal number system. That means for every digit in a decimal number, the digit can be one of 0, 1, 2, 3, 4, 5, 6, 7, 8, or 9. People have ten options per digit.
  • All modern computers use the binary number system. That means for every digit in a binary number, the digit can only be either 1 or 0. Computers have two options per digit.
    • In computer jargon, a single binary digit is called a bit, short for binary digit.

Addresses

  • Every address in a computer is a binary number.
  • Every address in a computer has a maximum number of digits (or bits) that it can have. This is mostly because the computer's hardware is inflexible (also known as fixed) and needs to know ahead of time that an address will only be so long.
  • Terms like "32-bit" and "64-bit" are talking about the longest address for which a computer can store and retrieve information. In English "32-bit" in this sense means "This computer expects instructions about its memory to have addresses no more than 32 binary digits long."
    • As you can imagine, the more bits a computer can handle the longer the address it can look up and therefore the more memory it can manage at one time.

32-bit v. 64-bit Addressing

  • For an inflexible (fixed) number of digits (e.g. 2 decimal digits) the possible numbers you can represent is called the range (e.g. 00 to 99, or 100 unique numbers). Adding an additional decimal digit multiplies the range by 10 (e.g. 3 decimal digits -> 000 to 999, or 1000 unique numbers).
  • This applies to computers, too, but because they are binary machines instead of decimal machines, adding an additional binary digit (bit) only increases the range by a factor of 2.

    Addressing Ranges:
    • 1-bit addressing lets you talk about 2 unique addresses (0 and 1).
    • 2-bit addressing lets you talk about 4 unique addresses (00, 01, 10, and 11).
    • 3-bit addressing lets you talk about 8 unique addresses (000, 001, 010, 011, 100, 101, 110, and 111).
    • and after a long while... 32-bit addressing lets you talk about 4,294,967,296 unique addresses.
    • and after an even longer while... 64-bit addressing lets you talk about 18,446,744,073,709,551,616 unique addresses. That's a LOT of memory!

Implications

What all this means is that a 64-bit computer can store and retrieve much more information than a 32-bit computer. For most users this really doesn't mean a whole lot because things like browsing the web, checking email and playing Solitaire all work comfortably within the confines of 32-bit addressing. Where the 64-bit benefit will really shine is in areas where you have a lot of data the computer will have to churn through. Digital signal processing, gigapixel photography and advanced 3D gaming are all areas where their massive amounts of data processing would see a big boost in a 64-bit environment.
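If you'd like to watch the doubling happen, here's a small C# sketch of the table above (it uses System.Numerics.BigInteger, available in newer .NET versions, so that the 64-bit count doesn't overflow):

    using System;
    using System.Numerics;

    class AddressRanges
    {
        static void Main()
        {
            foreach (int bits in new[] { 1, 2, 3, 16, 32, 64 })
            {
                // Every extra bit doubles the number of unique addresses.
                BigInteger addresses = BigInteger.Pow(2, bits);
                Console.WriteLine("{0,2}-bit addressing: {1:N0} unique addresses",
                                  bits, addresses);
            }
        }
    }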

fbrereto
I liked this explanation. When you described Memory, I expected the word 'associative' but that would be too CS. People retrieve memories by association, not by address.
pavium
This should be THEE selected answer. +1 from me. none of the other high scorers came close to this excellent explanation.
San Jacinto
Excellent answer
Sandbox
This is a terrific layman's explanation. I'm definitely going to use this approach next time I'm asked about this topic.
Rob Sobers
+1  A: 

I don't think I've seen much of the word 'register' in the previous answers. A digital computer is a bunch of registers, with logic for arithmetic and memory to store data and programs.

But first ... digital computers use a binary representation of numbers because the binary digits ('bits') 0 and 1 are easily represented by the two states (on/off) of a switch. Early computers used electromechanical switches; modern computers use transistors because they're smaller and faster. Much smaller, and much faster.

Inside the CPU, the switches are grouped together in registers of a finite length, and operations are typically performed on entire registers: For example, add this register to that, and so on. As you would expect, a 32-bit CPU has registers 32 bits long. I'm simplifying here, but bear with me.

It makes sense to organise the computer memory as a series of 'locations', each holding the same number of bits as a CPU register: for example, load this register from that memory location. Actually, if we think of memory as bytes, that's just a convenient fraction of a register, and we might load a register from a series of memory locations (1, 2, 4, 8).

As transistors get smaller, additional logic for more complex arithmetic can be implemented in the limited space of a computer chip. CPU real estate is always at a premium.

But with improvements in chip fabrication, more transistors can be reliably made on only slightly larger chips. Registers can be longer and the paths between them can be wider.

When the registers which hold the addresses of memory locations are longer, they address larger memories and data can be manipulated in larger chunks. In combination with the more complex arithmetic logic, things get done faster.

And isn't that what we're all after?

pavium
A: 

This is a very simple explanation, given that everything above is quite detailed.

32-bit refers to the registers. Registers are places to store data, and all programs operate by manipulating these things. Assembly operates directly on them (and hence why people are often excited to program in assembly).

32-bit means the basic set of registers can hold 32 bits of information. 64-bit means, unsurprisingly, 64 bits of info.

Why can this make programs faster? Because you can do larger operations faster. It will only make certain types of programs faster, by the way. Games, typically, can take great advantage of optimising per processor, because of their math-heavy operations (and hence register use).

But amusingly, as tchen mentioned, there are many other 'things' that let you perform larger operations anyway. SSE, SSE2, etc. have 64-bit registers and 128-bit registers, even on a '32-bit' system.

The increased ability to address memory speaks directly to the increase in basic register size, based on (I imagine) Windows' specific memory-addressing system.

Hope that helps a little. Other posters are much more accurate than me; I am just trying to explain it very simply (it helps that I know very little :)

Noon Silk
A: 

I have a wonderful answer for this question, but it doesn't all fit within this answer block... The simple answer is that for your program to get a byte out of memory, it needs an address. On 32-bit CPUs, the memory address of each byte is stored in a 32-bit (unsigned) integer, which has a maximum value of about 4.29 billion; with one address per byte, that gives you 4 GB. When you use a 64-bit processor, the memory address is a 64-bit integer, which gives you about 1.84467441 × 10^19 possible memory addresses. This really should suffice if you are new to programming. You should really be focusing more on learning how to program than on the internal workings of your processor, and why you can't access more than 4 GB of RAM on your 32-bit CPU.

Kibbee
+5  A: 

Let me tell you the story of Binville, a small town in the middle of nowhere. Binville had one road leading to it. Every person either coming to or leaving Binville had to drive on this road. But as you approached the town, there was a fork. You could either go left or go right.

In fact, every road had a fork in it, except for the roads leading up to the homes themselves. Those roads simply ended at the house. None of the roads had names; they didn't need names thanks to an ingenious addressing scheme created by the Binville Planning Commission. Here's a map of Binville, showing the roads and the houses:

              ------- []  00
             /
       ------
      /      \
     /        ------- []  01
-----
     \        ------- []  10
      \      /
       ------
             \
              ------- []  11

As you can see, each house has a two-digit address. That address alone is enough to a) uniquely identify each house (there are no repeats) and b) tell you how to get there. It's easy to get around town, you see. Each fork is labeled with a zero or one, which the Planning Commission calls the Binville Intersection Tracer, or bit for short. As you approach the first fork, look at the first bit of the address. If it's a zero, go left; if it's a one, go right. Then look at the second digit when you get to the second fork, going left or right as appropriate.

Let's say you want to visit your friend who lives in Binville. She says she lives in house 10. When you get to Binville's first fork, go right (1). Then at the second fork, go left (0). You're there!

Binville existed like this for several years but word started to get around about its idyllic setting, great park system, and generous health care. (After all, if you don't have to spend money on street signs, you can use it on better things.) But there was a problem. With only two bits, the addressing scheme was limited to four houses!

So the Planning Commission put their heads together and came up with a plan: they would add a bit to each address, thereby doubling the number of houses. To implement the plan, they would build a new fork at the edge of town and everyone would get new addresses. Here's the new map, showing the new fork leading into town and the new part of Binville:

                     ------- []  000
                    /
              ------
             /      \
            /        ------- []  001
       -----                            Old Binville
      /     \        ------- []  010
     /       \      /
    /         ------
   /                \
  /                  ------- []  011
--
  \                  -------     100
   \                /
    \         ------
     \       /      \
      \     /        ------- []  101
       -----                            New Binville (some homes not built yet)
            \        -------     110
             \      /
              ------
                    \
                     -------     111

Did you notice that everyone in the original part of Binville simply added a zero to the front of their address? The new bit represents the new intersection that was built. When the number of bits is increased by one, the number of addresses doubles. The citizens always knew the maximum size of their town: all they had to do was compute the value of two raised to the power of the number of bits. With three bits, they could have 2^3 = 8 houses.

A few years went by and Binville was once again filled to capacity. More people wanted to move in, so another bit was added (along with the requisite intersection), doubling the size of the town to sixteen houses. Then another bit, and another, and another... Binville's addresses were soon at sixteen bits, able to accommodate up to 2^16 (65,536) houses, but it wasn't enough. The people kept coming and coming!

So the Planning Commission decided to solve the problem once and for all: they would jump all the way to thirty-two bits. With sufficient addresses for over four billion homes (2^32), surely that would be enough!

And it was... for about twenty-five years, when Binville was no longer a small town in the middle of nowhere. It was now a major metropolis. In fact, it was getting to be as big as a whole nation with billions of residents. But the parks were still nice and everyone had great health care, so the population kept growing.

Faced with the ever-increasing population, the Planning Commission once again put their heads together and proposed another expansion of the city. This time they would use 64 bits. Do you know how many homes could fit within the Binville city limits now? That's right: 18,446,744,073,709,551,616. That number is so big, we could populate about two billion Earths and give everyone their own address.

Using 64 bits wasn't a panacea for all their addressing problems. The addresses take twice as much space to write as the old 32-bit addresses did. Worse, some citizens hadn't yet updated their addresses to use the new 64-bit format, so they were forced into a walled-off section of the city reserved specifically for those still using 32-bit addresses. But that was OK: the people using 32 bits had access to more than enough of the city to suit their needs. They didn't feel the need to change just yet.

Will 64 bits be enough? Who knows at this time, but citizens of Binville are waiting for the announcement of 128-bit addresses...

Barry Brown
A: 

A simple answer to explain the addressable memory range of 32-bit processors:

Let's assume you are only allowed to construct 3-digit numbers, so the maximum number you can go up to is 999. The range of numbers is (0 - 999). You have just 1000 numbers to use.

But if you are allowed 6-digit numbers, then the maximum number you can construct is 999999. Now the range is (0 - 999999). So now you have 1 million numbers to use.

Similarly, the more bits you are allowed to have in a processor, the larger the set of addresses (numbers in the previous example) you can construct and eventually use to store data, etc.

Anything simpler than this would be interesting to read!

-AD.

goldenmean