Why is it considered to be such a big deal to have a 64-bit computer? Why does it "change everything?" Why do applications need to be designed differently between 32- and 64-bit platforms?
And, on OS X, how do you find which one you have!?
The biggest impact that people will notice at the moment is that a 32-bit PC can only address a maximum of 4 GB of memory. Once you take off the address space reserved by the operating system for other uses, your PC will probably only show around 3.25 GB of usable memory. Move over to 64-bit and this limit disappears.
If you're doing serious development then this could be very important. Try running several virtual machines and you soon run out of memory. Servers are more likely to need the extra memory, so you will find that 64-bit usage is far greater on servers than on desktops. Moore's law ensures that machines will keep gaining more memory, so at some point desktops will also switch over to 64-bit as the standard.
For a much more detailed description of the processor differences check out this excellent article from ArsTechnica.
With a 32-bit machine you can only address 2^32 = 4,294,967,296 bytes of memory. With a 64-bit machine you can address 2^64, roughly 1.8 × 10^19 bytes.
64-bit processors can perform particular tasks (such as computing factorials of large numbers) roughly twice as fast as in a 32-bit environment (the example comes from comparing the 32-bit and 64-bit Windows Calculator; the difference is noticeable for, say, the factorial of 100,000). This gives a general feel for the theoretical possibilities of 64-bit-optimized applications.
While 64-bit architectures indisputably make working with large data sets in applications such as digital video, scientific computing, and large databases easier, there has been considerable debate as to whether they or their 32-bit compatibility modes will be faster than comparably-priced 32-bit systems for other tasks. In x86-64 architecture (AMD64), the majority of the 32-bit operating systems and applications are able to run smoothly on the 64-bit hardware.
Sun's 64-bit Java virtual machines are slower to start up than their 32-bit virtual machines because Sun has only implemented the "server" JIT compiler (C2) for 64-bit platforms.[9] The "client" JIT compiler (C1), which produces less efficient code but compiles much faster, is unavailable on 64-bit platforms.
Speed is not the only factor to consider when comparing 32-bit and 64-bit processors. Workloads such as multi-tasking, stress testing, and clustering for high-performance computing (HPC) may be better suited to a 64-bit architecture given the correct deployment. 64-bit clusters have been widely deployed in large organizations such as IBM, HP and Microsoft for this reason.
"64-bit" refers to the way memory is addressed. In a 64-bit machine the CPU can address up to 2^64 bytes, which is vastly more than the 2^32 bytes (4 GB) a 32-bit machine can address. As new applications and servers need more and more RAM, 64-bit will become the norm.
Basically you can do everything on a bigger scale:
The two big 64-bit architectures are x64 and IA64, but x64 is by far the most popular.
x64 can run x86 instructions as well as x64 instructions. IA64 can also run x86 instructions, but it doesn't support the SSE extensions. Itanium has dedicated hardware for running x86 instructions; it's an emulator, but implemented in hardware.
As @Phil mentioned you can get a deeper look of how it works here.
Here is the obligatory Wikipedia article.
The reason why it's a big deal is because in a 32-bit system, programs can only utilize 2^32 addresses (4 GB), whereas in a 64-bit system they can utilize 2^64 addresses (17.2 billion GB).
Intel-powered Macs are all 64-bit as far as I know.
In addition to the fact that 64-bit machines can easily address more memory (it isn't true to say that 32-bit machines can only access 4GB as PAE can be used in many cases to use more) the 64-bit processors also often have additional hardware registers and other hardware optimizations. These additional features can often significantly increase the performance of apps compiled for 64-bit processors, even if they don't use a lot of memory.
1) Speed. If an atomic one-cycle operation can move 64 bits instead of just 32, a lot of operations go faster.
2) I haven't kept up on memory management schemes, so maybe it doesn't work this way anymore, but this also means you can directly address more memory.
3) With great power comes great responsibility. Okay, that's not as applicable here.
4) Marketing. Intel and AMD can put a big number 64 on their box instead of just 32. Everybody knows bigger numbers are better.
On OS X, you have a 64-bit CPU if you have a G5 or almost any of the Intel machines (the very first Yonah-based machines were 32-bit; everything with a Core 2 is 64-bit).
As far as the OS is concerned, Leopard is the first version to support 64-bit GUI programs.
Not sure I can answer all your questions without writing a whole essay (there's always Google...), but you don't need to design your apps differently for 64-bit. I guess what is being referred to is that you have to be mindful that pointers are no longer the same size as ints, and that built-in assumptions about certain types of data being four bytes long may no longer be true.
This is likely to trip up all kinds of things in your application - everything from saving/loading from file, iterating through data, data alignment, all the way to bitwise operations on data. If you have an existing codebase you are trying to port, or work on both, it is likely you will have a lot of little niggles to work through.
I think this is an implementation issue, rather than a design one. I.e. I think the "design" of say, a photo editing package will be the same whatever the wordsize. We write code that compiles to both 32bit and 64bit versions, and the design certainly does not differ between the two - it's the same codebase.
The fundamental "big deal" with 64-bit is that you gain access to a much larger memory address space than with 32-bit. This means that you can really chuck more than 4 GB of memory into your computer and actually have it make a difference.
I'm sure other answers will go into the details and benefits more than I.
In terms of detecting the difference programmatically, you just check the size of a pointer (e.g. sizeof(void*)). An answer of 4 means it's 32-bit, and 8 means you are running in a 64-bit environment.
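For example, a minimal C sketch of that check might look like the following (note it reports how the binary was compiled, not what the CPU is capable of):

    #include <stdio.h>

    int main(void)
    {
        /* sizeof(void*) is 4 in a 32-bit build and 8 in a 64-bit build */
        if (sizeof(void *) == 8)
            printf("compiled as a 64-bit binary\n");
        else
            printf("compiled as a 32-bit binary\n");
        return 0;
    }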
Think about image processing for a moment. If you look at medical imaging, you're routinely dealing with moderately high-resolution images that are 32 bits per channel, so if they're colour, that's 96 bits per pixel. A typical image may take up 200 MB or more when uncompressed. Processing that into a target buffer requires another 200 MB, so in one operation you would be using up a fifth of your entire address space on a 32-bit processor. Without a great deal of care, heap fragmentation makes that operation impossible. Virtual memory doesn't help because the address space itself isn't there. 64-bit gives much more breathing room.
A 32-bit process has a virtual address space of 4 GB; this might be too little for some apps. A 64-bit app has a practically unlimited address space (it is limited, of course, but you will most likely not hit that limit).
On OS X there are other advantages. See the following article on why having the kernel run in a 64-bit address space (regardless of whether your app runs 64- or 32-bit), or having your app run in a 64-bit address space (while the kernel is still 32-bit), leads to much better performance. To summarize: if either one is 64-bit (kernel or app, or both of course), the TLB ("translation lookaside buffer") doesn't have to be flushed whenever you switch between kernel and user space, which speeds up RAM access.
Also you get performance gains when working with "long long int" variables (64-bit variables like uint64_t). A 32-bit CPU can add/subtract/multiply/divide two 64-bit values, but not in a single hardware operation; it needs to split each such operation into two (or more) 32-bit operations. So an app that works a lot with 64-bit numbers gains speed from being able to do 64-bit math directly in hardware.
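As a rough illustration (a sketch, not a benchmark), a function like the following compiles down to a single multiply instruction on a 64-bit target, while a 32-bit build has to synthesize the result from several 32-bit operations:

    #include <stdint.h>

    /* On x86-64 this is essentially one MUL instruction; a 32-bit build
       has to build the 64-bit product from several 32-bit multiplies and adds. */
    uint64_t mul64(uint64_t a, uint64_t b)
    {
        return a * b;
    }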
Last but not least the x86-64 architecture offers more registers than the classic x86 architectures. Working with registers is much faster than working with RAM and the more registers the CPU has, the less often it needs to swap register values to RAM and back to registers.
To find out whether your CPU can run in 64-bit mode, you can look at various sysctl variables. E.g. open a terminal and type
sysctl machdep.cpu.extfeatures
If it lists EM64T, your CPU supports a 64-bit address space according to the x86-64 standard. You can also look for
sysctl hw.optional.x86_64
If it says 1 (true/enabled), your CPU supports x86-64 mode; if it says 0 (false/disabled), it does not. If the setting is not found at all, consider it false.
Note: You can also fetch sysctl variables from within a native C app, no need to use the command line tool. See
man 3 sysctl
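For instance, a small sketch using sysctlbyname (assuming you only care about the hw.optional.x86_64 key mentioned above) could look like this:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void)
    {
        int is64 = 0;
        size_t len = sizeof(is64);

        /* Returns -1 if the key does not exist; treat that as "not 64-bit capable". */
        if (sysctlbyname("hw.optional.x86_64", &is64, &len, NULL, 0) == -1)
            is64 = 0;

        printf("CPU supports x86-64: %s\n", is64 ? "yes" : "no");
        return 0;
    }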
Besides the obvious memory-space issues that most people are mentioning here, I think it is worth looking at the notion of "broadword computing" that Knuth (among others) has been speaking about lately. There are a lot of efficiencies to be gained through bit manipulation, and bitwise operations on a 64-bit word go a lot further than on a 32-bit word. In short, you can do more operations in registers without having to hit memory, and from a performance perspective, that's a pretty huge win.
Take a look at Volume 4, pre-Fascicle 1A for some examples of the cool tricks I am talking about.
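To give a flavour of the kind of trick involved, here is the classic SWAR population count on a 64-bit word (a textbook technique, not something specific to that fascicle): it counts all 64 set bits with a handful of register operations and no memory access.

    #include <stdint.h>

    /* Count the set bits in a 64-bit word using only shifts, masks and adds. */
    unsigned popcount64(uint64_t x)
    {
        x = x - ((x >> 1) & 0x5555555555555555ULL);                            /* 2-bit sums  */
        x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);  /* 4-bit sums  */
        x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;                            /* byte sums   */
        return (unsigned)((x * 0x0101010101010101ULL) >> 56);                  /* total       */
    }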
To answer the second part of your question, OS X Leopard is designed to run on both 32- and 64-bit machines. When you run on a 64-bit processor, Leopard will use the 64-bit libraries.
Another point, with regard to Microsoft Windows, is that for many years there has been the Win32 API, which is intended for 32-bit operating systems and isn't optimized for 64-bit compilation. When I write DLLs for my applications, I generally compile for Win32, which isn't the 64-bit version of things. Prior to Vista there weren't many successful 64-bit versions of Windows, I believe; where I work, my new machine has 4 GB of RAM, but I'm still using 32-bit Windows XP Pro because it is a known stable OS relative to XP64 or Vista.
I think you may want to also look back on when there was the shift from 16-bit to 32-bit for some more details on why the shift may be a big deal for some folks. The mission-critical applications that a company may run on a desktop, e.g. small accounting packages, may not run on a 64-bit operating system and thus there is the need to keep a legacy machine around, virtual or real.
Changing the size of an address can have some big ramifications and repercussions.
Some game-playing programs use a bit-board representation. Chess, checkers and Othello, for example, have an 8x8 board, i.e. 64 squares, so having at least 64 bits in a machine word significantly helps performance.
I remember reading about a chess program whose 64-bit build was almost twice as fast as the 32-bit version.
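As a minimal sketch of the idea (the square numbering and helper names are made up for illustration), a bitboard is just a 64-bit word with one bit per square, so set, test and combine operations each take a single instruction on a 64-bit machine:

    #include <stdint.h>

    typedef uint64_t bitboard;   /* one bit per square: a1 = bit 0 ... h8 = bit 63 */

    bitboard set_square(bitboard b, int square) { return b | (1ULL << square); }
    int      has_square(bitboard b, int square) { return (b >> square) & 1ULL; }

    /* Squares attacked by both sides is just the AND of two boards. */
    bitboard contested(bitboard white_attacks, bitboard black_attacks)
    {
        return white_attacks & black_attacks;
    }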
Nothing is free: although 64-bit applications can access more memory than 32-bit applications, the downside is that they need more memory. All those pointers that used to need 4 bytes now need 8. For example, the default requirement in Emacs is 60% more memory when it's built for a 64-bit architecture. This extra footprint hurts performance at every level of the memory hierarchy: bigger executables take longer to load from disk, bigger working sets cause more paging, and bigger objects mean fewer fit in the processor caches. If you think about a CPU with a 16 KB L1 cache, a 32-bit application can work with 4096 pointers before it misses and goes to the L2 cache, but a 64-bit application has to reach for the L2 cache after just 2048 pointers.
On x64 this is mitigated by other architectural improvements such as extra registers, but on PowerPC, if your application can't use more than 4 GB, it is likely to run faster as "ppc" than as "ppc64". Even on Intel there are workloads that run faster on x86, and few run more than 5% faster on x64 than on x86.
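A quick way to see the footprint effect described above is to print the size of a pointer-heavy structure; the exact numbers in the comments are typical, not guaranteed, since they depend on the ABI:

    #include <stdio.h>

    struct node {
        struct node *next;   /* 4 bytes on a 32-bit build, 8 on a 64-bit build */
        struct node *prev;
        void        *payload;
    };

    int main(void)
    {
        /* Commonly prints 12 on a 32-bit build and 24 on a 64-bit build. */
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }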
This thread is too long already, but ...
Most of the replies focus on the fact that you have a larger, 64-bit address space, so you can address more memory. For about 99% of all applications, this is totally irrelevant. Large whoop.
The real reason 64-bit is good is not that the registers are bigger, but that there are twice as many of them! That means the compiler can keep more of your values in registers instead of spilling them to memory and loading them back a few instructions later. If and when an optimizing compiler unrolls your loops for you, it can unroll them roughly twice as much, which can really help performance.
Also, the subroutine caller/callee conventions for 64-bit have been defined to keep most of the passed parameters in registers instead of the caller pushing them onto the stack and the callee popping them off.
So a "typical" C/C++ application will get about a 10% or 15% performance improvement just by being recompiled for 64-bit. (Assuming some portion of the app is compute-bound. Of course, this is not guaranteed; all computers wait at the same speed. Your mileage may vary.)
Note that the address space can be used for more than (real) memory. One can also memory-map large files, which can improve performance with more irregular access patterns because the more powerful and efficient block-level VM caching kicks in.
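A minimal sketch of that idea in C (error handling trimmed, and the file name is a placeholder): map a large file into the address space and let the VM system page it in on demand, which is only practical when the address space is big enough to hold the whole mapping.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("huge-data.bin", O_RDONLY);   /* placeholder file name */
        struct stat st;
        fstat(fd, &st);

        /* The whole file becomes addressable; pages are faulted in as they are touched. */
        const unsigned char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);

        unsigned long sum = 0;
        for (off_t i = 0; i < st.st_size; i += 4096)   /* touch one byte per page */
            sum += data[i];

        printf("checksum-ish value: %lu\n", sum);
        munmap((void *)data, st.st_size);
        close(fd);
        return 0;
    }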
Some of the things said in this thread (like the doubling of the number of registers) only apply to x86 → x86-64, not to 64-bit in general. Just like the fact that under x86-64 one is guaranteed to have SSE2, 686 opcodes and a cheap way to do PIC. These features are strictly speaking not about 64-bit.
Moreover, quite often people point to the doubling of registers as the cause of the speedup, while it is more likely the default use of SSE2 that does the trick (accelerating memcpy and similar functions). If you enable the same set for x86, the difference is far smaller. (*)
Also keep in mind that there is often an initial penalty involved because the average data structure grows simply because the size of a pointer is larger. This also has cache effects, but is most noticeable in the fact that the average memcpy() (or whatever the equivalent memory copy is in your language) takes longer. This is only of the order of a few percent, by the way, but the speedups named above are of that magnitude too.
Usually alignment overhead is also bigger on 64-bit architectures, blowing up structures even more.
Overall, my simple tests indicate they roughly cancel each other out, if drivers and runtime libraries have fully adapted, giving no significant speed difference for the average app. However, some apps can suddenly get faster (e.g. when depending on AES) or slower (a crucial data structure that is constantly moved around/scanned/walked and contains a lot of pointers).
Note that most JIT-VM languages (Java, .NET) use significantly more pointers on average (internally) than e.g. C++. Their memory use probably increases more than for the average program, but I wouldn't equate that directly to a slowdown (since these are really complex and funky beasts, often hard to predict without measuring).
(*) A little-known fact is that the number of SSE registers also doubles in 64-bit mode.
Apart from the already mentioned advantages here are some more regarding security:
Another advantage that comes to mind is that the amount of virtually contiguous memory allocated with vmalloc() in the Linux kernel can be larger in 64-bit mode.
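For completeness, a bare-bones (and untested) kernel-module sketch of what such an allocation looks like; on a 32-bit kernel the vmalloc area is a comparatively small fixed window, while on a 64-bit kernel it is vastly larger:

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/vmalloc.h>

    static void *big_buf;

    static int __init big_alloc_init(void)
    {
        /* A contiguous *virtual* allocation; on 32-bit kernels a request this
           large can easily fail because the vmalloc window is so small. */
        big_buf = vmalloc(512UL * 1024 * 1024);
        return big_buf ? 0 : -ENOMEM;
    }

    static void __exit big_alloc_exit(void)
    {
        vfree(big_buf);
    }

    module_init(big_alloc_init);
    module_exit(big_alloc_exit);
    MODULE_LICENSE("GPL");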