views: 2529

answers: 3

Is there a difference in double size when I run my app in 32-bit and 64-bit environments?

If I am not mistaken, a double in a 32-bit environment will take up 16 digits after the decimal point, whereas a double in a 64-bit environment will take up 32 bits. Am I right?

+14  A: 

No, an IEEE 754 double-precision floating-point number is always 64 bits. Similarly, a single-precision float is always 32 bits.

If your question is about C# and/or .NET specifically (as your tag indicates), all of the data type sizes are fixed, independent of your system architecture. This is the same as in Java, but different from C and C++, where type sizes vary from platform to platform.

It is common for the integral types to have different sizes on different architectures in C and C++. For instance, int was 16 bits wide in 16-bit DOS and 32 bits wide in Win32. However, the IEEE 754 standard is so ubiquitous for floating-point computation that the sizes of float and double do not vary on any system you will find in the real world: double was 64 bits 20 years ago, and it is 64 bits today.
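For illustration, here is a minimal C# sketch (the class name and output labels are illustrative) that makes the contrast visible: sizeof on the built-in numeric types is a compile-time constant, while IntPtr.Size is the one size that tracks the running process.

using System;

class TypeSizes
{
    static void Main()
    {
        // sizeof() on the built-in numeric types is a compile-time
        // constant in C#; it never changes with the platform target.
        Console.WriteLine("sizeof(float)  = {0}", sizeof(float));   // 4 on any architecture
        Console.WriteLine("sizeof(double) = {0}", sizeof(double));  // 8 on any architecture

        // Only the pointer size reflects the process architecture:
        // 4 in a 32-bit process, 8 in a 64-bit process.
        Console.WriteLine("IntPtr.Size    = {0}", IntPtr.Size);
    }
}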

John Kugelman
It's worth noting that the CLR (C#) and the JVM are bytecode VMs, which are portable across architectures by design (though there are counterexamples); this explains why the type sizes are the same regardless of host. C and C++ are oriented toward machine-code compilation and therefore typically have ABI differences to make optimal use of each target architecture. This explains the difference.
TokenMacGuy
+2  A: 

In C#, a double is always 8 bytes (64 bits).

Learner
+3  A: 

It doesn't change.

A simple way to check this is to write a small console app with

Console.WriteLine(Double.MaxValue);

and compile it for both x86 and x64.
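For example, a slightly expanded sketch (the class name here is illustrative) also prints the process bitness via IntPtr.Size, so each build confirms which architecture it actually ran as:

using System;

class DoubleCheck
{
    static void Main()
    {
        // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit process.
        Console.WriteLine("Running as a {0}-bit process", IntPtr.Size * 8);

        // This prints the same value under both platform targets, because
        // double is the 64-bit IEEE 754 binary64 format on any architecture.
        Console.WriteLine(Double.MaxValue);
    }
}

Compiled once with csc /platform:x86 and once with csc /platform:x64, the first line differs but the Double.MaxValue line is identical.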

statenjason