Hey all, quick question: How does a .NET decimal type get represented in binary in memory?

We all know how floating-point numbers are stored, and thus the reasons for their inaccuracy, but I can't find any information about decimal except the following:

  1. Apparently more accurate than binary floating-point numbers
  2. Takes 128 bits of memory
  3. A range of ±(2^96 − 1), i.e. a 96-bit magnitude plus a sign
  4. 28 (sometimes 29?) total significant digits in the number

Is there any way I can figure this out? The computer scientist in me demands the answer, and after an hour of attempted research I can't find it. It seems like either there are a lot of wasted bits or I'm just picturing this wrong in my head. Can anyone shed some light on this, please? Thanks.

+6  A: 

Decimal.GetBits will give you the information you want.

Basically it's a 96-bit integer as the mantissa, plus a sign bit, plus an exponent (0-28) saying how many decimal places to shift it to the right; in other words, the value is ±mantissa / 10^exponent.
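As an illustrative sketch (plain C#, using decimal.GetBits and BigInteger), the parts can be pulled back out like this. GetBits returns four ints: the low, middle and high 32 bits of the 96-bit mantissa, then a flags word whose bit 31 is the sign and whose bits 16-23 are the scale; the remaining flag bits are always zero, which is where the "wasted" bits you noticed go:

    using System;
    using System.Numerics;

    class DecimalLayout
    {
        static void Main()
        {
            decimal d = 3.261m;
            int[] bits = decimal.GetBits(d);

            // bits[0..2] hold the 96-bit mantissa, least significant int first.
            BigInteger mantissa = (uint)bits[2];
            mantissa = (mantissa << 32) | (uint)bits[1];
            mantissa = (mantissa << 32) | (uint)bits[0];

            // bits[3] is the flags word: bit 31 is the sign,
            // bits 16-23 are the scale (0-28); all other bits are zero.
            int scale = (bits[3] >> 16) & 0xFF;
            bool negative = bits[3] < 0;

            Console.WriteLine($"mantissa={mantissa} scale={scale} negative={negative}");
            // Prints: mantissa=3261 scale=3 negative=False
        }
    }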

So to represent 3.261 you'd have a mantissa of 3261, a sign bit of 0 (i.e. positive), and an exponent of 3. Note that decimal isn't normalized (deliberately), so you can also represent 3.2610 using a mantissa of 32610 and an exponent of 4, for example.
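A quick sketch showing that non-normalization directly, again via decimal.GetBits:

    using System;

    class NotNormalized
    {
        static void Main()
        {
            // 3.261m and 3.2610m compare equal, but are stored differently:
            // the trailing zero survives in the mantissa and the scale.
            foreach (decimal d in new[] { 3.261m, 3.2610m })
            {
                int[] b = decimal.GetBits(d);
                int scale = (b[3] >> 16) & 0xFF;
                Console.WriteLine($"{d}: mantissa low word={b[0]} scale={scale}");
            }
            // Prints: 3.261: mantissa low word=3261 scale=3
            //         3.2610: mantissa low word=32610 scale=4
        }
    }

This is also why ToString preserves trailing zeros: 3.2610m prints as "3.2610", not "3.261".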

I have some more information in my article on decimal floating point.

Jon Skeet
+1 fantastic answer, right to the point and rich with information.
JoshD