Is there a reason that a C# System.Decimal remembers the number of trailing zeros it was entered with? See the following example:

public void DoSomething()
{
    decimal dec1 = 0.5M;
    decimal dec2 = 0.50M;
    Console.WriteLine(dec1);            //Output: 0.5
    Console.WriteLine(dec2);            //Output: 0.50
    Console.WriteLine(dec1 == dec2);    //Output: True
}

The decimals are classed as equal, yet dec2 remembers that it was entered with an additional zero. What is the reason/purpose for this?

+1  A: 

Decimals represent fixed-precision decimal values. The literal 0.50M has the two-decimal-place precision embedded in it, so the decimal variable it creates remembers that it is a two-decimal-place value. This behaviour is entirely by design.

The comparison is an exact numerical equality check on the values, so trailing zeroes do not affect the outcome.
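
As a quick illustration (my addition, not part of the original answer), decimal.GetBits makes the stored scale visible: the fourth element of the returned array carries the scale factor in bits 16-23.

public void ShowScale()
{
    decimal dec1 = 0.5M;
    decimal dec2 = 0.50M;

    // The fourth int from GetBits holds the sign bit and, in bits 16-23,
    // the scale: the power of ten the 96-bit integer is divided by.
    int scale1 = (decimal.GetBits(dec1)[3] >> 16) & 0xFF;
    int scale2 = (decimal.GetBits(dec2)[3] >> 16) & 0xFF;

    Console.WriteLine(scale1);          // Output: 1
    Console.WriteLine(scale2);          // Output: 2
    Console.WriteLine(dec1 == dec2);    // Output: True - equality ignores the scale
}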

David M
"Fixed-precision" could be misleading here. It's a floating point type, like `float` and `double` - it's just that the point is a decimal point instead of a binary point (and the limits are different).
Jon Skeet
+6  A: 

It can be useful to represent a number including its accuracy - so 0.5m could be used to mean "anything between 0.45m and 0.55m" (with appropriate limits) and 0.50m could be used to mean "anything between 0.495m and 0.505m".

I suspect that most developers don't actually use this functionality, but I can see how it could be useful sometimes.

I believe this ability first arrived in .NET 1.1, btw - I think decimals in 1.0 were always effectively normalized.
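
A side note of my own (not from the answer itself): the scale also propagates through arithmetic, which fits the accuracy interpretation - addition keeps the larger operand scale, while multiplication adds the operand scales together.

public void ShowScalePropagation()
{
    Console.WriteLine(0.5M + 0.50M);    // Output: 1.00 (larger scale wins)
    Console.WriteLine(1.0M * 1.0M);     // Output: 1.00 (scales add: 1 + 1)
    Console.WriteLine(0.5M * 0.50M);    // Output: 0.250 (scales add: 1 + 2)
}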

Jon Skeet
For the reasons explained, 0.5 and 0.50 *do* carry different information. Precision is **very** relevant in some fields, namely mathematics and chemistry.
ANeves
+1  A: 

I think it was done to provide a better internal representation for numeric values retrieved from databases. Database engines have a long history of storing numbers in a decimal format (avoiding rounding errors) with an explicit specification for the number of digits in the value.

Compare the SQL Server `decimal` and `numeric` column types, for example.
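
To sketch the connection (my example, with a made-up value, not from the answer): a value that a NUMERIC(10,2) column hands back as the text "19.90" keeps its two-decimal-place scale when parsed, so the column's declared precision survives into the decimal.

public void ShowParsedScale()
{
    // decimal.Parse preserves the scale of the input text.
    // CultureInfo comes from System.Globalization.
    decimal price = decimal.Parse("19.90", CultureInfo.InvariantCulture);
    Console.WriteLine(price);    // Output: 19.90, not 19.9
}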

Hans Passant