views: 1931
answers: 5

Simple question - why does the Decimal type define these constants (Zero, One, MinusOne)? Why bother?

I'm looking for a reason why this is defined by the language, not possible uses or effects on the compiler. Why put this in there in the first place? The compiler can just as easily inline 0m as it can Decimal.Zero, so I'm not buying it as a compiler shortcut.

A: 

My opinion on it is that they are there to help avoid magic numbers.

Magic numbers are basically anywhere in your code that you have an arbitrary number floating around. For example:

int i = 32;

This is problematic in the sense that nobody can tell why i is getting set to 32, or what 32 signifies, or if it should be 32 at all. It's magical and mysterious.
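The usual fix is to give the value a name; these Decimal properties arguably do the same thing for zero, one, and minus one. A sketch (the constant name here is made up for illustration):

const int MaxRetries = 32; // the name now explains what 32 is for
int i = MaxRetries;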

In a similar vein, I'll often see code that does this:

int i = 0;
int z = -1;

Why are they being set to 0 and -1? Is this just coincidence? Do they mean something? Who knows?

While Decimal.One, Decimal.Zero, etc. don't tell you what the values mean in the context of your application (maybe zero means "missing", etc.), they do tell you that the value has been deliberately set, and likely has some meaning.

While not perfect, this is much better than not telling you anything at all :-)

Note: It's not for optimization. Observe this C# code:

public static Decimal d = 0M;               // initialized from a literal
public static Decimal dZero = Decimal.Zero; // initialized from the static field

When you look at the generated code using ildasm, both options result in identical MSIL. System.Decimal is a value type, so Decimal.Zero is no more "optimal" than just using a literal value.
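For anyone who wants to verify this themselves, one way (assuming the csc and ildasm tools from the .NET SDK are on your PATH, and the two fields above are wrapped in a class saved as Fields.cs) is:

csc /target:library Fields.cs
ildasm /text Fields.dll

Static field initializers like these are compiled into the type's static constructor (.cctor), so the IL emitted for the two fields can be compared side by side in the dump.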

Orion Edwards
Some additional info: http://gregbeech.com/blogs/tech/archive/2008/07/07/emitting-decimal-pseudo-constants-with-reflection-emit.aspx
boj
But there is no Int32.Zero property...
Guffa
Your argument hurts my head. They are numbers, they only become magic numbers when you start attaching random meaning to them such as -1 means do the dishes and 1 means bake cakes. Decimal.One is just as magical as 1 but arguably harder to read (but perhaps more optimal).
fuzzy-waffle
My point was that if someone types Decimal.Zero, they are more likely to have done that deliberately because Zero has some meaning, rather than just arbitrarily setting it to 0.
Orion Edwards
I take issue with saying assigning something to 0 is arbitrary. It makes sense for enumerations, where symbolic constants are mapped to numbers, but mapping numbers to numbers seems pretty insane. Sometimes 0 is really... zero. Units, on the other hand, would be a nice construct. 1km != 1.
fuzzy-waffle
Saying Decimal.Zero is just as arbitrary as saying 0.0. If we were talking about something that changes on an operating system level, like "/" vs some library constant that describes the filesystem separator, it would make sense, but Decimal.Zero is always just 0.0.
Benson
Damn, it was just my opinion. I thought I made that clear :-(
Orion Edwards
Visual Studio could do some dynamagic to allow any arbitrary number, e.g., Decimal.ThreeHundredTwentySevenPointThreeSixTwoFive. This would solve the magic number problem forever! :)
James M.
In a past job I have been told I must have consts for numbers like "1", so as to avoid magic numbers in my code. I wish all coding standards started off by saying the reader must have a brain!
Ian Ringrose
A: 

Because a Decimal structure is rather large and those values are commonly used.

Each time you use one of these properties the code doesn't have to contain a 16 byte literal value (or a four byte integer value and a call to a conversion routine).

Guffa
This makes a ton of sense actually...
Jasmine
This is incorrect. Setting a decimal to Decimal.Zero produces MSIL identical to setting it to 0. Decimal is a struct, not a reference type.
Orion Edwards
I believe Orion is correct.
Rob P.
+11  A: 

Some .NET languages do not support decimal as a datatype, and it is more convenient (and faster) in these cases to write Decimal.One instead of new Decimal(1).

Java's BigInteger class has ZERO and ONE as well, for the same reason.
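A minimal C# sketch of the difference (C# itself has decimal literals, so the second line stands in for what a language without them has to do):

decimal a = 1M;             // decimal literal, available in C#
decimal b = new decimal(1); // construct from an int: what a language without decimal literals emits
decimal c = decimal.One;    // read a pre-built value instead of constructing one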

mihi
Can someone explain this to me? 6 upvotes means it's got to be a good answer - but: How does a .NET language that doesn't support decimal as a datatype benefit from having a shared read-only property that returns a decimal and is defined as part of the decimal class?
Rob P.
He probably meant that if a language doesn't have decimal literals, using a constant would be more efficient than converting an int literal to a decimal. Every .NET language supports the System.Decimal datatype; it's part of the CLR.
Niki
Yeah, Niki, that is what I wanted to say. You can of course use System.Decimal in all .NET languages, but some support it better (like C#, which has a decimal keyword and decimal literals) and some worse. Sorry, English is not my native language...
mihi
Thank you for the explanation. That makes sense now.
Rob P.
+9  A: 

Small clarification: they are actually static readonly values, not constants. That is a distinct difference in .NET, because constant values are inlined by the various compilers, which makes it impossible to track their usage in a compiled assembly. Static readonly values, however, are not copied but referenced. This matters for your question because it means their use can be analyzed.
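A minimal sketch of the distinction (the type and member names are illustrative):

// In a library assembly:
public static class M
{
    public const decimal C = 1M;            // compile-time constant: consumers copy the value
    public static readonly decimal R = 1M;  // runtime field: consumers emit a field load
}

// In a consuming assembly:
decimal a = M.C; // 1M is baked into this assembly; no reference to M.C survives compilation
decimal b = M.R; // compiles to a load of M.R, so its usage can be found by analysis tools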

If you use Reflector and dig through the BCL, you'll notice that MinusOne and Zero are only used within the VB runtime. They exist primarily to serve conversions between Decimal and Boolean values. Why MinusOne is used coincidentally came up on a separate thread just today (link)

Oddly enough, if you look at the Decimal.One value you'll notice it's used nowhere.

As to why they are explicitly defined ... I doubt there is a hard and fast reason. There appears to be no specific performance benefit, only a bit of convenience, that can be attributed to their existence. My guess is that they were added by someone during the development of the BCL for their own convenience and just never removed.

JaredPar
A: 

Those 3 values arghhh !!!

I think they may have something to do with what I call trailing 1's.

Say you have this formula:

(x) 1.116666 + (y) = (z) 2.000000

but x and z are rounded to 1.11 and 2.00, and you are asked to calculate (y).

So you may think y = 2.00 - 1.11 = 0.89. Actually y equals 0.88 (the exact value is 0.883334), so there is a 0.01 difference.

Depending on the real values of x and y, the difference will vary from -0.01 to +0.01. In some cases, when dealing with a bunch of those trailing 1's, to facilitate things you can check whether the trailing value equals Decimal.MinusOne / 100, Decimal.One / 100, or Decimal.Zero / 100 to fix them, as the sketch below shows.
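For what it's worth, a sketch of the discrepancy being described, assuming x is truncated to two decimal places for presentation (the values are illustrative):

decimal x = 1.116666M;
decimal z = 2.000000M;
decimal yTrue = Math.Round(z - x, 2);               // 0.88: exact difference, rounded once
decimal xShown = decimal.Truncate(x * 100M) / 100M; // 1.11: x as displayed
decimal yShown = Math.Round(z, 2) - xShown;         // 0.89: computed from displayed values
decimal diff = yShown - yTrue;                      // 0.01, i.e. Decimal.One / 100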

This is how I've made use of them.

metro
What are you talking about? You're doing bad math with inexact values and then using the division operator to come up with your epsilon? What makes you think that the result will always be off by exactly 0.01, and not (for example) 0.005? If these values represent money, I'd be scared to do business with your application.
Daniel Pryden
:) I'm not calculating an end result; I have to give the value of (y) in the rounded format. How would you do it?
metro