My opinion is that they exist to help you avoid magic numbers.
Magic numbers are basically any arbitrary numbers floating around in your code without explanation. For example:
int i = 32;
This is problematic in the sense that nobody can tell why i is getting set to 32, or what 32 signifies, or if it should be 32 at all. It's magical and mysterious.
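The usual fix is to give the number a name. A minimal sketch (MaxRetries is a name I made up purely for illustration):

const int MaxRetries = 32;
int i = MaxRetries; // now the intent is obvious at the point of use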
In a similar vein, I'll often see code that does this:
int i = 0;
int z = -1;
Why are they being set to 0 and -1? Is this just coincidence? Do they mean something? Who knows?
While Decimal.One, Decimal.Zero, etc. don't tell you what the values mean in the context of your application (maybe zero means "missing", etc.), they do tell you that the value has been deliberately set and likely has some meaning.
While not perfect, this is much better than not telling you anything at all :-)
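For instance, compare these two initializations (the variable names here are hypothetical):

decimal subtotal = 0;             // is 0 meaningful here, or just a leftover default?
decimal discount = Decimal.Zero;  // clearly a deliberate starting value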
Note
It's not for optimization. Observe this C# code:
public static Decimal d = 0M;
public static Decimal dZero = Decimal.Zero;
When looking at the generated bytecode using ildasm, both options result in identical MSIL. System.Decimal is a value type, so Decimal.Zero is no more "optimal" than just using a literal value.
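If you want to convince yourself there's no difference in behavior either, here's a quick sanity-check sketch (it only demonstrates equal values at runtime, not the IL itself; use ildasm for that):

using System;

class Demo
{
    static void Main()
    {
        decimal a = 0M;
        decimal b = Decimal.Zero;

        // Both forms produce the same value either way.
        Console.WriteLine(a == b); // prints True
    }
}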