Update
OK, after some investigation, and thanks in large part to the helpful answers provided by Jon and Hans, this is what I was able to put together. So far it seems to work well, though I wouldn't bet my life on its total correctness, of course.
public static int GetSignificantDigitCount(this decimal value)
{
    /* So, the decimal type is basically represented as a fraction of two
     * integers: a numerator that can be anything, and a denominator that is
     * some power of 10.
     *
     * For example, the following numbers are represented by
     * the corresponding fractions:
     *
     *     VALUE     NUMERATOR    DENOMINATOR
     *     1         1            1
     *     1.0       10           10
     *     1.012     1012         1000
     *     0.04      4            100
     *     12.01     1201         100
     *
     * So basically, if the magnitude is greater than or equal to one,
     * the number of digits is the number of digits in the numerator.
     * If it's less than one, the number of digits is the number of digits
     * in the denominator.
     */
    int[] bits = decimal.GetBits(value);

    if (value >= 1M || value <= -1M)
    {
        int highPart = bits[2];
        int middlePart = bits[1];
        int lowPart = bits[0];

        // Reassemble the 96-bit numerator as a non-negative decimal with scale 0.
        decimal num = new decimal(lowPart, middlePart, highPart, false, 0);

        // Floor(Log10(num)) + 1 is the digit count of the numerator,
        // including exact powers of ten such as 1 and 10.
        int digitCount = (int)Math.Floor(Math.Log10((double)num)) + 1;
        return digitCount;
    }
    else
    {
        int scalePart = bits[3];

        // According to MSDN, the exponent is represented by
        // bits 16-23 (the 2nd word):
        // http://msdn.microsoft.com/en-us/library/system.decimal.getbits.aspx
        int exponent = (scalePart & 0x00FF0000) >> 16;

        return exponent + 1;
    }
}
I haven't tested it all that thoroughly. Here are a few sample inputs/outputs, though:
Value       Precision
0           1 digit(s).
0.000       4 digit(s).
1.23        3 digit(s).
12.324      5 digit(s).
1.2300      5 digit(s).
-5          1 digit(s).
-5.01       3 digit(s).
-0.012      4 digit(s).
-0.100      4 digit(s).
0.0         2 digit(s).
10443.31    7 digit(s).
-130.340    6 digit(s).
-80.8000    6 digit(s).
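For reference, the rows above can be reproduced with a loop along these lines (assuming the extension method lives in some visible static class; the test values are just the ones listed):

decimal[] testValues =
{
    0M, 0.000M, 1.23M, 12.324M, 1.2300M, -5M, -5.01M,
    -0.012M, -0.100M, 0.0M, 10443.31M, -130.340M, -80.8000M
};

foreach (decimal testValue in testValues)
{
    Console.WriteLine("{0} {1} digit(s).", testValue, testValue.GetSignificantDigitCount());
}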
Using this code, I imagine I would accomplish my goal by doing something like this:
public static decimal DivideUsingLesserPrecision(decimal x, decimal y)
{
    int xDigitCount = x.GetSignificantDigitCount();
    int yDigitCount = y.GetSignificantDigitCount();

    int lesserPrecision = System.Math.Min(xDigitCount, yDigitCount);

    return System.Math.Round(x / y, lesserPrecision);
}
I haven't really finished working through this, though. Anybody who wants to share thoughts: that would be much appreciated!
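One gap I can already see: Math.Round rounds to a fixed number of decimal places, whereas lesserPrecision is a count of significant digits, so the two only line up by coincidence. Below is a rough sketch of how I might round to significant figures instead; the RoundToSignificantFigures helper is just something I invented for illustration, and I've only checked it against the example from the original question.

public static decimal RoundToSignificantFigures(decimal value, int significantFigures)
{
    if (value == 0M)
    {
        return 0M;
    }

    // Position of the most significant digit relative to the decimal point:
    // 2 for 21.7..., 1 for 5.6..., -1 for 0.012, and so on.
    int magnitude = (int)Math.Floor(Math.Log10((double)Math.Abs(value))) + 1;

    // Decimal places needed so that the total significant digit count
    // comes out to significantFigures.
    int decimals = significantFigures - magnitude;

    if (decimals < 0)
    {
        // Rounding to the left of the decimal point: scale down, round, scale back up.
        decimal scale = (decimal)Math.Pow(10, -decimals);
        return Math.Round(value / scale) * scale;
    }

    return Math.Round(value, decimals);
}

With that in place, DivideUsingLesserPrecision would end with return RoundToSignificantFigures(x / y, lesserPrecision);, which gives 21.73 for the 123.4M / 5.6789M example in the original question.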
Original Question
Suppose I write this code:
decimal a = 1.23M;
decimal b = 1.23000M;
Console.WriteLine(a);
Console.WriteLine(b);
The above will output:
1.23
1.23000
I find that this also works if I use decimal.Parse("1.23") for a and decimal.Parse("1.23000") for b (which means this question applies to cases where the program receives user input).
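That is, the following prints the same two lines as the literal version above:

decimal a = decimal.Parse("1.23");
decimal b = decimal.Parse("1.23000");

Console.WriteLine(a); // 1.23
Console.WriteLine(b); // 1.23000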
So clearly a decimal value is somehow "aware" of what I'll call its precision. However, I see no members on the decimal type that provide any way of accessing this, aside from ToString itself.
Suppose I wanted to divide two decimal values and trim the result to the precision of the less precise argument. In other words:
decimal a = 123.4M;
decimal b = 5.6789M;
decimal x = a / b;
Console.WriteLine(x);
The above outputs:
21.729560302171195125816619416
What I'm asking is: how could I write a method that would return 21.73 instead (since 123.4M has four significant figures)?
To be clear: I realize I could call ToString on both arguments, count the significant figures in each string, and use this number to round the result of the calculation. I'm looking for a different way, if possible.
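To make that concrete, the string-based version I have in mind would look something like this (GetSignificantDigitCountViaString is just a name for illustration):

public static int GetSignificantDigitCountViaString(this decimal value)
{
    // Count the digit characters; the sign and the decimal separator are skipped.
    int count = 0;

    foreach (char c in value.ToString())
    {
        if (char.IsDigit(c))
        {
            count++;
        }
    }

    return count;
}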
(I also realize that in most scenarios where you're dealing with significant figures, you probably don't need to be using the decimal type. But I'm asking because, as I mentioned in the beginning, the decimal type appears to include information about precision, whereas double does not, as far as I know.)
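For comparison, double carries no such information; the two literals below are exactly the same value and print identically:

double c = 1.23;
double d = 1.23000;

Console.WriteLine(c);      // 1.23
Console.WriteLine(d);      // 1.23
Console.WriteLine(c == d); // True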