Yesterday I asked this general question about decimals and their internal precisions. Here is a specific question about the scenario I'm trying to address.

I have a column in SQL Server that is typed decimal(18,6). When I fetch these values, the .NET decimals that are created match the precision in the database. They look like this:

1.100000
0.960000
0.939000
0.844400
0.912340

I need to present (ToString) these values according to these rules:

  • Non-zeroes are always shown.
  • Trailing zeroes on or before the nth decimal place are shown.
  • Trailing zeroes after the nth decimal place are not shown.

So, this is what I want when n is 3:

1.100
0.960
0.939
0.8444
0.91234

Now, I have written some code that ToStrings the decimal, removing all trailing zeroes, and then analyzes the string: it looks for a decimal point and counts the decimal places to see how many trailing zeroes need to be added back on. Is there a better way to accomplish this?

Also, I know I said ToString in the question above... but if I could modify the decimals on their way out of my data-access layer, such that consumers always get decimals with the proper precision, that would be better. Is it possible to perform this operation on the decimal itself without string manipulation?

+8  A: 

To output your required format with n=3, you could use:

number.ToString("0.000###")

With n as a parameter, you could build a custom format string:

string format = "0." + new string('0', n) + new string('#', 6 - n);
s = number.ToString(format);
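Pulling the two lines above into a helper (the helper name is my own, hypothetical) and running it against the sample values from the question, assuming 0 ≤ n ≤ 6 to match the decimal(18,6) column:

```csharp
using System;

class Program
{
    // Hypothetical helper: show at least n decimal places, and at most 6
    // (the scale of the decimal(18,6) column in the question).
    static string FormatWithMinPlaces(decimal number, int n)
    {
        string format = "0." + new string('0', n) + new string('#', 6 - n);
        return number.ToString(format);
    }

    static void Main()
    {
        Console.WriteLine(FormatWithMinPlaces(1.100000m, 3)); // 1.100
        Console.WriteLine(FormatWithMinPlaces(0.844400m, 3)); // 0.8444
        Console.WriteLine(FormatWithMinPlaces(0.912340m, 3)); // 0.91234
    }
}
```

Note that ToString uses the current culture's decimal separator; pass CultureInfo.InvariantCulture as a second argument if you need "." regardless of locale.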
Patrick McDonald
Looks great - thanks.
David B
A: 

Anything you could do on the decimal itself wouldn't be enough, for at least the following reason: you couldn't store in the decimal number itself the fact that you want x leading/trailing zeroes to be printed when the value is converted to a string. The decimal data type in .NET is just a 128-bit value type, all dedicated to representing the number. It carries no information about formatting when the number is printed, and the fact that it's a value type allows it to be quickly passed as an argument on the stack, which is also a relief for the GC.
You could wrap decimal in some class, generate instances of that class in your DAL, and return a collection of those if you need to do some number crunching later in the app. If that's not the case, you could simply apply the "stringification" in your DAL and return a collection of strings.

emaster70
"all dedicated to representing the number" - disagree, as shown in my original question: http://stackoverflow.com/questions/1132765/adjusting-decimal-precision-net Here 2 is represented with varying precision.
David B
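For what it's worth, the scale the comment refers to can be adjusted arithmetically, with no string manipulation at all. A sketch (the helper name is hypothetical): dividing by 1.0…0m is a commonly used trick that normalizes away trailing zeros, and adding a zero whose scale is n then pads the scale back up to at least n, because decimal addition keeps the larger scale of its two operands:

```csharp
using System;

class Program
{
    // Hypothetical helper: return a decimal whose scale is at least n,
    // with any trailing zeros beyond n removed. No strings involved.
    static decimal AdjustScale(decimal value, int n)
    {
        // Dividing by 1 with maximum scale normalizes trailing zeros away.
        decimal normalized = value / 1.0000000000000000000000000000m;
        // A zero with scale n: decimal(lo, mid, hi, isNegative, scale).
        decimal zeroWithScaleN = new decimal(0, 0, 0, false, (byte)n);
        // Addition keeps the larger scale, restoring at least n places.
        return normalized + zeroWithScaleN;
    }

    static void Main()
    {
        Console.WriteLine(AdjustScale(1.100000m, 3)); // 1.100
        Console.WriteLine(AdjustScale(0.912340m, 3)); // 0.91234
    }
}
```

This would let a DAL hand out decimals that already ToString to the desired precision, per the second part of the question.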
+2  A: 

You should be able to do this by formatting the value as such:

var str = num.ToString("#0.000#####");

The number of 0s determines the minimum number of decimal places shown, and the number of 0s plus #s the maximum. I'm not sure if you actually want a maximum, but I believe this is the closest you'll get. You could of course just set it to an arbitrarily large number.
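One caveat about that maximum (my own illustration, not from the answer): the # placeholders cap how many digits are shown, so a value with more decimal places than the format allows is rounded rather than truncated:

```csharp
using System;

class Program
{
    static void Main()
    {
        // "#0.000#####" allows at most 8 decimal places, so the ninth
        // digit (9) causes the eighth to round up.
        Console.WriteLine(0.123456789m.ToString("#0.000#####")); // 0.12345679
    }
}
```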

Noldorin