views: 358
answers: 4

Has anybody got any ideas on this one?

When we run:

printf("%.0f", 40.5)

On a Windows box the output is "41", but on our production Ubuntu server we're getting "40".

+2  A: 

How about using .round instead? Rails even enhances it so that you can specify the precision (see API doc).
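
For example, assuming a Rails/ActiveSupport version that patches Float#round to take an optional precision (just a sketch):

40.5.round        # => 41    plain Ruby Float#round: nearest integer, halves round away from zero
40.5678.round(2)  # => 40.57 precision argument added by ActiveSupport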

ujh
We wanted to do that in the first place, although 40.5.round(0) gives 41.0, which is very annoying. I see from the API doc that .round now accepts nil to leave just 41 with no .0 after it; I'm sure .round(nil) throws an error in the Rails version we're using. To get around it we round first with round(0) and then apply the string format. Thanks.
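
Roughly what we do now, assuming ActiveSupport's precision-aware round:

rounded = 40.5.round(0)   # => 41.0 -- still a Float, still shows the .0
"%.0f" % rounded          # => "41" -- 41.0 is exact, so the format is safe here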
tsdbrown
I don't understand tsdbrown's comment. Float#round does not take an argument, BigDecimal#round takes the number of decimal places, and ruby-doc.org does not show nil as a legal argument. Also, BigDecimal does not distinguish between 41.0 and 41.
ScottJ
A: 

Looks like a simple case of binary floating point imprecision. On one machine you get 40.499999999, which rounds to 40; on the other you get 40.500000000001, which rounds to 41.

If you need exact numbers, you should not use binary floating point. Use fixed-point decimal or decimal floating point instead.

Edit: you're using BigDecimal, you say. Why not avoid any conversion to float by using #round and then #to_i? (Or #floor or #ceil instead of #round... it's not clear what your goal is.)

require 'bigdecimal'

b = BigDecimal.new("40.5")
print b.round.to_i  # => 41
ScottJ
I've kept the example very simple, but that number is actually coming from a BigDecimal field in the database. I'm sure it's the floating point problem you mention, just not sure why it's treated as a float. You still get the problem with: "%.0f" % BigDecimal("40.5") and "%.0d" % BigDecimal("40.5")
tsdbrown
That's because the BigDecimals here are getting converted to floats (via #to_f) before going to the formatting operation.
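
i.e., as far as I can tell these two do the same thing, because the %f directive converts its argument to a Float first:

require 'bigdecimal'

b = BigDecimal("40.5")
"%.0f" % b        # the BigDecimal is converted to a Float before formatting...
"%.0f" % b.to_f   # ...so this is effectively what gets formatted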
ScottJ
In case it wasn't clear, I edited my above answer to address the problem with BigDecimal specifically.
ScottJ
ScottJ, thanks for your help and sorry if I haven't been clear. The goal is to take a BigDecimal from the database and round it based on the scale, which varies from 0 to 4. This method runs on many different numbers, which is why I haven't used .to_i, .ceil, or .floor: some of the numbers need to be rounded to 0 decimal places, some to 4. Once the number is rounded I need to display it (PDF/online/email/wherever) as a string, but the client needs any trailing .0 removed, i.e. 40.0 must be displayed as 40. That was the goal.
tsdbrown
The "simple case of binary floating point imprecision" part was a helpful reminder, I overlooked the conversion to float in the original code. I've used BigDecimal#round first based on the scale, then used the format specifier to remove any .0's. Run extensive tests on both platforms and all appears to be in order.
tsdbrown
@ScottJ: 40.5 is exactly representable in binary, so this can't be the problem.
Rick Regan
Interesting point. I have no idea, then.
ScottJ
A: 

use %.0g instead

aykoc
A: 

ScottJ's answer does not explain your problem -- 40.5 is exactly representable in binary. What's probably happening is this: Windows' printf "rounds half up," and Ubuntu's printf "rounds half even."
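
You can see it with another exactly representable halfway value; if I'm right about the two C runtimes (Ruby hands %f formatting off to the platform's C library), the two only disagree on ties where rounding up would give an odd result:

printf("%.0f\n", 40.5)   # glibc: 40   Windows CRT: 41   (half to even vs. half up)
printf("%.0f\n", 41.5)   # glibc: 42   Windows CRT: 42   (42 is already the even neighbour)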

Rick Regan