views:

441

answers:

4

I've noticed substantial pain over this constructor (even on this forum). People use it even though the documentation clearly states:

The results of this constructor can be somewhat unpredictable. (http://java.sun.com/javase/6/docs/api/java/math/BigDecimal.html#BigDecimal(double))

I've even seen JSR-13 being APPROVED with a recommendation stating:

Existing specifications that might be deprecated: We propose deprecating the BigDecimal(double) constructor, which currently gives results that are different to the Double.toString() method.

Despite all this, the constructor has not yet been deprecated.

I'd love to hear any views on this.

Cheers

+1  A: 

That particular constructor, like all floating point operations, is an approximation; it's not really broken, it just has shortcomings. If you know your stuff and approach it with care, you won't get any surprises. Exactly the same thing could be said of decimal literals being assigned to doubles/floats.
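The point about decimal literals can be illustrated with a small sketch (the class name is my own):

```java
public class FloatApprox {
    public static void main(String[] args) {
        // 0.1 and 0.2 are both stored as binary approximations,
        // so their sum is not exactly 0.3
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // false
    }
}
```

The "surprise" here is exactly the same one the BigDecimal(double) constructor exposes; the literal 0.1 was already an approximation before BigDecimal ever saw it.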

mP
Good point, it is like a landmine for unsuspecting developers. Landmines never served anyone in the long run... which is why the JSR, I guess.
Ryan Fernandes
+8  A: 

Considering the behavior of BigDecimal(double) is correct, in my opinion, I'm not too sure it really would be such a problem.

I wouldn't exactly agree with the wording of the documentation in the BigDecimal(double) constructor:

The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625.

(Emphasis added.)

Rather than saying "unpredictable", I think the wording should be "unexpected", and even so, the behavior is only unexpected for those who are not aware of the limitations of representing decimal numbers with floating point values.

As long as one keeps in mind that floating point values cannot represent all decimal values with precision, the value returned by using BigDecimal(0.1) being 0.1000000000000000055511151231257827021181583404541015625 actually makes sense.
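A sketch of the difference between the constructors (class name is mine; the long digit string matches the value quoted from the Javadoc above):

```java
import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // new BigDecimal(double): the exact binary value of the double 0.1
        BigDecimal fromDouble = new BigDecimal(0.1);
        // new BigDecimal(String): the decimal literal, represented exactly
        BigDecimal fromString = new BigDecimal("0.1");
        // BigDecimal.valueOf(double): goes through Double.toString(double),
        // giving the "expected" short form
        BigDecimal fromValueOf = BigDecimal.valueOf(0.1);

        System.out.println(fromDouble);
        // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(fromString);  // 0.1
        System.out.println(fromValueOf); // 0.1
    }
}
```

So the "predictable" alternatives for callers who want the decimal value they wrote are the String constructor or valueOf(double).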

If the BigDecimal object instantiated by the BigDecimal(double) constructor is consistent, then I would argue that the result is predictable.

My guess as to why the BigDecimal(double) constructor is not being deprecated is because the behavior can be considered correct, and as long as one knows how floating point representations work, the behavior of the constructor is not too surprising.

coobird
Great arguments! "as long as one knows how floating point representations work" ... guess this sentence eliminates ~95% of the programming world (probably more). Preventing those 95% of programmers from causing millions(?) of dollars in accounting errors might be what prompted that recommendation in JSR-13 :)
Ryan Fernandes
Following that reasoning, double and float (and not only those) must be removed ASAP.
Carlos Heuberger
+1  A: 

Deprecation is deprecated. Parts of APIs are only marked deprecated in exceptional cases.

So, run FindBugs as part of your build process. FindBugs has a detector PlugIn API and is also open source (LGPL, IIRC).

Tom Hawtin - tackline
I appreciate the comment on deprecation. Thank you.
Ryan Fernandes
... And after long consideration, I feel that is the only possible answer: a policy from the Keepers of the API. Nothing else could completely explain this.
Ryan Fernandes
A: 

Saying Double.toString() gives different results from new BigDecimal(double) is one way to put it. I think it's more accurate to say the result given by Double.toString() is a lie, while the BigDecimal constructor tells you the truth. I sympathize with the point that it is a landmine, but it's nice to have a way to see the actual value in the double.
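A quick sketch of the "lie" versus the "truth" (class name is mine): Double.toString prints the shortest decimal string that uniquely identifies the double, while the BigDecimal constructor exposes the value the double actually stores.

```java
import java.math.BigDecimal;

public class DoubleTruth {
    public static void main(String[] args) {
        double d = 0.1;
        // The shortest decimal that round-trips back to the same double
        System.out.println(Double.toString(d)); // 0.1
        // The exact binary value actually stored in d
        System.out.println(new BigDecimal(d));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}
```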

Nathan Hughes