views: 133
answers: 6

I'm writing code for a neural network, and I'm wondering whether I should use BigDecimal for it or not. I'm actually somewhat worried that even using double might not yield good results and that I might have to migrate my code to a more efficient language, like C++. I've read in a question here that BigDecimal is 1000 times slower than double. That's a lot.

On the other hand, I'm going to be working a lot with decimal numbers, and having more precision would always be good. I can't really tell whether limited precision could actually cause problems for the network, either. None of the implementations I've seen around use BigDecimal, so I'm probably not going to either. Although sometimes the network doesn't behave as it should; whether that's a precision error or a problem with its logic, I'm not sure.

But I'm wondering, do you guys only use BigDecimal when dealing with money? Any thoughts about this?

A: 

I use integers/longs when dealing with money, because using any sort of decimal representation is absurd. You should DEFINITELY not use doubles, and there are some money-handling libraries out there you may want to look at.

As I recall, however, the money libraries are immature or underdeveloped.

Stefan Kendall
This only works well when you have a constant number of decimal places for your monetary amounts (which, for some uses, may always be the case).
matt b
What do you mean, using decimal is absurd? Decimal is what money is! Decimal libraries just use longs internally and handle the decimal-place adjustment for you... it's the same as what you would have to do manually if you used longs! There's no reason not to use a decimal type, except performance.
JoelFan
A: 

Integers for whole and fractional values, combined with Currency, are the way to go. Either find a library or write your own.
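
A minimal sketch of that approach, storing amounts as minor units (cents) in a long alongside a java.util.Currency; the Money class and its method names here are illustrative, not taken from any particular library:

    import java.math.BigDecimal;
    import java.util.Currency;

    public final class Money {
        private final long minorUnits;   // e.g. cents for USD
        private final Currency currency;

        public Money(long minorUnits, Currency currency) {
            this.minorUnits = minorUnits;
            this.currency = currency;
        }

        public Money plus(Money other) {
            if (!currency.equals(other.currency)) {
                throw new IllegalArgumentException("currency mismatch");
            }
            // addExact throws on overflow instead of silently wrapping
            return new Money(Math.addExact(minorUnits, other.minorUnits), currency);
        }

        @Override
        public String toString() {
            // BigDecimal is used for display only; all arithmetic stays in longs
            int scale = currency.getDefaultFractionDigits(); // 2 for USD
            return BigDecimal.valueOf(minorUnits, scale) + " " + currency.getCurrencyCode();
        }

        public static void main(String[] args) {
            Currency usd = Currency.getInstance("USD");
            Money a = new Money(1999, usd); // $19.99 stored as 1999 cents
            Money b = new Money(1, usd);    // one cent
            System.out.println(a.plus(b));  // 20.00 USD
        }
    }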

duffymo
A: 

You should absolutely not use floating-point numbers for fixed-point amounts such as currency.

In the past I've used a custom Money class that merely wraps a BigDecimal instance; it has worked well and caused no issues.
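
A sketch of what such a wrapper might look like, assuming a fixed scale of two decimal places and HALF_EVEN rounding; both assumptions are mine, not necessarily what the original class did:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public final class Money {
        private final BigDecimal amount;

        public Money(String amount) {
            // the String constructor avoids the binary artifacts of new BigDecimal(double)
            this(new BigDecimal(amount));
        }

        private Money(BigDecimal amount) {
            this.amount = amount.setScale(2, RoundingMode.HALF_EVEN);
        }

        public Money plus(Money other) {
            return new Money(amount.add(other.amount));
        }

        public Money times(BigDecimal factor) {
            return new Money(amount.multiply(factor));
        }

        @Override
        public String toString() {
            return amount.toPlainString();
        }

        public static void main(String[] args) {
            Money price = new Money("19.99");
            System.out.println(price.times(new BigDecimal("3"))); // 59.97
        }
    }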

matt b
A: 

Do your own benchmarks and decide based on the results; what "people say" means nothing.
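
For instance, a crude timing loop along these lines; the iteration count and workload are arbitrary, and a serious comparison should use a harness such as JMH to deal with JIT warmup:

    import java.math.BigDecimal;

    public class BigDecimalVsDouble {
        public static void main(String[] args) {
            final int N = 1_000_000;

            // accumulate with primitive double
            long t0 = System.nanoTime();
            double d = 0.0;
            for (int i = 0; i < N; i++) {
                d += 0.001 * i;
            }
            long doubleNanos = System.nanoTime() - t0;

            // the same accumulation with BigDecimal
            t0 = System.nanoTime();
            BigDecimal b = BigDecimal.ZERO;
            BigDecimal step = new BigDecimal("0.001");
            for (int i = 0; i < N; i++) {
                b = b.add(step.multiply(BigDecimal.valueOf(i)));
            }
            long bigNanos = System.nanoTime() - t0;

            System.out.printf("double:     %d ms%n", doubleNanos / 1_000_000);
            System.out.printf("BigDecimal: %d ms%n", bigNanos / 1_000_000);
        }
    }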

JoelFan
+2  A: 

Using Java's double data type for weights in a neural network seems very appropriate. It is a good choice for engineering and scientific applications.

Neural networks are inherently approximate. The precision of BigDecimal would be meaningless in this application, performance impact aside. Reserve BigDecimal primarily for financial applications.
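
To make that concrete, the computation in question is essentially a multiply-accumulate followed by an activation function, and the rounding error of each double operation is dwarfed by the approximation inherent in the model itself. A tiny illustrative sketch (the names and dimensions are made up):

    public class Neuron {
        // weighted sum of inputs plus bias, passed through a sigmoid
        static double activate(double[] inputs, double[] weights, double bias) {
            double sum = bias;
            for (int i = 0; i < inputs.length; i++) {
                sum += inputs[i] * weights[i];
            }
            return 1.0 / (1.0 + Math.exp(-sum));
        }

        public static void main(String[] args) {
            double[] inputs  = {0.5, -1.2, 3.0};
            double[] weights = {0.8,  0.4, -0.1};
            System.out.println(activate(inputs, weights, 0.1));
        }
    }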

erickson
A: 

People don't just use BigDecimal / BigInteger for money. Rather, they use them in applications that need more precision than is available using double or long.

Of course, using BigDecimal and BigInteger comes at the cost of much slower arithmetic operations. For example, big-number addition is O(N), where N is the number of significant digits in the number, and multiplication is O(N^2).

So the way to decide whether to use long / double or their "big" analogs is to look at how much precision your application really needs. Money applications really do need to be able to represent values without losing a single cent. Other applications are equally sensitive to precision.
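
The classic demonstration of the difference is summing a dime ten times: double accumulates binary rounding error, while BigDecimal stays exact.

    import java.math.BigDecimal;

    public class TenDimes {
        public static void main(String[] args) {
            // ten dimes in double: 0.10 has no exact binary representation
            double d = 0.0;
            for (int i = 0; i < 10; i++) {
                d += 0.10;
            }
            System.out.println(d);  // 0.9999999999999999, not 1.0

            // ten dimes in BigDecimal: exact decimal arithmetic
            BigDecimal b = BigDecimal.ZERO;
            BigDecimal dime = new BigDecimal("0.10");
            for (int i = 0; i < 10; i++) {
                b = b.add(dime);
            }
            System.out.println(b);  // 1.00 exactly
        }
    }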

But frankly, I don't think that a neural network application needs 13 decimal digits of precision. The reason your network is not behaving as it should probably has nothing to do with precision. IMO, it is more likely related to the fact that "real" neural networks don't always behave the way they should.

Stephen C
Don't use double for money, ever! There is never a good reason for that!
JoelFan