I'm writing code for a neural network, and I'm wondering whether I should use BigDecimal for it or not. I'm actually somewhat worried that even using double might not yield good results and that I might have to migrate my code to a more efficient language, like C++. I've read in a question here that BigDecimal is about 1000 times slower than double. That's a lot.
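To get a feel for that gap myself, I threw together a rough sketch (not a proper JMH benchmark, so the numbers are only a ballpark) that times a multiply-accumulate loop with both types, since that's basically what a forward pass does all the time. The class name and constants are just made up for the test:

```java
import java.math.BigDecimal;
import java.math.MathContext;

// Rough comparison of a multiply-accumulate loop with double vs. BigDecimal.
// Not a rigorous benchmark (no warmup, no JMH); it only shows the order of
// magnitude of the overhead on my machine.
public class PrecisionBench {
    public static void main(String[] args) {
        final int n = 1_000_000;

        long t0 = System.nanoTime();
        double accD = 0.0;
        for (int i = 0; i < n; i++) {
            accD += 0.001 * i;          // primitive arithmetic, no allocation
        }
        long doubleNanos = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        BigDecimal accB = BigDecimal.ZERO;
        BigDecimal w = new BigDecimal("0.001");
        for (int i = 0; i < n; i++) {
            // every operation allocates a new immutable BigDecimal object
            accB = accB.add(w.multiply(BigDecimal.valueOf(i), MathContext.DECIMAL64));
        }
        long bigDecimalNanos = System.nanoTime() - t1;

        System.out.printf("double:     %d ms (result %.6f)%n",
                doubleNanos / 1_000_000, accD);
        System.out.printf("BigDecimal: %d ms (result %s)%n",
                bigDecimalNanos / 1_000_000, accB.round(MathContext.DECIMAL64));
    }
}
```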
On the other hand, I'm going to be working a lot with decimal numbers, and more precision would always be nice. I can't really tell whether the limited precision of double could cause problems for the network, either. I don't think any of the implementations I've seen around use BigDecimal, so I'm probably not going to either. Although sometimes the network doesn't behave as it should; whether that's a precision error or a problem with its logic, I'm not sure.
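Just to illustrate the kind of precision issue I mean, here's a toy sketch of the rounding drift double shows and BigDecimal avoids; whether drift at this scale actually matters for a network is exactly what I can't tell:

```java
import java.math.BigDecimal;

// Toy demo of rounding drift: 0.1 has no exact binary representation,
// so summing it repeatedly with double drifts, while BigDecimal stays exact.
public class DriftDemo {
    public static void main(String[] args) {
        double d = 0.0;
        BigDecimal b = BigDecimal.ZERO;
        BigDecimal tenth = new BigDecimal("0.1");

        for (int i = 0; i < 10; i++) {
            d += 0.1;                // accumulates binary rounding error
            b = b.add(tenth);        // decimal arithmetic stays exact
        }

        System.out.println("double:     " + d);  // 0.9999999999999999
        System.out.println("BigDecimal: " + b);  // 1.0
    }
}
```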
But I'm wondering: do you guys only use BigDecimal when dealing with money? Any thoughts about this?