I have a bunch of floating point numbers (Java doubles), most of which are very close to 1, and I need to multiply them together as part of a larger calculation. I need to do this a lot.
The problem is that while Java doubles have no problem with a number like:
0.0000000000000000000000000000000001 (1.0E-34)
they can't represent something like:
1.0000000000000000000000000000000001
As a consequence, I lose precision rapidly (the limit seems to be around 1.000000000000001 for Java's doubles).
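To illustrate, a quick check (a minimal sketch; the class name is just for illustration):

```java
public class NearOnePrecision {
    public static void main(String[] args) {
        // 1e-34 is fine on its own, but vanishes when added to 1.0
        System.out.println(1.0E-34);          // prints 1.0E-34
        System.out.println(1.0 + 1.0E-34);    // prints 1.0

        // Near 1.0 the smallest representable step is about 2.2e-16,
        // so roughly 15-16 significant digits are all you get.
        System.out.println(1.0 + 1.0E-15);    // prints 1.000000000000001
        System.out.println(1.0 + 1.0E-16);    // prints 1.0
    }
}
```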
I've considered just storing the numbers with 1 subtracted, so that, for example, 1.0001 would be stored as 0.0001. The problem is that to multiply them together again I have to add the 1 back, and at that point I lose the precision again.
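Roughly what I mean, as a sketch (the variable names and values are made up for illustration):

```java
public class OffsetLoss {
    public static void main(String[] args) {
        // Store each factor x as its offset (x - 1)
        double offsetA = 1.0E-18;   // stands for 1.000000000000000001
        double offsetB = 2.0E-18;   // stands for 1.000000000000000002

        // To multiply, I have to reconstruct the full values first...
        double product = (1.0 + offsetA) * (1.0 + offsetB);

        // ...but 1.0 + 1e-18 already rounds to exactly 1.0, so the
        // information held in the offsets is lost
        System.out.println(product);        // prints 1.0
        System.out.println(product - 1.0);  // prints 0.0
    }
}
```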
To address this I could use BigDecimals to perform the calculation (convert to BigDecimal, add 1.0, then multiply), and then convert back to doubles afterwards, but I have serious concerns about the performance implications of this.
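Something along these lines is what I have in mind (a minimal sketch; the method name, the 50-digit MathContext, and the offsets array are just illustrative assumptions):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class BigDecimalProduct {
    // Multiply factors stored as (x - 1) offsets via BigDecimal,
    // then convert the result back to a double.
    static double multiplyOffsets(double[] offsets) {
        MathContext mc = new MathContext(50);  // ample digits for the intermediate work
        BigDecimal product = BigDecimal.ONE;
        for (double offset : offsets) {
            // BigDecimal has no trouble representing 1 + 1e-34 exactly
            BigDecimal factor = BigDecimal.ONE.add(new BigDecimal(offset));
            product = product.multiply(factor, mc);
        }
        return product.doubleValue();
    }
}
```

This works, but it allocates a new BigDecimal per factor and per multiplication, which is exactly the overhead I'm worried about at this scale.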
Can anyone see a way to do this that avoids using BigDecimal?
Edit for clarity: This is for a large-scale collaborative filter, which uses a gradient descent optimization algorithm. Accuracy is an issue because the collaborative filter often deals with very small numbers (such as the probability of a person clicking on an ad for a product, which may be 1 in 1000 or 1 in 10000).
Speed is an issue because the collaborative filter must be trained on tens of millions of data points, if not more.