views:

218

answers:

1

Hello,

I'm looking to compute the nth digit of Pi in a low-memory environment. As I don't have decimals available to me, this integer-only BBP algorithm in Python has been a great starting point. I only need to calculate one digit of Pi at a time. How can I determine the lowest I can set D, the "number of digits of working precision"?
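For concreteness, here is a sketch of the kind of integer-only routine I mean (my own minimal reconstruction, not the exact script I'm using; the tail-term count `n + D + 3` is a guess that works for small n):

```python
D = 4  # hex digits of working precision

def pi_digit(n):
    """Return the n-th fractional hex digit of pi (1-indexed), computed
    from the BBP sum in integer arithmetic scaled by 16**D."""
    scale = 16 ** D

    def s(j):
        # frac(16**(n-1) * sum_k 1/(16**k * (8k+j))), scaled to an integer
        total = 0
        for k in range(n):  # integral powers: use modular exponentiation
            total += pow(16, n - 1 - k, 8 * k + j) * scale // (8 * k + j)
        for k in range(n, n + D + 3):  # a few fractional tail terms
            total += scale // (16 ** (k - n + 1) * (8 * k + j))
        return total

    x = (4 * s(1) - 2 * s(4) - s(5) - s(6)) % scale
    return x >> (4 * (D - 1))  # most significant hex digit of the scaled frac
```

Each floor division throws away up to one unit of the scaled value, which is exactly the kind of error I'm asking about.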

D=4 gives me many correct digits, but a few digits will be off by one. For example, computing digit 393 with a precision of 4 gives me 0xafda, from which I extract the digit 0xa. However, the correct digit is 0xb.

No matter how high I set D, it seems that testing a sufficient number of digits finds one where the formula returns an incorrect value.

I've tried upping the precision when the digit is "close" to another, e.g. 0x3fff or 0x1000, but cannot find any good definition of "close"; for instance, calculating digit 9798 gives me 0xcde6, which is not very close to 0xd000, but the correct digit is 0xd.

Can anyone help me figure out how much working precision is needed to calculate a given digit using this algorithm?

Thank you,

edit
For Reference:

precision (D)   first wrong digit
-------------   ------------------
3               27
4               161
5               733
6               4329
7               21139
8+              ???

Note that I am calculating one digit at a time, e.g.:


D = 3  # or whatever precision I'm testing
for i in range(1, n):
    digit = pi(i)  # extracts most significant digit from integer-only BBP result
    if digit != HARDCODED_PI[i]:
        print("non-matching digit #%d, got %x instead of %x" % (i, digit, HARDCODED_PI[i]))
+1  A: 

No matter how high I set D, it seems that testing a sufficient number of digits finds one where the formula returns an incorrect value.

You will always get an error if you are testing a sufficient number of digits - the algorithm does not use arbitrary precision, so rounding errors will show up eventually.

With the unbounded iteration that breaks when the digit stops changing, it is going to be difficult to determine analytically the minimum precision required for a given digit position.

Your best bet is to determine it empirically, ideally by comparing against a known correct source and increasing the digits of precision until you get a match. If a correct source is not available, start with your maximum precision (which I guess is 14, since the 15th digit will almost always contain a rounding error).
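When no correct source is available, a cheap empirical check is to recompute the digit at increasing precision until two consecutive precisions agree. A sketch of that idea (the helper and the `pi_digit(n, D)` signature are hypothetical - I'm assuming your routine can be parameterised on its precision D):

```python
def digit_with_auto_precision(pi_digit, n, d_start=3, d_max=14):
    # Recompute digit n with increasing working precision D until two
    # consecutive precisions agree. Agreement is good evidence, not
    # proof - both results could still be wrong in the same way.
    prev = None
    for D in range(d_start, d_max + 1):
        cur = pi_digit(n, D)
        if cur == prev:
            return cur
        prev = cur
    return prev  # best effort at maximum precision
```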

EDIT: To be more precise, the algorithm includes a loop - from 0..n, where n is the digit to compute. Each iteration of the loop will introduce a certain amount of error. After looping a sufficient number of times, the error will encroach into the most significant digit that you are computing, and so the result will be wrong.

The wikipedia article uses 14 digits of precision, which is sufficient to correctly compute the 10**8th digit. As you've shown, fewer digits of precision lead to errors occurring earlier, since there is less precision and the error becomes visible after fewer iterations. The net result is that the value of n up to which we can correctly compute a digit becomes lower with fewer digits of precision.

If you have D hex digits of precision, that's D*4 bits. With each iteration, an error of up to 0.5 bits is introduced in the least significant bit, so after 2 iterations the LSB may already be wrong. During summation these errors accumulate. Once the accumulated error reaches the least significant bit of the most significant digit, the single digit you extract will be wrong. Roughly speaking, that is when N > 2*16**(D-1), i.e. N > 16**(D-0.75).

Empirically extrapolating your data, an approximate fit is N ≈ 2**(2.05*D), although with so few data points this may not be an accurate predictor.
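The fitted exponent can be checked directly against the figures from your table (just arithmetic on the data above; note the per-digit exponent climbs toward ~2 as D grows, so the fit is driven mostly by the larger data points):

```python
import math

# (precision D, first wrong digit N) from the table in the question
data = {3: 27, 4: 161, 5: 733, 6: 4329, 7: 21139}

# If N ~ 2**(c*D), then c = log2(N)/D for each data point
for D, N in sorted(data.items()):
    print(D, round(math.log2(N) / D, 2))
```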

The BBP algorithm you've chosen is iterative, so it takes progressively longer to compute digits later in the sequence: computing digit n takes O(n) steps, and computing all of digits 0..n takes O(n^2) steps.

The wikipedia article gives a formula for calculating the n'th digit directly, using modular exponentiation and rational numbers rather than carrying a running sum from digit to digit. Because each digit is computed independently, it does not suffer the same accumulation of error across digits as the iterative algorithm, and the working precision it needs stays fixed rather than growing with n.
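A sketch of that direct digit-extraction approach, along the lines of the standard BBP code (names are mine; note this version uses floats for the tail, so it illustrates the technique rather than being a drop-in for an integer-only environment):

```python
def bbp_hex_digit(d):
    """Return the d-th fractional hex digit of pi (1-indexed), without
    iterating over earlier digits, via modular exponentiation."""
    def series(j):
        # frac(16**(d-1) * sum_k 1/(16**k * (8k+j)))
        s = 0.0
        for k in range(d):  # integral powers: three-argument pow keeps numbers small
            s = (s + pow(16, d - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        # tail: k >= d, terms shrink by a factor of 16 each step
        k = d
        while True:
            term = 16.0 ** (d - 1 - k) / (8 * k + j)
            if term < 1e-17:
                break
            s += term
            k += 1
        return s

    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(16 * x)
```

The key step is `pow(16, e, m)`, which computes 16**e mod m in O(log e) multiplications without ever forming the huge power.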

mdma
Although I'm testing many digits, I'm calculating each digit one at a time. Are you saying that there's no way to know how much precision is needed to get one correct digit at a given location?
brainfsck
@brainfsck: you could certainly use **extrapolation** on the data you already have... it might not be easy though.
ANeves
I'm just looking into this now, to see if I can explain where the rounding error is happening. But please be aware that the script you are using is not intended to produce sequential digits - it loops from 0..n - so computing the n'th digit takes time proportional to n, which is far from ideal. Further down, the wikipedia page has a true spigot algorithm for generating digits one by one - can you use that?
mdma
Thanks for your help...I chose the BBP algorithm because time was not an issue, only memory.
brainfsck
You can still use the BBP formula - just a different implementation - one that doesn't rely on iteration. The python code further down the page on wikipedia doesn't use iteration, so won't suffer from the rounding errors.
mdma