So I know about floating-point precision (and how things like 1.1 can't be expressed exactly in binary) and all that, but I'm wondering: how, then, do math-related libraries implement infinite precision? In other words, how would you represent, for example, 1.1 accurately in binary? Just a brief description would be great; I can figure out the exact details myself. Thanks. :)

+1  A: 

Mathematical libraries with infinite precision are not implemented; it cannot be done. The number 1/3 cannot be represented in a finite number of bits other than as a fraction. Transcendental numbers like pi and e cannot be represented completely in any fashion.

On the other hand, it is possible to create math libraries with huge precision. It's all a matter of allocating enough bits for the mantissa of the floating point value.
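For example, here's a rough sketch of "throwing more bits at it" in Python, assuming the third-party mpmath library (the answer itself doesn't name any particular library):

    # Arbitrary-precision binary floats: just allocate a bigger mantissa.
    from mpmath import mp, mpf

    mp.prec = 200       # 200 bits of mantissa instead of the usual 53
    x = mpf(1) / 3      # still not exact -- the error is just far smaller
    print(x)            # 0.33333... to roughly 60 decimal digits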

paxdiablo
Sorry, I don't know the terminology, I just used "infinite precision" because that's what it seems like.
musicfreak
So all they do is just add more precision? That's it? Throw more bits at it?
musicfreak
By definition, you can't tell that infinite precision isn't implemented; you can't complete the evaluation! :-)
McWafflestix
No, not with an IEEE754 double (~53 bits for mantissa) - you get 0.100000000000000005551115123125782702118158. Of course, that's an error of 1 in 2x10^16 so it's close enough for most purposes.
paxdiablo
@musicfreak, that's exactly what they do (throw more bits at it). IEEE754 doubles have ~53 bits of mantissa, which is the bit that affects precision. That still won't help with some numbers, and no amount of extra bits will allow pi or e to be represented correctly. But it matters less than you think. Even pi at 353/113 (?) is accurate enough to calculate the circumference of the planet to within the length of a car. 0.1 as a double has an error of 5 parts per one hundred million billion (or something like that).
paxdiablo
Okay, that makes sense. So I guess what I'm looking for is a fraction library, then. ;) Thanks.
musicfreak
irrational numbers "can" be represented - you just did ("pi" and "e") :)
Jeffrey Kemp
@pax: The approximation to pi you were looking for is 355/113, which has a relative error 10^5 times smaller.
Novelocrat
+4  A: 

There are no infinite precision libraries, but there are arbitrary precision libraries. For details of how these are implemented, read some documentation :-)

To represent 1.1 exactly in binary, floating point can't be used, as you correctly point out. It can be represented if you store the integral part (1) as an integer and the fractional part (.1) as another integer, and then create the logic to deal with these structures. Alternatively, it could be stored as a fraction (11/10), with both the numerator and the denominator stored as integers.
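For illustration, here's a minimal sketch of that fraction approach in Python (a toy Frac type invented for this example, not any particular library):

    from math import gcd

    class Frac:
        # Toy exact fraction: numerator and denominator stored as integers.
        def __init__(self, num, den):
            g = gcd(num, den)                  # keep the fraction reduced
            self.num, self.den = num // g, den // g

        def __add__(self, other):
            return Frac(self.num * other.den + other.num * self.den,
                        self.den * other.den)

        def __mul__(self, other):
            return Frac(self.num * other.num, self.den * other.den)

        def __repr__(self):
            return f"{self.num}/{self.den}"

    print(Frac(11, 10) + Frac(11, 10))   # 11/5, i.e. 1.1 + 1.1 = 2.2 exactly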

David Johnstone
Ah, that's what I was looking for. Thanks. :) One more thing though: how efficient (or inefficient) would such a "fraction" library be compared to just using floats?
musicfreak
Fractions aren't too bad, but they're still limited in precision based on the type of the numerator and denominator. If you're going to go to the effort of doing big decimals and then layer code to handle fractions (GCD, LCM and so on) on top of that, you may as well just go for huge-precision floats. And fractions still have the problem of not being able to represent transcendentals.
paxdiablo
They aren't as efficient as floats provided your CPU was designed to work with floats. The CPU can just multiply two floating point numbers, whereas it would have to do a couple of multiplications and possibly a GCD if you multiply two fractions.
David Johnstone
I'm not going to be working with huge, complex (in the English sense) numbers, I just need some way to represent relatively small fractions as, well, fractions. I thought there was a way to do this with floats but you guys proved me wrong. Thanks for the help!
musicfreak
You *can* do it with floats, @musicfreak, you just have to watch out for equalities: so, instead of "if (x == 0.1) ...", you use "if (fabs (x - 0.1) < 1e-6) ...". Then the inconsequential errors don't affect your code.
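Something like this in Python shows the same trick:

    x = 0.1 + 0.2
    print(x == 0.3)              # False -- binary rounding error
    print(abs(x - 0.3) < 1e-6)   # True  -- tolerance test, as suggested above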
paxdiablo
Well, I meant *easily* do it. ;)
musicfreak
Some arbitrary precision libraries store numbers in base 10 rather than base 2 for this very reason. It's hard to know how fast or slow something is without testing it specifically, but as a general rule, modern processors do math so fast that the only thing that really matters is memory access patterns.
Jason Watkins
A: 

While Pax is totally right here when we're talking about floating-point numbers, I believe there is a solution, but it's very inefficient.
You can use a string to represent your number; strings do not lose precision.
Whenever you have a sum like "0.0001" + "0.1", you iterate over both strings and convert only the current position to an int.
Step 1:
0 + 0 = 0 -> convert to string and assign to data[0].
Step 2:
0 + 1 = 1 -> convert to string and assign to data[1].
Step 3:
iter > "0.1".length() -> stop.
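A rough Python sketch of this digit-by-digit idea (a toy helper for non-negative decimal strings only):

    def add_strings(a, b):
        # Align the two numbers: pad the fractional and integral parts.
        ai, _, af = a.partition(".")
        bi, _, bf = b.partition(".")
        frac = max(len(af), len(bf))
        af, bf = af.ljust(frac, "0"), bf.ljust(frac, "0")
        width = max(len(ai), len(bi))
        ai, bi = ai.rjust(width, "0"), bi.rjust(width, "0")

        # Add digit by digit from the right, carrying as we go.
        carry, out = 0, []
        for da, db in zip(reversed(ai + af), reversed(bi + bf)):
            carry, d = divmod(int(da) + int(db) + carry, 10)
            out.append(str(d))
        if carry:
            out.append(str(carry))

        result = "".join(reversed(out))
        return result[:-frac] + "." + result[-frac:] if frac else result

    print(add_strings("0.0001", "0.1"))   # 0.1001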

the_drow
You will still lose precision (although it will take longer) because of limited memory available. (1/3)*3, even with strings, will result in a precision loss. You may as well allocate that "limited" memory to an IEEE754 2G (12 gigabit) floating point value. Calculations would be faster and you'd get better precision (although you could only hold one float in memory at a time :-)
paxdiablo
Well, if he needs infinite numbers, you can use a geometric sequence to get all of the numbers. You keep them in a string. I assumed this is a top-notch computer since he needs something very special.
the_drow
(1/3)*3 doesn't have to lose precision if the 1/3 is kept as a numerator and denominator (which is what some libraries do). What you have is (1/3) * (3/1) and the library multiplies the numerators together and the denominators together and gets 3/3 which goes to 1. The parens don't matter because the {numerator,denominator} pair is considered to be a number. Of course, eventually you have to resolve something like this: (1/3)+1. And at that point you have to make a decision on how many digits to keep around.
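A quick demonstration of that with Python's built-in fractions module:

    from fractions import Fraction

    print(Fraction(1, 3) * 3)          # 1 -- no precision lost
    print(float(Fraction(1, 3) + 1))   # 1.3333333333333333 -- rounded only at output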
Nosredna
Of course you could store (1/3)+1 as 4/3 and further skirt the issue.
Rick Regan
Yes, you can. But eventually you may hit something that forces an evaluation, or need to print an answer to some given amount of precision.
Nosredna
+1  A: 

There are certain geometric algorithms that depend on exact arithmetic, so if you look in the CGAL library you will find a variety of exact numeric types that are "closed" under various operations. That is, there is no way to use the supported operations to produce a result that can't be represented exactly.

Some examples:

  • Integers are closed under addition and multiplication.

  • Rationals are also closed under division, except for the special case of division by zero. They can be represented as a pair of integers. See also the rational number functions in GMP: e.g. 1.1 = 11/10 can be represented as (11, 10).

  • A number type that is also closed under square root (a sketch follows this list).
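As a sketch of that last point, one way to be closed under square root is to keep the root symbolic, e.g. with the third-party sympy library (illustrative only; not how CGAL itself works):

    import sympy

    r = sympy.sqrt(2)   # kept as a symbol, never rounded to a float
    print(r * r)        # 2 -- exact
    print(r + 1)        # sqrt(2) + 1 -- still exact, evaluated only on demand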

Paul Harrison
A certain range of integers may be representable within a computer, but not *all* of them (storage limitations). My friends in the maths and physics departments at the local Uni would flay me alive if I made a statement like that :-)
paxdiablo
You are being pedantic. Everybody knows that, but it would be too tedious to reiterate "up to limited memory" every time we define a data type.
starblue
It's not pedantic at all and not *everyone* knows it, hence this and many other questions. SO isn't just for people who understand the limits of machine representation of integers or floats, it's for all people. Your claim that addition on integers has "no way to ... produce a result that can't be represented exactly" only makes sense if you specify a restricted range of integers. I'm not saying they're not workable (floats and doubles are workable for most purposes), just that numbers like the square root of two don't become exact simply by throwing a finite number of digits at them.
paxdiablo
Sorry, I have mathematics on the brain a bit at the moment. The statement that integers are closed under addition is perfectly correct as a mathematical statement, but of course a computer might be limited to, say, 2 to the power of ten billion or so. The square root of two can be represented to infinite accuracy as follows: "the square root of 2". This is more or less what CGAL does.
Paul Harrison
+2  A: 

If you really mean infinite precision there are two options:

  • Use some form of lazy computation. Then you can ask a number for as much precision as you like "after" executing the computation (since it's lazy, the work is actually only done then). The drawback is that this is very inefficient. You can do this in a language like Haskell using a special number system where representations overlap, e.g. base 2 with digits -1, 0, 1. The usual representation is unsuitable because, at say 1, you need infinite precision to decide between outputting 0 for 0.999... and 1 for 1.000... (a toy sketch follows this list).

  • Do computation symbolically. Represent integers, rationals, roots, etc. exactly. This is needed if you want to decide equality, but it is also rather inefficient and limited to special cases.
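A toy Python sketch of the lazy idea, producing digits of a rational number on demand:

    from itertools import islice

    def lazy_digits(num, den):
        # Lazily yield decimal digits of num/den (assumes 0 <= num < den).
        while True:
            num *= 10
            digit, num = divmod(num, den)
            yield digit

    # Ask for exactly as much precision as you need, when you need it.
    print(list(islice(lazy_digits(1, 3), 10)))   # [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]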

starblue
Symbolic computation may be inefficient for number crunching, but it is exactly what computer algebra systems (like Maple, Mathematica, Reduce, Axiom, ...) do.
Jochen Walter
+1  A: 

You could also represent numbers in decimal and do decimal arithmetic. The underlying representation is binary in the sense that each digit is represented with a binary code. Each digit -- whether to the left or right of the decimal point -- is treated as an integer. Arithmetic is then done "manually", digit by digit.

One example of a decimal-based library is BCMath in PHP.
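Python's built-in decimal module is another example of this approach:

    from decimal import Decimal

    print(Decimal("1.1") * 3)   # 3.3 -- exact, since digits are stored in base 10
    print(1.1 * 3)              # 3.3000000000000003 with binary floats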

Rick Regan