Most mathematicians agree that e ** (πi) + 1 = 0. However, most floating point implementations disagree. How well can we settle this dispute?

I'm keen to hear about different languages and implementations, and various methods to make the result as close to zero as possible. Be creative!

+3  A: 

Here's a short list of implementations and languages I've tried. It's sorted by closeness to zero:

  • Scheme: (+ 1 (make-polar 1 (atan 0 -1)))
    • 0.0+1.2246063538223773e-16i (Chez Scheme, MIT Scheme)
    • 0.0+1.22460635382238e-16i (Guile)
    • 0.0+1.22464679914735e-16i (Chicken with numbers egg)
    • 0.0+1.2246467991473532e-16i (MzScheme, SISC, Gauche, Gambit)
    • 0.0+1.2246467991473533e-16i (SCM)
  • Common Lisp: (1+ (exp (complex 0 pi)))
    • #C(0.0L0 -5.0165576136843360246L-20) (CLISP)
    • #C(0.0d0 1.2246063538223773d-16) (CMUCL)
    • #C(0.0d0 1.2246467991473532d-16) (SBCL)
  • Perl: use Math::Complex; Math::Complex->emake(1, pi) + 1
    • 1.22464679914735e-16i
  • Python: from cmath import exp, pi; exp(complex(0, pi)) + 1
    • 1.2246467991473532e-16j (CPython)
  • Ruby: require 'complex'; Complex::polar(1, Math::PI) + 1
    • Complex(0.0, 1.22464679914735e-16) (MRI)
    • Complex(0.0, 1.2246467991473532e-16) (JRuby)
  • R: complex(argument = pi) + 1
    • 0+1.224606353822377e-16i
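It's no coincidence that nearly every implementation above lands on the same magic 1.2246…e-16: exp(iπ) is typically evaluated as cos(p) + i·sin(p), where p is the double closest to π, and sin(p) is almost exactly the amount by which p misses the true π. A minimal sketch of this in Python (assuming CPython with IEEE doubles):

```python
import cmath
import math

# The double nearest to pi undershoots the true value by about 1.2246e-16,
# and since sin(pi - eps) ~ eps, sin(math.pi) exposes that exact error:
residual = math.sin(math.pi)
print(residual)                      # ~1.2246e-16

# The same residual reappears as the imaginary part of e**(i*pi) + 1,
# because cmath.exp(1j*x) is evaluated as cos(x) + 1j*sin(x):
print(cmath.exp(1j * math.pi) + 1)   # ~1.2246e-16j
```

So the implementations that report a smaller residual (CLISP, Guile, CMUCL, R) are presumably using a more precise internal value of π or higher-precision intermediates, not a better exponential.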
Chris Jester-Young
+1 for all of the examples.
Ken Paul
+3  A: 

Is it possible to settle this dispute?

My first thought is to look to a symbolic language, like Maple. I don't think that counts as floating point though.

In fact, how does one represent i (or j for the engineers) in a conventional programming language?

Perhaps a better example is sin(π) = 0? (Or have I missed the point again?)

Ryan Fox
You are missing an end parenthesis in your Maple link.
@dalle: Agree, although it's a limitation of Markdown not easily overcome.
Chris Jester-Young

@Ryan: It is, in fact, very much like the question of whether sin π = 0. Again, no FP implementation I have here shows it to be 0, thus, I suppose the "dispute" cannot fully be resolved 100%. But since we're talking about FP, approximate solutions will have to do. :-P

Many programming languages have a complex-number class that simply holds two doubles. To me, that's good enough, if the language also has facilities for doing (some sort of) overloaded operations on them (e.g., the use of cmath in Python is okay, despite not being strictly an overload).
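To make "two doubles plus overloading" concrete, here's a hedged toy sketch in Python (the class and names are made up for illustration; real code should just use Python's built-in complex type):

```python
import math

class MyComplex:
    """Toy complex number: just two doubles plus operator overloading.
    Illustrative only; not a real library class."""
    def __init__(self, re=0.0, im=0.0):
        self.re, self.im = re, im

    @classmethod
    def polar(cls, r, theta):
        # r * e**(i*theta), via cos/sin on plain doubles
        return cls(r * math.cos(theta), r * math.sin(theta))

    def __add__(self, other):
        if isinstance(other, (int, float)):
            other = MyComplex(other)
        return MyComplex(self.re + other.re, self.im + other.im)

    def __repr__(self):
        return f"({self.re}+{self.im}j)"

# e**(i*pi) + 1, via the toy class -- same residual as everywhere else:
print(MyComplex.polar(1.0, math.pi) + 1)
```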

I haven't used Maple, so I can't comment on that. :-P

Chris Jester-Young
+3  A: 

I agree with Ryan: you would need to move to another number representation system. The solution is outside the realm of floating-point math, because you need π to be represented as an infinitely long decimal, so any limited-precision scheme just isn't going to work (at least not without employing some kind of fudge factor to make up the lost precision).

Dana the Sane
+2  A: 

@Ryan Fox

In fact, how does one represent i (or j for the engineers) in a conventional programming language?

Native complex data types are far from unknown. Fortran had them by the mid-sixties, and the OP exhibits a variety of other languages that support them in his followup.

And complex numbers can be added to other languages as libraries (with operator overloading they even look just like native types in the code).

But unless you provide a special case for this problem, the "non-agreement" is just an expression of imprecise machine arithmetic, no? It's like complaining that

float r = 2.0f / 3.0f;  /* note: plain 2/3 would be integer division, i.e. 0 */
float s = 3.0f * r;
float t = s - 2.0f;

ends with (t != 0) (at least if you use a dumb enough compiler)...
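A Python analogue of the same effect, using the classic decimal-fraction case (note the caveat above is real: with round-to-nearest arithmetic, 3 * (2.0/3.0) can actually round back to exactly 2, so a good compiler may hide the error in that particular example):

```python
# 0.1, 0.2 and 0.3 have no exact binary representation, and their
# individual rounding errors do not cancel out:
t = (0.1 + 0.2) - 0.3
print(t)           # a tiny nonzero number
print(t == 0.0)    # False
```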

+3  A: 

Your question seems a little odd to me, as you seem to be suggesting that the floating-point math is implemented by the language. That's generally not true: the FP math is done by a floating-point unit in hardware. But software or hardware, floating point will always be inexact. That's just how floats work.

If you need better precision, you need to use a different number representation, just as you would for integer math on numbers that don't fit in an int or long. Some languages have libraries for that built in (Java has BigInteger and BigDecimal, for example), but you'd have to explicitly use those libraries instead of native types, and the performance would be (sometimes significantly) worse than with floats.
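For instance, Python's standard fractions module gives exact arithmetic as long as everything stays rational, a rough analogue of BigDecimal for ratios (π, of course, still can't be represented this way):

```python
from fractions import Fraction

# Exact rational arithmetic: no rounding ever happens,
# so the cancellation is perfect.
r = Fraction(2, 3)
t = 3 * r - 2
print(t)         # 0
print(t == 0)    # True
```

The trade-off is speed: every operation manipulates arbitrary-size integers rather than a single hardware word.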

+1  A: 

In fact, how does one represent i (or j for the engineers) in a conventional programming language?

In a language that doesn't have a native representation, it is usually added via OOP: a Complex class represents i and j, with operator overloading to properly deal with operations involving other Complex numbers and/or the number primitives native to the language.

E.g., C++'s <complex>.


Excellent answers all; thank you. (I believe I've upmodded every answer here so far.)

@Dana: Bingo! My question indeed is to see which implementations employ some sort of fudge factor to make the answer closer to 0 than others. One way to do this, for example, is to have a special quantity π, much like many languages have a special quantity i for complex numbers; numbers are then stored in terms of multiples of 1, i, and π, as appropriate.

Of course this probably shouldn't be built into the central language, but a library module to do this transparently would be nice. And it will make a variety of trigonometric operations more precise (if special versions are provided by said library module).
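A toy sketch of that idea in Python (all names here are hypothetical; a real computer algebra system would handle sums, products, i, e, and much more):

```python
from fractions import Fraction
import math

class PiMultiple:
    """Toy symbolic quantity b*pi, with b an exact rational coefficient."""
    def __init__(self, coeff):
        self.coeff = Fraction(coeff)

    def sin(self):
        # sin(b*pi) is exactly 0 whenever b is an integer,
        # so return the exact answer symbolically in that case.
        if self.coeff.denominator == 1:
            return 0
        # Otherwise fall back to ordinary floating point.
        return math.sin(float(self.coeff) * math.pi)

    def __float__(self):
        return float(self.coeff) * math.pi

print(PiMultiple(1).sin())   # exactly 0, with no rounding error
```

The point is that the "fudge factor" lives in one place: the library knows algebraic facts about π that the FP hardware cannot.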

Chris Jester-Young
Update: I've unaccepted Dana's answer in favour of lassevk's one; apologies.
Chris Jester-Young
+8  A: 

It's not that most floating-point implementations disagree; it's just that they cannot achieve the accuracy necessary to give a 100% answer. And the correct answer is that they can't.

π is an infinite, non-repeating sequence of digits that nobody has been able to denote by anything other than a symbolic representation, and e^x is the same; thus the only way to get 100% accuracy is to go symbolic.

Lasse V. Karlsen
Yes, I totally agree with the "going symbolic" route, that if you are doing serious trig, then numbers should be maintained with coefficients of 1, i, and π (and e, in some cases). I'm not enough of a mathematician to say if more cases need covering. :-)
Chris Jester-Young
Your point elaborates my point (see my comment dated Aug 25) better than other responses, so I'm going to accept your answer as "best".
Chris Jester-Young
+1  A: 

Numerical Analysis teaches us that you can't rely on the precise value of small differences between large numbers.

This doesn't just affect the equation in question here, but can bring instability to everything from solving a near-singular set of simultaneous equations, through finding the zeros of polynomials, to evaluating log(~1) or exp(~0) (I have even seen special functions for evaluating log(x+1) and (exp(x)-1) to get round this).
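Those special functions survive today as log1p and expm1, in C's math library and, for instance, Python's math module. A quick demonstration of why they exist, for a small x:

```python
import math

x = 1e-15

# Naive forms lose most of their significant digits:
naive_log = math.log(1.0 + x)    # 1.0 + x rounds first, losing information
naive_exp = math.exp(x) - 1.0    # the subtraction cancels leading digits

# The dedicated functions keep full relative precision for small x:
good_log = math.log1p(x)         # ~ x - x**2/2, accurate
good_exp = math.expm1(x)         # ~ x + x**2/2, accurate

print(naive_log, good_log)       # naive value is off by ~10% here
print(naive_exp, good_exp)
```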

I would encourage you not to think in terms of zeroing the difference -- you can't -- but rather in doing the associated calculations in such a way as to ensure the minimum error.

I'm sorry, it's 43 years since I had this drummed into me at uni, and even if I could remember the references, I'm sure there's better stuff around now. I suggest this as a starting point.

If that sounds a bit patronising, I apologise. My "Numerical Analysis 101" was part of my Chemistry course, as there wasn't much CS in those days. I don't really have a feel for the place/importance numerical analysis has in a modern CS course.

Yes, those functions are called expm1 and log1p, and I totally agree with using them when the quantities are small enough to make a difference. My question was more of a joke question, but it does lead to some serious topics; thanks for expanding on them.
Chris Jester-Young
+1  A: 

It's a limitation of our current floating-point computational architectures. Floating-point arithmetic is only an approximation of irrational constants like e or π (or of anything beyond the precision your bits allow). I really enjoy these numbers because they defy classification, and appear to have greater entropy(?) than even the primes, which are a canonical series. A ratio that defies exact numerical representation: sometimes simple things like that can blow a person's mind (I love it).

Luckily, entire languages and libraries can be dedicated to precision trigonometric functions by using notational concepts (similar to those described by Lasse V. Karlsen).

Consider a library/language that describes concepts like e and pi in a form that a machine can understand. Does a machine have any notion of what a perfect circle is? Probably not, but we can create an object, circle, that satisfies all the known features we attribute to it (constant radius, relationship of radius to circumference 2*pi*r = C). An object like pi is only described by that ratio; r and C can be numeric objects described to whatever precision you want to give them. And e can be defined as "the unique real number such that the value of the derivative (slope of the tangent line) of the function f(x) = e^x at the point x = 0 is exactly 1" (from Wikipedia).

Fun question.

Mark Essel