Yeah, I meant to say 80-bit. That's not a typo...

My experience with floating point variables has always involved 4-byte multiples, like singles (32 bit), doubles (64 bit), and long doubles (which I've seen referred to as either 96-bit or 128-bit). That's why I was a bit confused when I came across an 80-bit extended precision data type while I was working on some code to read and write AIFF (Audio Interchange File Format) files: an extended precision variable was chosen to store the sampling rate of the audio track.
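
In case it's useful context, here's roughly how that sample-rate field gets decoded (a sketch of my own, not from any particular library, assuming the usual layout: a 10-byte big-endian value with 1 sign bit, a 15-bit exponent biased by 16383, and a 64-bit significand with an explicit integer bit):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Decode a 10-byte big-endian 80-bit extended value into a double.
       Layout assumed: 1 sign bit, 15-bit exponent (bias 16383), 64-bit
       significand with an explicit integer bit. Inf/NaN handling omitted. */
    static double extended_to_double(const uint8_t b[10])
    {
        int      sign = b[0] >> 7;
        int      exp  = ((b[0] & 0x7F) << 8) | b[1];
        uint64_t mant = 0;
        for (int i = 2; i < 10; i++)
            mant = (mant << 8) | b[i];

        if (exp == 0 && mant == 0)
            return sign ? -0.0 : 0.0;

        /* value = significand * 2^(exp - bias - 63) */
        double value = ldexp((double)mant, exp - 16383 - 63);
        return sign ? -value : value;
    }

    int main(void)
    {
        /* 44100 Hz as 80-bit extended: 0x400E AC44 0000 0000 0000 */
        const uint8_t rate[10] = {0x40, 0x0E, 0xAC, 0x44, 0, 0, 0, 0, 0, 0};
        printf("sample rate = %.1f Hz\n", extended_to_double(rate));
        return 0;
    }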

When I skimmed through Wikipedia, I found the link above along with a brief mention of 80-bit formats in the IEEE 754-1985 standard summary (but not in the IEEE 754-2008 standard summary). It appears that on certain architectures "extended" and "long double" are synonymous.

One thing I haven't come across are specific applications that make use of extended precision data types (except for, of course, AIFF file sampling rates). This led me to wonder:

  • Has anyone come across a situation where extended precision was necessary/beneficial for some programming application?
  • What are the benefits of an 80-bit floating point number, other than the obvious "it's a little more precision than a double but fewer bytes than most implementations of a long double"?
  • Is its applicability waning?
A: 

I have a friend who is working on exactly that. He is writing a library to handle floating-point numbers that are gigabytes in size. Of course, it's related to scientific computing (plasma calculations), and probably only that kind of computing works with numbers this big...

Diones
+2  A: 

Wikipedia explains that an 80-bit format can represent an entire 64-bit integer without losing information. Thus the floating-point unit of the CPU can be used to implement multiplication and division for integers.
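
A quick sketch of what that means (my own illustration, assuming long double maps to the 80-bit format, as it does with most x86 compilers): a value that needs all 64 significand bits survives a round trip through long double but not through a 53-bit double.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* This value needs all 64 significand bits to be stored exactly. */
        uint64_t n = (UINT64_C(1) << 63) + 1;

        long double ld = (long double)n;  /* exact if long double is 80-bit */
        double      d  = (double)n;       /* 53-bit significand: low bit lost */

        printf("long double round-trip exact? %s\n",
               (uint64_t)ld == n ? "yes" : "no");
        printf("double rounds to:             %.0f\n", d);
        return 0;
    }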

Nathan Kitchen
I see, so an 80-bit FPU can pull double-duty for up to 64-bit integer arithmetic. Cool.
gnovice
+7  A: 

Intel's FPUs use the 80-bit format internally to get more precision for intermediate results.

That is, you may have 32-bit or 64-bit variables, but when they are loaded into the FPU registers, they are converted to 80-bit; the FPU then (by default) performs all calculations in 80 bits; after the calculation, the result is stored back into a 32-bit or 64-bit variable.

BTW - A somewhat unfortunate consequence of this is that debug and release builds may produce slightly different results: in the release build, the optimizer may keep an intermediate variable in an 80-bit FPU register, while in the debug build it will be stored in a 64-bit variable, causing loss of precision. You can avoid this by using 80-bit variables, or by using an FPU switch (or compiler option) to perform all calculations in 64 bits.
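
A small sketch of the precision difference (not from any particular codebase; it assumes an x86 target where long double is the 80-bit format): an increment below the rounding step of a 64-bit double survives as long as the sum stays in extended precision, and disappears once it is forced out to a double.

    #include <stdio.h>

    int main(void)
    {
        double big  = 1e16;   /* ulp of a double is 2.0 at this magnitude */
        double tiny = 1.0;

        /* Forcing the intermediate into a 64-bit double rounds tiny away;
           keeping it in an 80-bit long double preserves it. */
        volatile double as_double = big + tiny;
        long double     as_ext    = (long double)big + tiny;

        printf("stored as double:      %+g\n", (double)as_double - big);
        printf("kept in 80-bit format: %+Lg\n", as_ext - big);
        return 0;
    }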

oefe
Sounds like one of those "side effects" involving "subtle differences in the behaviour of the arithmetic" that the Wikipedia page mentions. =) So, since the IEEE 754-2008 specs mention 128-bit "quad" formats, should we expect 80-bit FPUs to get phased out soon?
gnovice
I don't know where the standard is heading, but I would expect that at least Intel will keep 80-bit support for a long time to come to maintain compatibility, even if they add 128-bit support.
oefe
A: 

I used 80-bit for some pure math research. I had to sum terms in an infinite series that grew quite large, outside the range of doubles. Convergence and accuracy weren't concerns, just the ability to handle large exponents like 1E1000. Perhaps some clever algebra could have simplified things, but it was way quicker and easier to just code the algorithm with extended precision than to spend any time thinking about it.
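
A tiny sketch of the range difference (my own illustration, assuming long double is the 80-bit format): a term around 1E1000 overflows a 64-bit double, whose maximum is about 1.8E308, but fits comfortably within the extended format's 15-bit exponent (max around 1.2E4932).

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* A double tops out near 1.8e308, so 10^1000 overflows to infinity;
           the 80-bit extended format (max ~1.2e4932) holds it easily. */
        double      d  = pow(10.0, 1000.0);
        long double ld = powl(10.0L, 1000.0L);

        printf("double:      %g\n", d);    /* prints inf */
        printf("long double: %Lg\n", ld);  /* prints 1e+1000 */
        return 0;
    }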

DarenW