Let's say, for the float type in C: according to the IEEE floating-point specification, there are 8 bits used for the exponent field, and the exponent is calculated by first taking these 8 bits and interpreting them as an unsigned number, and then subtracting the bias, which is 2^7 - 1 = 127, so the result is an exponent ranging from -127 to 128, inclusive. But why can't we just treat this 8-bit pattern as a signed number, since the resulting range is [-128, 127], which is almost the same as the previous one?
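
For concreteness, here is roughly the decoding I mean, sketched in C (just an illustration; it assumes a 32-bit IEEE-754 float and copies the bits out with memcpy):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        float f = 6.5f;                  /* 6.5 = 1.625 * 2^2 */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);  /* grab the raw bit pattern */

        uint32_t exp_field = (bits >> 23) & 0xFF;  /* the 8 exponent bits, read as unsigned */
        int exponent = (int)exp_field - 127;       /* subtract the bias of 127 */

        printf("stored field = %u, exponent = %d\n", (unsigned)exp_field, exponent);  /* 129, 2 */
        return 0;
    }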

+8  A: 

The purpose of the bias is so that the exponent is stored in unsigned form, making it easier to do comparisons. From Wikipedia:

By arranging the fields so the sign bit is in the most significant bit position, the biased exponent in the middle, then the mantissa in the least significant bits, the resulting value will be ordered properly, whether it's interpreted as a floating point or integer value. This allows high speed comparisons of floating point numbers using fixed point hardware.

So basically, a floating point number is:

[sign] [unsigned exponent (aka exponent + bias)] [mantissa]
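
To see the consequence of that layout (a minimal sketch, assuming IEEE-754 binary32 floats): for non-negative floats, a bigger value also has a bigger bit pattern when read back as an unsigned integer.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t bits_of(float f) {
        uint32_t u;
        memcpy(&u, &f, sizeof u);  /* reinterpret the float's bit pattern */
        return u;
    }

    int main(void) {
        /* Increasing non-negative floats -> increasing bit patterns, because the
           biased exponent sits above the mantissa and is stored unsigned. */
        float xs[] = { 0.25f, 0.5f, 1.0f, 1.5f, 2.0f, 1000.0f };
        for (size_t i = 0; i + 1 < sizeof xs / sizeof xs[0]; i++)
            printf("%g vs %g: 0x%08X < 0x%08X? %s\n",
                   xs[i], xs[i + 1],
                   (unsigned)bits_of(xs[i]), (unsigned)bits_of(xs[i + 1]),
                   bits_of(xs[i]) < bits_of(xs[i + 1]) ? "yes" : "no");
        return 0;
    }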

This website provides excellent information about why this is good - specifically, compare the implementations of floating point comparison functions.
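
In that spirit, here is one common transformation (a sketch, not taken from that site, and it ignores NaNs) that lets two floats be ordered with a single unsigned integer compare: negative numbers are stored sign-magnitude, so their bits are flipped; non-negative numbers just get the sign bit set so they sort above all negatives.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Map a float to an unsigned key whose ordering matches the float ordering. */
    static uint32_t float_key(float f) {
        uint32_t u;
        memcpy(&u, &f, sizeof u);
        return (u & 0x80000000u) ? ~u                 /* negative: flip everything   */
                                 : (u | 0x80000000u); /* non-negative: set sign bit  */
    }

    static int float_less(float a, float b) {
        return float_key(a) < float_key(b);  /* one unsigned (fixed-point) compare */
    }

    int main(void) {
        printf("%d %d %d\n",
               float_less(-2.0f, -1.0f),   /* 1 */
               float_less(-1.0f,  0.5f),   /* 1 */
               float_less( 2.0f,  0.5f));  /* 0 */
        return 0;
    }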

Also, no complete answer about floating point oddities can go without mentioning "What Every Computer Scientist Should Know About Floating-Point Arithmetic." It's long, dense and a bit heavy on the math, but it's long dense mathematical gold (or something like that).

Daniel G
Sounds like a poor electronic engineer's excuse to me...
Pavel Radzivilovsky
@Pavel, there are still processors with no FPU, e.g. ARMs. Making at least this operation easy to do on integers makes emulating an FPU faster.
liori
@liori recent ARM processors do have FPUs [1], but, @Pavel, when the IEEE-754 floating point specification was first agreed to in 1985, most processors *didn't*. For example, the 8086 - the original x86! - only had an FPU via an optional extension processor, the 8087 "x87". The FPU wasn't included until the 80486, which was introduced in 1989. [2]
[1] http://www.arm.com/products/processors/technologies/vector-floating-point.php
[2] http://en.wikipedia.org/wiki/Floating-point_unit#Add-on_FPUs
Daniel G
@Daniel G: You could even get a 486 without an onboard FPU. The 486 SX was a marketing ploy in which they disabled the FPU portion of the chip and sold it for cheaper than the full-featured 486 (which they redubbed 486 DX). And the first Pentium had an onboard FPU, but you didn't want it. ;)
P Daddy
@liori, I think it would be easier with two's complement.
Pavel Radzivilovsky
@Pavel Radzivilovsky: No, comparing two FP numbers would be *harder* if the exponent were in two's complement. If the exponent were in two's complement, then the integer representations of FP numbers between zero and one (exclusive) would compare greater than the integer representations of numbers greater than or equal to one.
P Daddy
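
A small sketch to make P Daddy's point concrete (the encode helper and the two's-complement exponent format here are purely hypothetical, for illustration only): with a two's-complement exponent, 0.5 would get exponent field 0xFF while 2.0 would get 0x01, so the raw bit pattern of 0.5 would compare greater.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical binary32-like encoding with a two's-complement exponent
       field instead of the IEEE biased one (a thought experiment, not a real format). */
    static uint32_t encode_twos_complement(int sign, int exponent, uint32_t mantissa23) {
        uint32_t exp_field = (uint32_t)exponent & 0xFFu;   /* 8-bit two's complement */
        return ((uint32_t)sign << 31) | (exp_field << 23) | (mantissa23 & 0x7FFFFFu);
    }

    int main(void) {
        uint32_t half = encode_twos_complement(0, -1, 0);  /* 0.5 = 1.0 * 2^-1 */
        uint32_t two  = encode_twos_complement(0,  1, 0);  /* 2.0 = 1.0 * 2^1  */
        /* With the biased encoding, bits(0.5) < bits(2.0); here the order inverts. */
        printf("bits(0.5) = 0x%08X, bits(2.0) = 0x%08X, 0.5 > 2.0 as integers? %d\n",
               (unsigned)half, (unsigned)two, half > two);
        return 0;
    }
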
@Daniel G: "recent" is relative. My two-year-old smartphone has FPU-less ARM processor...
liori