Let's say, for the float type in C: according to the IEEE 754 floating-point specification, 8 bits are used for the exponent field. Its value is computed by first interpreting these 8 bits as an unsigned number and then subtracting the bias, which is 2^7 - 1 = 127, giving an exponent that ranges from -127 to 128, inclusive. But why can't we just treat this 8-bit pattern as a signed number? The resulting range, [-128, 127], is almost the same as the previous one.
The purpose of the bias is to store the exponent in unsigned form, which makes comparisons easier: if the exponent were stored in two's complement instead, a negative exponent would have its high bit set, so a small value like 0.5 would compare greater than 2.0 when the bit patterns are treated as integers. From Wikipedia:
By arranging the fields so the sign bit is in the most significant bit position, the biased exponent in the middle, then the mantissa in the least significant bits, the resulting value will be ordered properly, whether it's interpreted as a floating point or integer value. This allows high speed comparisons of floating point numbers using fixed point hardware.
So basically, a floating point number is:
[sign] [unsigned exponent (aka exponent + bias)] [mantissa]
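To make that concrete, here is a minimal sketch that pulls the three fields out of a float and recovers the exponent by subtracting the bias. It assumes float is an IEEE 754 single-precision value, which is true on essentially every modern platform:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = 6.5f;                  /* 1.101 * 2^2 in binary */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the bit pattern safely */

    uint32_t sign     = bits >> 31;           /* 1 bit                   */
    uint32_t biased   = (bits >> 23) & 0xFF;  /* 8 bits, stored unsigned */
    uint32_t mantissa = bits & 0x7FFFFF;      /* 23 bits                 */
    int exponent = (int)biased - 127;         /* subtract the bias (127) */

    printf("sign=%u biased=%u exponent=%d mantissa=0x%06X\n",
           (unsigned)sign, (unsigned)biased, exponent, (unsigned)mantissa);
    return 0;
}
```

For 6.5 this prints a biased exponent of 129 and a recovered exponent of 2.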
This website provides excellent information about why this layout is useful; in particular, compare the implementations of the floating-point comparison functions.
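The payoff is that, once negative numbers are handled, floats can be compared entirely with integer operations. Here is a rough sketch of the idea (ignoring NaNs; ordered_bits and float_cmp are just illustrative names):

```c
#include <stdint.h>
#include <string.h>

/* Map a float's bits to a signed integer whose ordering matches the
   float ordering (NaNs ignored). Positive floats already order correctly
   thanks to the unsigned biased exponent; negative floats are stored
   sign-magnitude, so their ordering must be flipped. */
static int32_t ordered_bits(float f) {
    int32_t i;
    memcpy(&i, &f, sizeof i);
    return (i >= 0) ? i : INT32_MIN - i;  /* flip negatives into ascending order */
}

/* qsort-style comparison done entirely with integer operations */
int float_cmp(float a, float b) {
    int32_t ia = ordered_bits(a);
    int32_t ib = ordered_bits(b);
    return (ia > ib) - (ia < ib);
}
```

A nice side effect: +0.0 and -0.0 both map to 0, so they compare equal, as IEEE 754 requires.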
Also, no complete answer about floating-point oddities can go without mentioning "What Every Computer Scientist Should Know About Floating-Point Arithmetic". It's long, dense, and a bit heavy on the math, but it's long, dense, mathematical gold (or something like that).