b0lt has already explained how bias works. At a guess, perhaps you'd like to know why they use a bias representation here, even though virtually all modern computers use two's complement essentially everywhere else (and even machines that don't use two's complement use one's complement or sign-magnitude, not bias).
One of the goals of the IEEE floating point standards was that you could treat the bits of a floating point number as a (signed) integer of the same size, and if you compared them that way, the values would sort into the same order as the floating point numbers they represent (at least for numbers of the same sign -- two negative values compare in reverse relative to each other, which takes a small fixup).
If you used a two's complement representation for the exponent, a small positive number (i.e., one with a negative exponent) would look like a very large integer, because the exponent's sign bit would be set -- and that bit sits second from the top of the word, right after the float's own sign bit. By using a bias representation instead, you don't run into that -- a smaller exponent in the floating point number always looks like a smaller integer.
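To make that concrete, here's a minimal sketch (assuming IEEE 754 single-precision `float`, which is what virtually every modern platform gives you; the `float_bits` helper is just illustrative, not part of any standard library):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative helper: reinterpret a float's bits as a 32-bit integer.
   memcpy avoids the undefined behavior of a pointer cast. */
static uint32_t float_bits(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return bits;
}

int main(void) {
    /* 0.25 has exponent -2, stored with bias 127 as 125, so its bit
       pattern is still a small integer -- no sign-bit surprises. */
    printf("bits(0.25) = 0x%08" PRIX32 "\n", float_bits(0.25f)); /* 0x3E800000 */
    printf("bits(1.0)  = 0x%08" PRIX32 "\n", float_bits(1.0f));  /* 0x3F800000 */
    printf("bits(2.0)  = 0x%08" PRIX32 "\n", float_bits(2.0f));  /* 0x40000000 */

    /* For non-negative floats, comparing the bit patterns as integers
       agrees with comparing the floats themselves. */
    printf("%d %d\n",
           float_bits(0.25f) < float_bits(1.0f),   /* 1 */
           float_bits(1.0f)  < float_bits(2.0f));  /* 1 */
    return 0;
}
```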
FWIW, this is also why floating point numbers are typically arranged with the sign first, then the exponent, and finally the significand in the least significant bits -- this way, if you view those bits as an integer, the exponent is treated as more significant than the significand, so you don't get (for example) 0.9 sorting larger than 1.0.
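And a quick check of that 0.9-vs-1.0 example, under the same assumptions and with the same illustrative helper as above:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t float_bits(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return bits;
}

int main(void) {
    /* 0.9's significand bits (0x666666) are larger than 1.0's (0x000000),
       but its exponent field (126) is smaller than 1.0's (127). Because
       the exponent sits above the significand in the word, the exponent
       dominates the integer comparison, and 0.9's bit pattern correctly
       comes out as the smaller integer. */
    printf("bits(0.9) = 0x%08" PRIX32 "\n", float_bits(0.9f)); /* 0x3F666666 */
    printf("bits(1.0) = 0x%08" PRIX32 "\n", float_bits(1.0f)); /* 0x3F800000 */
    return 0;
}
```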