The mantissa portion (the part after the point) of a floating point number is stored as a sum of fractions, one term per bit, taken from the series:
1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, ... etc
The stored bits act as yes/no flags for each term. For example, 001010 would be 0 * 1/2 + 0 * 1/4 + 1 * 1/8 + 0 * 1/16 + 1 * 1/32 + 0 * 1/64 = 0.15625.
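As a rough sketch (not how a real FPU works internally), here is a tiny C program that reconstructs a value from such a bit string by summing 1/2, 1/4, 1/8, ... for every 1 bit; the function name is just made up for illustration:

```c
#include <stdio.h>
#include <string.h>

/* Sum 1/2 + 1/4 + 1/8 + ... for every '1' in the bit string. */
double fraction_from_bits(const char *bits)
{
    double value = 0.0;
    double weight = 0.5;            /* 1/2, then 1/4, 1/8, ... */
    for (size_t i = 0; i < strlen(bits); i++) {
        if (bits[i] == '1')
            value += weight;
        weight /= 2.0;
    }
    return value;
}

int main(void)
{
    printf("%f\n", fraction_from_bits("001010"));  /* 1/8 + 1/32 = 0.156250 */
    return 0;
}
```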
This is a rough illustration of why floating point values cannot always be exact: many values (0.1, for instance) are not a finite sum of these fractions. Moving to a wider type (float -> double -> long double) gives you more bits and therefore more precision, but only up to a limit, never exactness for such values.
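You can see the limit by printing the same literal at different widths; assuming a typical IEEE 754 implementation, each type stores its own nearest approximation of 0.1:

```c
#include <stdio.h>

int main(void)
{
    float       f = 0.1f;   /* ~24-bit significand */
    double      d = 0.1;    /* ~53-bit significand */
    long double l = 0.1L;   /* width is platform dependent */

    /* None of these is exactly 0.1; each is the nearest sum of
       1/2, 1/4, 1/8, ... that fits in the type's significand. */
    printf("float:       %.25f\n",  f);
    printf("double:      %.25f\n",  d);
    printf("long double: %.25Lf\n", l);
    return 0;
}
```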
The underlying binary data is actually split into three fields defined by the IEEE 754 standard: a sign bit, an exponent, and the mantissa described above, so the stored value is roughly sign * mantissa * 2^exponent. The standard has been widely adopted because of the speed at which hardware can perform calculations on it (and probably other factors on top).
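Assuming the single-precision IEEE 754 layout (1 sign bit, 8 exponent bits, 23 mantissa bits), this sketch pulls the three fields out of a float's bit pattern:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 0.15625f;              /* 1/8 + 1/32, exactly representable */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the raw bit pattern */

    uint32_t sign     = bits >> 31;          /* 1 bit                   */
    uint32_t exponent = (bits >> 23) & 0xFF; /* 8 bits, biased by 127   */
    uint32_t mantissa = bits & 0x7FFFFF;     /* 23 bits, implicit lead 1 */

    printf("sign=%u exponent=%u (unbiased %d) mantissa=0x%06X\n",
           sign, exponent, (int)exponent - 127, mantissa);
    return 0;
}
```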
Check this link for more information:
http://docs.sun.com/source/806-3568/ncg_goldberg.html