Your answer is on that same page you linked:
For FLOAT data type, the n is the number of bits used to store the mantissa in scientific notation
24 bits of mantissa give you (approximately) 7 decimal digits of precision, because 24 × log10(2) ≈ 7.2.
edit to add:
Notice that everyone keeps saying 'approximately' - this is for a reason :)
Binary floating point numbers and decimal literals do not necessarily play together in an intuitive manner; for background, read What Every Computer Scientist Should Know About Floating-Point Arithmetic. Also note that saying 'approximately 7 decimal digits of precision' is not incompatible with being able to store a value with more than 7 significant figures. It does mean, however, that this datatype will be unable to distinguish between 0.180000082 and 0.180000083, for example, because it isn't actually storing the exact value of either:
declare @f1 real
declare @f2 real
set @f1 = 0.180000082
set @f2 = 0.180000083
select @f1, @f2
select @f1 - @f2
------------- -------------
0.1800001 0.1800001
(1 row(s) affected)
-------------
0
(1 row(s) affected)
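You can reproduce this behavior outside SQL Server. As a sketch in Python (my addition, not part of the original answer), rounding both literals to IEEE 754 single precision — the same 24-bit-mantissa format that real / float(24) uses — collapses them to the same value:

```python
import struct

# Round each literal to IEEE 754 single precision, the format behind
# SQL Server's real / float(24): 24 bits of mantissa.
f1 = struct.unpack('<f', struct.pack('<f', 0.180000082))[0]
f2 = struct.unpack('<f', struct.pack('<f', 0.180000083))[0]

print(f1 == f2)   # True: both literals round to the same single-precision value
print(f1 - f2)    # 0.0, matching the SELECT @f1 - @f2 result above
```

The gap between representable single-precision values near 0.18 is about 1.5e-8, while the two literals differ by only 1e-9, so both land on the same representable value.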
The fact is that real is the same as float(24) — a binary floating point number with 24 bits of mantissa — and I don't believe there's a way to change this. Floating-point types are in general not a good choice if you want to store exact decimal quantities.
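If exactness matters, a fixed-point decimal type (decimal/numeric in SQL Server) stores the digits you wrote. As an analogy I'm adding (Python's standard decimal module, not SQL Server itself), the same two values stay distinct under decimal arithmetic:

```python
from decimal import Decimal

# Construct from strings so no binary rounding ever happens,
# analogous to storing the values in, say, a decimal(18, 9) column.
d1 = Decimal('0.180000082')
d2 = Decimal('0.180000083')

print(d1 == d2)   # False: the values remain distinguishable
print(d2 - d1)    # 1E-9
```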