Should I use decimal or float to store a ratio in a database? Particularly in SQL2005.
That depends on how much accuracy you need. If you can tolerate the small representation errors inherent in the IEEE method of storing floating-point numbers, use a float; if you need an exact representation, use a decimal. The same goes for any non-integer numbers you will feed into calculations involving the ratio.
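To see the kind of IEEE error being tolerated, here is a short Python sketch (Python's `decimal` module plays the role of SQL's DECIMAL type here; the specific values are just illustrative):

```python
from decimal import Decimal

# IEEE 754 binary floats cannot represent 0.1 exactly, so a tiny
# error creeps into even trivial arithmetic:
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# Decimal stores the value exactly as written, like SQL's DECIMAL:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

If your application would be bitten by that last comparison failing, that's the signal to reach for decimal.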
Depends how exact you want it to be. If you only need 1 or 2 digits of precision and you know the maximum ratio will stay under maxint/1000, I'd consider storing the ratio multiplied by 100 in an int. If you need exact numbers, you might even want to store the numerator and denominator as separate ints. Otherwise, stick with floats or doubles.
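Both suggestions can be sketched in a few lines of Python (the column values and the 1/3 ratio are hypothetical; `fractions.Fraction` stands in for the two-int-columns approach):

```python
from fractions import Fraction

# Scaled-int approach: store the ratio times 100 in an INT column.
ratio = 0.37                  # 37%
stored = round(ratio * 100)   # store 37
recovered = stored / 100      # read back as 0.37

# Exact approach: keep numerator and denominator as separate ints.
# 1/3 has no finite float or decimal representation, but as a pair
# of ints it stays exact through arithmetic:
exact = Fraction(1, 3)
print(exact + exact + exact == 1)  # True, no rounding error
```

The numerator/denominator scheme costs you an extra column and some arithmetic in queries, but it is the only option here that represents ratios like 1/3 with no loss at all.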
I never use floats in a database. Maybe it's an old habit that technology has since addressed; I'm not 100% sure.
Use ints, scaled ints, or decimals. There are times when a rounding error seems insignificant, but it can fail equality matches on certain values or introduce cumulative errors.
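The cumulative-error point is easy to demonstrate in Python (a sketch, with `Decimal` again standing in for a DECIMAL column):

```python
from decimal import Decimal

# Add 0.01 a hundred times. With floats the result drifts,
# so a match against 1.0 fails:
f = 0.0
for _ in range(100):
    f += 0.01
print(f == 1.0)   # False -- cumulative rounding error

# The same accumulation with decimals lands exactly on 1.00:
d = Decimal("0.00")
for _ in range(100):
    d += Decimal("0.01")
print(d == Decimal("1.00"))  # True
```

Each individual float addition is off by a near-invisible amount; it's the accumulation across many operations (or many rows) that eventually breaks matches.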
It depends what your ratio is, really.
For interest rates, for instance, in banking, we always use decimal, with a precision and scale determined by the business. If an interest rate is to be calculated and the result is beyond the precision of the column, then there is always a business rule which defines how the result is to be rounded or truncated to fit into the column.
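Such a rounding rule is straightforward to express; here is a hedged Python sketch (the DECIMAL(9, 4) column and the round-half-up rule are hypothetical business choices, not anything mandated by SQL2005):

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical rule: rates live in a DECIMAL(9, 4) column, and any
# calculated result beyond 4 decimal places rounds half up to fit.
rate = Decimal("0.031415926")
stored_rate = rate.quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)
print(stored_rate)  # 0.0314
```

The important part is that the rounding mode is an explicit, documented decision rather than whatever the floating-point hardware happens to do.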
If your ratio is, for example, the ratio between an object's diameter and its circumference, I would probably use a float.
Depends on your application. If you're using floats to calculate everything to do with this ratio already, it would probably make sense to store it as a float.
But otherwise, floats are in general a bit of a database smell. Database admins don't like them because the mounting inaccuracies make floating-point values inherently inconsistent, and that falls foul of the 'C'onsistency in our beloved ACID.
Decimals and scaled ints at least behave predictably.