I'm thinking floats. For the record I'm also using NHibernate.
It depends on how much accuracy you need. If you don't need any decimal places you could even use a tinyint, but generally floats are fine if you need a few decimal places. I use LLBLGen and floats for percentages.
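For what it's worth, a rough sketch of those two options as SQL Server column definitions (the table and column names here are just placeholders):

    -- no decimal places needed: whole-number percents 0-100 fit in a tinyint
    ALTER TABLE Products ADD DiscountPct tinyint NOT NULL DEFAULT 0;

    -- decimal places needed: a float holds the fraction, e.g. 12.75% as 0.1275
    ALTER TABLE Products ADD CommissionRate float NOT NULL DEFAULT 0;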
It depends on what you are using them for.
If it's for display, an int will do just fine and will be faster than a float. If it's for infrequent math, an int will still do fine (or even a short, in either case).
If you're doing a lot of math, then a float would probably be best, performance-wise.
Of course, unless you're doing a LOT of manipulation of the percentages, it won't really matter in the end, performance-wise, given modern processor speed.
EDIT: Of course, 'int' assumes you are only using strict, whole-number percents. If you aren't, you'd always be better off with float or decimal.
With regard to SQL Server, it's rare that I'd store a percentage. 9 times out of 10 you want to store the data that is used to derive the percentage and calculate as needed.
If, and only if, you have empirical data showing that the calculation is too slow should you go ahead and store it. Use decimal, as previously suggested, to avoid rounding issues.
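As a sketch of that approach (table and column names are hypothetical), store the raw counts and let SQL Server derive the percentage, for example with a computed column:

    CREATE TABLE ExamResult (
        ExamResultId    int IDENTITY(1,1) PRIMARY KEY,
        CorrectAnswers  int NOT NULL,
        TotalQuestions  int NOT NULL,
        -- derived on read, so it can never drift out of sync with the counts
        ScorePct AS CAST(CorrectAnswers AS decimal(9,4)) / NULLIF(TotalQuestions, 0)
    );

If profiling ever shows the derivation is too slow, the same expression could be marked PERSISTED, or stored in a decimal column, without changing the callers.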
It largely depends on how much precision you need, but the most important thing is to be consistent and clear. Take precautions to ensure you are consistent across every use of the field, i.e. don't store it as a fraction (n = .5) and then try to reconstitute it as if it were a whole-number percent in another part of your code (mypercentage = n/100). As long as you avoid that, and you are not building a laser that requires 30 significant digits of precision, just pick your favorite flavor between int, double, or whatever floats your boat. Waka-waka.
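One cheap way to keep that convention honest, assuming the value is always stored as a fraction of 1 (table, column, and constraint names here are invented for the sketch):

    CREATE TABLE Allocation (
        AllocationId int IDENTITY(1,1) PRIMARY KEY,
        -- convention: always stored as a fraction, so 0.5 means 50%
        Weight float NOT NULL
            CONSTRAINT CK_Allocation_Weight CHECK (Weight BETWEEN 0 AND 1)
    );
    -- a reader that divides by 100 again (Weight / 100) is silently off by a factor of 100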
The answer is application-dependent.
Others have pointed out that decimal is better than float for storing an exact value. But sometimes it's better to use a float for the extra precision it carries through calculations (e.g. a set of calculated weights that should add up to 100% is probably better represented as a float).
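A small illustration of that trade-off, purely as a sketch: three equal weights rounded to a fixed decimal scale no longer sum to 1, while a float keeps far more of the calculated precision (at the cost of being approximate).

    DECLARE @d decimal(5,4) = CAST(1.0 / 3 AS decimal(5,4));  -- 0.3333
    DECLARE @f float        = 1.0E0 / 3;                      -- ~0.3333333333333333

    SELECT @d * 3 AS DecimalSum,   -- 0.9999, the weights no longer add up to 1
           @f * 3 AS FloatSum;     -- effectively 1 within float precision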
Floats. Decimal loses precision due to rounding, as demonstrated fairly conclusively by Jeff here: http://www.sqlservercentral.com/Forums/Topic522397-360-1.aspx
Use floats, and output to a set number of decimal places as required.
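For example, rounding only at display time (this assumes SQL Server 2012 or later for FORMAT, and the exact string depends on the session's culture):

    DECLARE @p float = 0.34567;   -- stored at full float precision

    SELECT ROUND(@p * 100, 2) AS PercentValue,  -- ~34.57
           FORMAT(@p, 'P2')   AS PercentText;   -- something like '34.57%'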