Does anyone know how to find out the precision of `long double` on a specific platform? I appear to be losing precision after 17 decimal digits, which is the same as when I just use `double`. I would expect more precision, since `double` is represented with 8 bytes on my platform, while `long double` is 12 bytes.
Before you ask: this is for Project Euler, so yes, I do need more than 17 digits. :)
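
In case it helps, here's a minimal sketch of the kind of query I mean, using `std::numeric_limits` and `sizeof` (the `max_digits10` member assumes a C++11 compiler):

```cpp
#include <iostream>
#include <limits>

int main() {
    // digits10: decimal digits guaranteed to be preserved by the type;
    // max_digits10: decimal digits needed to round-trip a value exactly (C++11)
    std::cout << "double:      " << sizeof(double) << " bytes, "
              << std::numeric_limits<double>::digits10 << " / "
              << std::numeric_limits<double>::max_digits10 << " digits\n"
              << "long double: " << sizeof(long double) << " bytes, "
              << std::numeric_limits<long double>::digits10 << " / "
              << std::numeric_limits<long double>::max_digits10 << " digits\n";
}
```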
EDIT: Thanks for the quick replies. I just confirmed that I can only get 18 decimal digits by using `long double` on my system.
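
For completeness, a sketch of the kind of check I mean (assumed setup, not my verbatim code): do the arithmetic in `long double`, remembering the `L` suffix so the expression isn't silently evaluated in `double` first, and print at full precision.

```cpp
#include <iomanip>
#include <iostream>
#include <limits>

int main() {
    // The L suffix matters: 1.0 / 3.0 would be evaluated in double precision
    // and only then widened, throwing away the extra long double digits.
    long double third = 1.0L / 3.0L;
    std::cout << std::setprecision(std::numeric_limits<long double>::max_digits10)
              << third << '\n';  // shows more correct digits than double's 17
}
```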