Because machines cannot represent most floating-point values exactly, we use a technique from *Write Great Code: Understanding the Machine* to perform floating-point comparisons:
Currently, we hard-code the 'error' value, but the error differs from machine to machine. Is there a good way to determine the error for a particular machine, instead of hard-coding a tolerance?