Why do comparisons with NaN values behave differently from all other values? That is, all comparisons with the operators ==, <=, >=, <, > in which one or both operands is NaN return false, contrary to the behaviour of all other values.
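To make the behaviour concrete, this small C snippet (using the NAN macro from math.h, C99) prints 0 for every ordered comparison and 1 only for !=:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = NAN;                                               /* quiet NaN, C99 math.h */
    printf("%d %d %d %d\n", x == x, x < 1.0, x > 1.0, x <= x);    /* 0 0 0 0 */
    printf("%d\n", x != x);                                       /* 1 */
    return 0;
}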
I suppose this simplifies numerical computations in some way, but I couldn't find an explicitly stated reason, not even in Kahan's Lecture Notes on the Status of IEEE 754, which discuss other design decisions in detail.
This deviant behaviour causes trouble in simple data processing. For example, when sorting a list of records with respect to some real-valued field in a C program, I need to write extra code to treat NaN as the maximal element; otherwise the sort algorithm could become confused.
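For instance, a comparator for qsort cannot rely on < and > alone; a minimal sketch of the extra code I mean, treating NaN as the maximal element, looks something like this:

#include <math.h>
#include <stdlib.h>

/* qsort comparator that orders NaN as the maximal element, so NaNs collect
   at the end of the array instead of confusing the sort. */
static int cmp_double(const void *pa, const void *pb)
{
    double a = *(const double *)pa;
    double b = *(const double *)pb;
    int a_nan = isnan(a), b_nan = isnan(b);

    if (a_nan || b_nan)
        return a_nan - b_nan;            /* NaN > number, NaN "equal" to NaN */
    if (a < b) return -1;
    if (a > b) return  1;
    return 0;
}

/* usage: qsort(values, n, sizeof values[0], cmp_double); */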
Edit: The answers so far all argue that it is meaningless to compare NaNs.
I agree, but that doesn't mean that the correct answer is false; rather, it would be a Not-a-Boolean (NaB), which fortunately doesn't exist.
So the choice of returning true or false for comparisons is in my view arbitrary, and for general data processing it would be advantageous if it obeyed the usual laws (reflexivity of ==, trichotomy of <, ==, >), lest data structures which rely on these laws become confused.
So I'm asking for some concrete advantage of breaking these laws, not just philosophical reasoning.
Edit 2: I think I understand now why making NaN maximal would be a bad idea: it would mess up the computation of upper limits.
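For example, with the current rules a NaN simply fails the test in a running-maximum loop and is skipped; if NaN compared as greater than everything, a single NaN would take over the result. A sketch (the function name is mine, just for illustration):

#include <math.h>
#include <stddef.h>

/* Running maximum: values[i] > max is false when values[i] is NaN, so NaN
   entries are silently skipped.  Under a "NaN is maximal" rule, the first
   NaN would become max and stay there. */
double max_of(const double *values, size_t n)
{
    double max = -INFINITY;
    for (size_t i = 0; i < n; i++)
        if (values[i] > max)
            max = values[i];
    return max;
}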
NaN != NaN might be desirable to avoid falsely detecting convergence in a loop such as
while (x != oldX) {
    oldX = x;
    x = better_approximation(x);
}
which, however, would be better written by comparing the absolute difference against a small tolerance, for example as sketched below. So IMHO this is a relatively weak argument for breaking reflexivity at NaN.
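Something along these lines, where eps is whatever tolerance the application needs (the function wrapper and names are only for illustration); note that fabs(x - oldX) > eps is also false when x is NaN, so a NaN result ends the loop and has to be caught with isnan afterwards:

#include <math.h>

double better_approximation(double x);   /* as in the loop above */

/* Tolerance-based variant: iterate until successive values agree to within eps.
   The loop condition is false for NaN, so a NaN result also ends the loop and
   should then be detected with isnan(x). */
double iterate(double x, double eps)
{
    double oldX;
    do {
        oldX = x;
        x = better_approximation(x);
    } while (fabs(x - oldX) > eps);
    return x;
}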