I find it hard (but not impossible) to believe that any other check on args[i] would be faster than double.IsNaN().
One possibility is if IsNaN is an actual function call rather than being inlined. There is overhead in calling a function, sometimes substantial, especially when the function body itself is small relative to the cost of the call.
You could take advantage of the fact that the bit patterns for IEEE 754 NaNs are well known and just do some bit checks yourself (without calling a function to do it) - this would remove that overhead. In C, I'd try that with a macro. Where the exponent bits are all 1 and the mantissa bits are not all 0, that's a NaN (quiet or signalling is decided by the most significant mantissa bit, but you're probably not concerned with that). In addition, NaNs are never equal to anything, including themselves, so you could test args[i] for equality with itself - false means it's a NaN.
Another possibility may be workable if the array is used more often than it's changed. Maintain another array of booleans which indicate whether or not the associated double is a NaN. Then, whenever one of the doubles changes, compute the associated boolean.
Then your function becomes:
public void DoSomething(double[] args, bool[] nan) {
    for (int i = 0; i < args.Length; i++) {
        if (nan[i]) {
            // Do something
        }
    }
}
This is the same sort of "trick" used in databases where you pre-compute values only when the data changes rather than every time you read it out. If you're in a situation where the data is being used a lot more than being changed, it's a good optimisation to look into (most algorithms can trade off space for time).
But remember the optimisation mantra: Measure, don't guess!