Did you know that on x86 processors it's more efficient to do `x ^= x`, where `x` is a 32-bit integer, than it is to do `x = 0`? It's true, and of course it has the same result. Hence any time you see `x = 0` in code, you could replace it with `x ^= x` and gain efficiency.
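For context, here's a minimal sketch in C#; the assembly in the comments is what an x86 JIT typically emits for the zeroing (the exact output depends on the compiler and settings):

```
class ZeroDemo
{
    static int Zero()
    {
        int x = 0;  // typically JIT-compiled to: xor eax, eax   (2 bytes: 31 C0)
                    // rather than:               mov eax, 0     (5 bytes: B8 00 00 00 00)
        return x;
    }
}
```

The `xor` form is smaller, and modern x86 processors also recognise it as a zeroing idiom, so it can be handled especially cheaply.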
Now, have you ever seen `x ^= x` in much code?
The reason you haven't is not just because the efficiency gain is slight, but because this is precisely the sort of change that a compiler (if compiling to native code) or jitter (if compiling IL or similar) will make. Disassemble some x86 code and it's not unusual to see the assembly equivalent of `x ^= x`, though the code that was compiled to do this almost certainly had `x = 0`, or perhaps something much more complicated like `x = 4 >> 6`, or `x = 32 - y` where analysis of the code shows that `y` will always contain `32` at this point, and so on.
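A sketch of the kind of folding involved (the variable names are mine, and the comments describe what an optimising compiler is allowed to do, not what any particular one guarantees):

```
static int Folded()
{
    int y = 32;      // analysis can prove y always holds 32 at the line below
    int a = 4 >> 6;  // a constant expression: folded to 0 at compile time
    int x = 32 - y;  // with y known to be 32, this reduces to x = 0 as well
    return a + x;    // so the whole method can collapse to a single zeroing
}
```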
For this reason, even though `x ^= x` is known to be more efficient, in the vast, vast majority of cases its sole effect would be to make the code less readable. (The only exception would be where `x ^= y` was part of an algorithm being used and we happened to hit a case where `x` and `y` were the same; here `x ^= x` would make the use of that algorithm clearer, while `x = 0` would hide it.)
In 99.999999% of cases the same is going to apply to your example. In the remaining 0.000001% of cases it should apply too, but some strange sort of operator override creates an efficiency difference that the compiler can't resolve away. Indeed, 0.000001% is overstating the case, and is just mentioned because I'm pretty sure that if I tried hard enough I could write something where one was less efficient than the other. Normally people aren't trying hard to do so.
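To make that edge case concrete, here's a contrived sketch (the `Tally` type is entirely my own invention): in C#, `x ^= x` compiles to `x = x ^ x`, so an overloaded `^` operator can give the two forms genuinely different behaviour, and the compiler then has no right to swap one for the other.

```
struct Tally
{
    public int Value;
    public int Combines;  // counts how many times ^ has been applied

    public static Tally operator ^(Tally a, Tally b) =>
        new Tally { Value = a.Value ^ b.Value, Combines = a.Combines + b.Combines + 1 };
}

class Demo
{
    static void Main()
    {
        var x = new Tally { Value = 42 };
        x ^= x;           // Value becomes 0, but Combines becomes 1...
        x = new Tally();  // ...whereas here Value and Combines are both 0
    }
}
```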
If you ever look at your own code in Reflector, you'll probably find a few cases where it looks very different to the code you wrote. The reason for this is that it is reverse-engineering the IL of your code, rather than your code itself, and indeed one thing you will often find is things like `if(var == true)` or `if(var != false)` being turned into `if(var)`, or even into `if(!var)` with the `if` and `else` blocks reversed.
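For example (a sketch; exactly what you get back depends on the compiler and decompiler versions):

```
// What you might write:
static string Describe(bool flag)
{
    if (flag == true)
        return "on";
    else
        return "off";
}

// What a decompiler reading the IL might show instead: the comparison
// against true is gone, and the branches may come back inverted.
static string DescribeDecompiled(bool flag)
{
    if (!flag)
        return "off";
    return "on";
}
```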
Look deeper and you'll see that even further changes are made, since there is more than one way to skin the same cat. In particular, the way `switch` statements get converted to IL is interesting; sometimes a switch gets turned into the equivalent of a bunch of `if`-`else if` statements, and sometimes it gets turned into a lookup into a table of jumps that could be made, depending on which seemed more efficient in the case in question.
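As a sketch of when each shape tends to appear (the exact heuristics are an internal detail of the compiler):

```
// Dense, consecutive case values: the compiler typically emits the IL
// `switch` opcode, i.e. a jump table indexed by the value.
static string Dense(int n)
{
    switch (n)
    {
        case 0: return "zero";
        case 1: return "one";
        case 2: return "two";
        case 3: return "three";
        default: return "other";
    }
}

// Sparse case values: a jump table would be almost empty, so the
// compiler tends to emit a chain of compares and branches instead,
// much like if-else if.
static string Sparse(int n)
{
    switch (n)
    {
        case 1: return "one";
        case 1000: return "a thousand";
        case 1000000: return "a million";
        default: return "other";
    }
}
```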
Look deeper still and other changes are made when the IL gets compiled to native code.
I'm not going to agree with those who talk of "premature optimisation" just because you ask about the performance difference between two different approaches, because knowledge of such differences is a good thing; it's only using that knowledge prematurely that is premature (by definition). But a change that is going to be compiled away is neither premature nor an optimisation, it's just a null change.