Pretty simple question. I know it would probably be a tiny optimization, but with enough if statements the difference could eventually add up.

EDIT: Thank you to those of you who have provided answers.

To those of you who feel a need to bash me, know that curiosity and a thirst for knowledge do not translate to stupidity.

And many thanks to all of those who provided constructive criticism. I didn't know you could simply write if(var) until now. I'm pretty sure I'll be using it from now on. ;)

+11  A: 

Makes no measurable difference at all, no matter how many iterations you use in your program.

(Use if (var) instead; you don't need the visual clutter of the comparisons.)

Michael Petrotta
if (var) compiles to the same operations as if (var == true) or if (var != false). You may prefer it from a style point of view, however.
Graphain
@Graphain, I for one would almost certainly prefer `if(var)`, assuming that `var` was sensibly named to indicate immediately that it was a boolean and what it meant. Indeed, if `if(var == true)` were more readable, I'd say there's a problem with the name of the bool.
Jon Hanna
Agreed, but I just wanted others to be aware that there is no performance difference, since this is a performance question.
Graphain
+2  A: 

It makes no difference, and the compiler is free to interchange them at will.

For example, you could write

if (!predicate)
    statement1;
else
    statement2;

and the compiler is free to emit code equivalent to

if (predicate)
    statement2;
else
    statement1;

or vice-versa.

Gabe
+11  A: 

It will make absolutely zero difference, because the compiler would almost certainly compile the two statements to the same binary code.

The (pseudo) assembly will either be:

test reg1, reg2
br.true somewhere
; code for false case

somewhere:
; code for true case

or

test reg1, reg2
br.false somewhere
; code for true case

somewhere:
; code for false case

Which of those the compiler chooses will not depend on whether you write == true or != false. Rather, it's an optimisation the compiler will make based on the size of the true and false case code and perhaps some other factors.

As an aside, the Linux kernel code actually does try to optimise these branches using its likely() and unlikely() macros (which expand to GCC's __builtin_expect) in if conditions, so I guess it is possible to control this manually.

Evgeny
+1 Good point about what triggers optimization.
Brian Rasmussen
+47  A: 

First off: the only way to answer a performance question is to measure it. Try it yourself and you'll find out.
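
For example, a quick (and admittedly crude) measurement sketch might look something like the following; the class name and the iteration count are arbitrary, and the two loops differ only in how the condition is written:

using System;
using System.Diagnostics;

static class BranchTiming
{
    static void Main()
    {
        const int iterations = 100000000;
        bool flag = DateTime.Now.Millisecond % 2 == 0; // value not known at compile time
        long count = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            if (flag == true) count++;
        sw.Stop();
        Console.WriteLine("== true : {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            if (flag != false) count++;
        sw.Stop();
        Console.WriteLine("!= false: {0} ms", sw.ElapsedMilliseconds);

        Console.WriteLine(count); // use the result so the loops are not optimised away
    }
}

Run something like that in a Release build and you should find the two timings differ by less than the run-to-run noise.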

As for what the compiler does: I remind you that "if" is just a conditional goto. When you have

if (x)
   Y();
else
   Z();
Q();

the compiler generates that as either:

evaluate x
branch to LABEL1 if result was false
call Y
branch to LABEL2
LABEL1:
call Z
LABEL2:
call Q

or

evaluate !x
branch to LABEL1 if result was true

depending on whether it is easier to generate the code to elicit the "normal" or "inverted" result for whatever "x" happens to be. For example, if you have if (a <= b) it might be easier to generate it as if (!(a > b)). Or vice versa; it depends on the details of the exact code being compiled.

Regardless, I suspect you have bigger fish to fry. If you care about performance, use a profiler and find the slowest thing and then fix that. It makes no sense whatsoever to be worried about nanosecond optimizations when you probably are wasting entire milliseconds somewhere else in your program.

Eric Lippert
"Use a profiler" - so true. I can't count how many times I've "known" where the performance problem is, just to be shown wrong by a profile of a run.
codekaizen
I not only suspect, I'd say it's approaching a mathematical certainty that you have bigger fish to fry.
Epaga
+14  A: 

It will make no difference at all. Using Reflector you can see that the code:

private static void testVar(bool var)
{
    if (var == true)
    {
        Console.WriteLine("test");
    }

    if (var != false)
    {
        Console.WriteLine("test");
    }

    if (var)
    {
        Console.WriteLine("test");
    }
}

creates the IL:

.method private hidebysig static void testVar(bool var) cil managed
{
  .maxstack 8
  L_0000: ldarg.0 
  L_0001: brfalse.s L_000d
  L_0003: ldstr "test"
  L_0008: call void [mscorlib]System.Console::WriteLine(string)
  L_000d: ldarg.0 
  L_000e: brfalse.s L_001a
  L_0010: ldstr "test"
  L_0015: call void [mscorlib]System.Console::WriteLine(string)
  L_001a: ldarg.0 
  L_001b: brfalse.s L_0027
  L_001d: ldstr "test"
  L_0022: call void [mscorlib]System.Console::WriteLine(string)
  L_0027: ret 
}

So the compiler (in .NET 3.5) translates them all to the same ldarg.0, brfalse.s sequence.

jmservera
+31  A: 

Did you know that on x86 processors it's more efficient to do x ^= x, where x is a 32-bit integer, than it is to do x = 0? It's true, and of course it has the same result. Hence any time one sees x = 0 in code, one could replace it with x ^= x and gain efficiency.

Now, have you ever seen x ^= x in much code?

The reason you haven't is not just because the efficiency gain is slight, but because this is precisely the sort of change that a compiler (if compiling to native code) or jitter (if compiling IL or similar) will make. Disassemble some x86 code and it's not unusual to see the assembly equivalent of x ^= x, though the code that was compiled almost certainly had x = 0, or perhaps something much more complicated like x = 4 >> 6, or x = 32 - y where analysis of the code shows that y will always contain 32 at this point, and so on.

For this reason, even though x ^= x is known to be more efficient, the sole effect of it in the vast, vast majority of cases would be to make the code less readable (the only exception being where an algorithm in use already entails x ^= y and this happens to be a case where x and y are the same; there, x ^= x would make the use of that algorithm clearer, while x = 0 would hide it).

In 99.999999% of cases the same is going to apply to your example. In the remaining 0.000001% of cases there would be an efficiency difference, say because some strange operator overloads mean the compiler can't prove the two forms equivalent. Indeed, 0.000001% is overstating the case, and is mentioned only because I'm pretty sure that if I tried hard enough I could write something where one was less efficient than the other. Normally people aren't trying hard to do so.
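
As a contrived illustration of that rare kind of case (the type and its operators here are invented purely for the example): if the operand isn't a plain bool but some type with overloaded == and != operators, then var == true and var != false call different methods, and nothing forces those methods to cost the same.

struct OddBool
{
    public bool Value;

    // == and != must be overloaded as a pair, but they are separate methods;
    // one of them could do arbitrarily more work than the other.
    public static bool operator ==(OddBool a, bool b) { return a.Value == b; }
    public static bool operator !=(OddBool a, bool b) { return a.Value != b; }

    public override bool Equals(object obj) { return obj is OddBool && ((OddBool)obj).Value == Value; }
    public override int GetHashCode() { return Value.GetHashCode(); }
}

With such a type, if (odd == true) calls operator == while if (odd != false) calls operator !=, so in principle they could differ. With a plain bool, of course, none of this applies.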

If you ever look at your own code in Reflector, you'll probably find a few cases where it looks very different to the code you wrote. The reason is that it is reverse-engineering the IL of your code, rather than your code itself, and indeed one thing you will often find is that things like if(var == true) or if(var != false) have been turned into if(var), or even into if(!var) with the if and else blocks reversed.

Look deeper and you'll see that even further changes are made, in that there is more than one way to skin the same cat. In particular, the way switch statements get converted to IL is interesting; sometimes a switch is turned into the equivalent of a chain of if-else if statements, and sometimes into a lookup in a table of possible jumps, depending on which seemed more efficient in the case in question.
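
As a small illustration (the method below is made up for the example), a switch over a dense run of values like this will typically be compiled to a single IL switch (jump-table) instruction, whereas one over sparse values such as 1, 1000 and 54321 will typically come out as a chain of compares and branches:

static string Describe(int n)
{
    switch (n)
    {
        case 0: return "zero";
        case 1: return "one";
        case 2: return "two";
        case 3: return "three";
        default: return "something else";
    }
}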

Look deeper still and other changes are made when it gets compiled to native code.

I'm not going to agree with those who talk of "premature optimisation" just because you ask about the performance difference between two different approaches, because knowledge of such differences is a good thing; it's only using that knowledge prematurely that is premature (by definition). But a change that is going to be compiled away is neither premature nor an optimisation; it's just a null change.

Jon Hanna
"Knowledge of such differences is a good thing" : This presents a good argument for running using the IL Disassembler on your code to actually check if there is a difference. Which, incidentally, can be more accurate than merely using a profiler. Equal IL you can put 100% trust in being equal in execution time. Equal profiling you can't give you perfect trust, since your profiling data may have noise and your test machine may have different characteristics than other machines that will run the application. It also usually takes longer.
Brian
Yes. Also, knowing that this makes no difference, or that X is more efficient than Y in case A but less so in case B lets one make sense of what the profiler tells you when it's time to worry about such things.
Jon Hanna
+1  A: 

Knowing which of these two specific cases is faster is a level of detail that is seldom (if ever) required in a high-level language. Perhaps you might need to know it if your compiler is piss-poor at optimization. However, if your compiler is that bad, you would probably be better off getting a better one if possible.

If you are programming in assembly, it is more likely that knowledge of the two cases would be useful. Others have already given the assembly breakdown with respect to branch statements, so I will not duplicate that part of the response. However, one item that has been omitted, in my opinion, is the comparison itself.

It is conceivable that a processor may set the status flags when loading 'var'. If so, then when 'var' is 0 the zero flag may be set as the variable is loaded into a register. With such a processor, no explicit comparison against FALSE would be required. The equivalent assembly pseudo-code would be ...

load 'var' into register
branch if zero or if not zero as appropriate

Using this same processor, if you were to test it against TRUE, the assembly pseudo-code would be ...

load 'var' into register
compare that register to TRUE (a specific value)
branch if equal or if not equal as appropriate

In practice, do any processors behave like this? I don't know--others will be more knowledgeable than I. I do know of some that don't behave in this fashion, but I do not know about all.

Assuming that some processors do behave as in the scenario described above, what can we learn? IF (and that is a big IF) you are going to worry about this, avoid testing booleans against explicit values ...

if (var == TRUE)
if (var != FALSE)

and use one of the following for testing boolean types ...

if (var)
if (!var)
Sparky
+7  A: 

A rule of thumb that usually works is "If you know they do the same thing, then the compiler knows too".

If the compiler knows that the two forms yield the same result, then it will pick the fastest one.

Hence, assume that they are equally fast, until your profiler tells you otherwise.

jalf
+4  A: 

The other answers are all good, I just wanted to add:

This is not a meaningful question, because it assumes a 1:1 relation between the notation and the resulting IL or native code.

There isn't. And that's true even in C++, and even in C. You have to go all the way down to native code to have such a question make sense.

Edited to add:

The developers of the first Fortran compiler (ca. 1957) were surprised one day when reviewing its output. It was emitting code that was not obviously correct (though it was); in essence, it was making optimization decisions that were not obviously correct (though they were).

The moral of this story: compilers have been smarter than people for over 50 years. Don't try to outsmart them unless you're prepared to examine their output and/or do extensive performance testing.

egrunin
+4  A: 

Always optimize for ease of understanding. This is a cardinal rule of programming, as far as I am concerned. You should not micro-optimize, or even optimize at all, until you know that you need to do so and where you need to do so. It's very rare that squeezing out every ounce of performance is more important than maintainability, and even rarer that you're so awesome that you know where to optimize as you initially write the code.

Furthermore, things like this get automatically optimized out in any decent language.

tl;dr don't bother

Sorpigal
I am with you on the micro-optimization, but it should be noted that some of the most dramatic performance impacts come from the overall design or algorithms used in the application. These can be very expensive to fix if you get too far down the road.
Brian Gideon