views: 117
answers: 4
Imagine I had a variable called X. Let's say every 5 seconds I wanted to make X = true. (it could be either true or false in between these 5 seconds, but gets reset to true when the 5 seconds are up).

Would it be more efficient to check if the value is already true, then if not, reassign it to true? Or just have X = true?

In other words, which would run faster?

if (x == false) {
    x = true;
}

vs

x = true;

On one hand, the first program won't mutate the variable if it doesn't have to. On the other hand, the second program doesn't need to check what X is equal to; it dives straight in.
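A runnable sketch of the scenario described in the question (the function name and the simulated flip are mine, not the asker's), using the unconditional variant:

```javascript
// x is reset to true on a timer; other code may flip it in between.
let x = true;

function resetX() {
  x = true; // the unconditional variant from the question
}

// In real code this would be scheduled every 5 seconds:
// setInterval(resetX, 5000);

// Simulate one cycle: something flips x, then the reset fires.
x = false;
resetX();
console.log(x); // true
```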

+6  A: 
  • It almost never matters. Write the code that is easiest to understand and maintain, and only optimize if it proves necessary.
  • The best way to be sure is to test it. Profile your code.
  • Which is faster might depend on the browser.
  • Which is faster depends on whether the variable is usually true or usually false.
  • Having said that, I'd guess in most scenarios setting a variable without testing it will be faster.
Mark Byers
Agreed. I'd put my money on just changing it 100% of the time rather than testing 100% of the time and changing it 50% of the time.
Mike M.
@Mike M.: and the 50% is an assumption that might not hold; I've just added that point. I see cwap has made the same point too.
Mark Byers
If 'x' is not just a simple variable, but for example a "class" or a large array, it would be better to check and then set.
Aaron Harun
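Aaron Harun's point about expensive assignments can be sketched like this (a lazy-initialization variant of check-then-set; the names and the array size are illustrative, not from the thread):

```javascript
// When the assignment itself is costly (e.g. allocating a large array),
// a cheap test can guard against doing the expensive work redundantly.
let cache = null;

function ensureCache() {
  if (cache === null) {                    // cheap test...
    cache = new Array(1000000).fill(0);    // ...guards an expensive allocation
  }
  return cache;
}

const a = ensureCache(); // allocates
const b = ensureCache(); // reuses the existing array
console.log(a === b);    // true
```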
+3  A: 

Really depends on your data :)

If x == false 90% of the time, then a straight assignment to x would be faster.

This is one of those places where you probably don't want to worry about efficiency, and if you really do, profile it.

cwap
+1 for thinking about the frequency of false vs true.
Mark Byers
+1  A: 

The efficiency you are trying to attain by this is minute compared to the efficiency gained from the quality of your overall design.

Babiker
+1  A: 

Disclaimer/Warning:

This is a micro-optimization, and will never affect the efficiency of your program in a way that is measurable by users. If you turn off all compiler optimizations, and run an excellent profiler, you may be able to quantify the effects - but no user will ever notice.

This is especially true for your situation, where the code in question is only run every few seconds. The time spent profiling would probably be better spent improving other parts of your application.

Also, in these situations readability should always prevail over non-bottleneck micro-optimizations (although my answer below takes only runtime efficiency into account, as requested). Therefore my recommended code for you to use in this situation is x=true, since it's the easiest to read and understand.

Finally, if adding the check will improve speed, the compiler probably already knows that and will do it for you, so you can't go wrong with x=true (that's why you should turn off optimizations before running the profiler).


Answer:

The only true way to figure this out is by profiling. You may find that the 0 test (x==false) basically takes no time at all, and therefore it is worth including due to the time it saves when x turns out to be true. Or you may find that the test takes long enough that it wastes too much time when x turns out to be false.

My guess is that the test is unnecessary. That's because zero-testing and other bitwise operations (AND, OR, etc.) are all so fast that I usually treat them as taking the same elementary amount of time. And if a zero-test takes the same amount of time as an OR operation (setting to true), then the zero-test is a redundant waste of time. Profiling could prove me wrong, of course, and my guess is based on loose assumptions about bitwise operations, so if you choose to run a profiler and figure this out, I'd definitely be interested in the results.

Cam
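In the spirit of Cam's suggestion to profile, a rough micro-benchmark might look like the sketch below. The function names and iteration count are mine; results vary by engine, and for timings this small the JIT and loop overhead can easily dominate, so treat the numbers as a sanity check, not a verdict.

```javascript
// Variant 1: test, then assign only when needed.
function benchCheckedAssign(iterations) {
  let x = false;
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    x = (i % 2 === 0);     // simulate x flipping between resets
    if (x === false) {
      x = true;
    }
  }
  return Date.now() - start; // elapsed milliseconds
}

// Variant 2: assign unconditionally.
function benchPlainAssign(iterations) {
  let x = false;
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    x = (i % 2 === 0);
    x = true;
  }
  return Date.now() - start;
}

const N = 10000000;
console.log('test-then-set:', benchCheckedAssign(N), 'ms');
console.log('plain assign: ', benchPlainAssign(N), 'ms');
```

Note that a sufficiently aggressive optimizer may eliminate the dead stores entirely, which is exactly why Cam's disclaimer about turning off optimizations before profiling applies here.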