views: 287 · answers: 6

I have been reading over some code lately and came across some lines such as:

somevar &= 0xFFFFFFFF;

What is the point of ANDing with a mask that has all bits set? Doesn't it just equal somevar in the end?

+14  A: 

"somevar" could be a 64-bit variable, this code would therefore extract the bottom 32 bits.

edit: if it's a 32-bit variable, I can think of other reasons, but they are much more obscure:

  • the constant 0xFFFFFFFF comes from automatically generated code
  • someone is trying to trick the compiler into preventing something from being optimized
  • someone intentionally wants a no-op line to be able to set a breakpoint there during debugging.
Jason S
+3  A: 

I suppose it depends on the length of somevar. This would, of course, not be a no-op if somevar were a 64-bit int. Or, if somevar is some type with an overloaded operator&=.

rlbond
It wouldn't be a no-op anyway (barring compiler optimisation), and while side effects are bad, they are used by some people.
Matthew Scharley
+5  A: 

Indeed, this wouldn't make sense if somevar is of type int (32-bit integer). If it is of type long (64-bit integer), however, then this would mask the upper (most significant) half of the value.

Note that a long value is not guaranteed to be 64 bits, but typically is on a 32-bit computer.

Noldorin
+1 for mentioning the var type
Dervin Thunk
You can't assume that int is 32 bits or that long is 64 bits. (Perhaps you understand this and just lazy wording makes your answer imply these are correct?)
Roger Pate
@R. Pate: Yeah, this is true. I say this because of the current predominance of 32-bit systems, in which this is the case, and moreover because the code was most likely written for a 32-bit system.
Noldorin
It's a property of the compiler/implementation, not the computer running the program. Some compilers even let you change the size of data types within the range allowed.
Roger Pate
@R. Pate: As far as I understand, the compiler typically bases it on the processor architecture. Also, note that Java/C# specifically indicate that "int" is a 32-bit integer and "long" is a 64-bit integer. I guess in C/C++ it's more convention than anything.
Noldorin
It's not even convention. long is 32 bits, not 64, on both GCC and MSVC on 32 bit Intel systems.
Steve Jessop
So long is the same size as int? Huh? Maybe I'm just too accustomed to C#...
Noldorin
Correct, long is the same size as int in those implementations. This does, as you'd expect, occasionally cause C++ programmers to rend their garments and scream "Why is there no 64bit integer in the standard?". In practice, you can always write "nearly portable" code that uses stdint.h, on the assumption that if it isn't already available, it's trivial to implement for any given platform.
Steve Jessop
@onebyone: Thanks for clarifying that.
Noldorin
+1  A: 

Sometimes the size of long is 32 bits, sometimes it's 64 bits, sometimes it's something else. Maybe the code is trying to compensate for that: either the mask does nothing (if long is 32 bits) or it masks out the rest of the value and keeps only the lower 32 bits. Of course, this wouldn't really make sense, because if that were desired, it would have been easier to just use an int.

Carson Myers
+2  A: 

Yes, almost certainly to truncate to the lower 32 bits in a 64-bit environment.

Tiberiu Hajas
+2  A: 

If the code fragment was C++, then the &= operator might have been overloaded so as to have an effect not particularly related to bitwise AND. Granted, that would be a nasty, evil, dirty thing to do...

RBerteig