
Greetings, everybody. I have seen examples of this operation so many times that I am beginning to think I am getting something wrong about binary arithmetic. Is there any point in performing the following:

byte value = someAnotherByteValue & 0xFF;

I don't really understand this, because it does not seem to change anything. Thanks for the help.

P.S. I tried searching for information both here and elsewhere, but without success.

EDIT: Well, of course I assume that someAnotherByteValue is 8 bits long; the problem is that I don't get why so many people (I mean professionals) use such things in their code. For example, in SharpZlib there is:

     buffer_ |= (uint)((window_[windowStart_++] & 0xff |
     (window_[windowStart_++] & 0xff) << 8) << bitsInBuffer_);

where window_ is a byte buffer.
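To see the idiom from that snippet in isolation, here is a minimal Java sketch (the variable names mirror the snippet, but the values are made up for illustration; in Java, unlike C#, byte is signed, which is exactly the situation where the mask matters):

```java
byte[] window = { (byte) 0x9A, (byte) 0xBC };  // two bytes to pack into the bit buffer
int bitsInBuffer = 4;                          // bits already occupied (made-up value)
long buffer = 0xF;                             // existing buffer contents (made-up value)

// Without & 0xff, (byte) 0x9A would sign-extend to 0xFFFFFF9A
// and the OR would corrupt every bit above the low byte.
buffer |= (long) ((window[0] & 0xff | (window[1] & 0xff) << 8) << bitsInBuffer);
// buffer now holds 0xBC9AF
```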

+2  A: 

Nope, there is no use in doing this. If the value were wider than 8 bits, then the above statement would have some meaning. Otherwise, it is the same as the input.

Bragboy
Yeah, I understand it for values greater than 0xFF =)
n535
Can you give me some links/sources where this particular style is used? Just curious.
Bragboy
Added an example
n535
+1  A: 

If sizeof(someAnotherByteValue) is more than 8 bits and you want to extract the least significant 8 bits from someAnotherByteValue, then it makes sense. Otherwise, there is no use.
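For example (a hedged Java sketch; Java's short is 16 bits, so it stands in for the wider-than-8-bits case):

```java
short wide = 0x1234;      // a 16-bit value
int low8 = wide & 0xFF;   // keeps only the least significant 8 bits
// low8 == 0x34; without the mask, the whole value 0x1234 would be copied
```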

Naveen
Please give an example where it makes a difference. In every language where the line actually compiles, the compiler assigns the low-order 8 bits to `value` automatically. It might be justified as more self-documenting, however.
GregS
@GregS: if `someAnotherByteValue` is 16 bits, for example, and has a value greater than 255, then of course it makes a difference, doesn't it?
Naveen
@Naveen: I don't know of any language where it does make a difference, but then again I'm only familiar with a handful of languages.
GregS
A: 

No, there is no point as long as you are dealing with a byte. If value were a long, then the lower 8 bits of value would be the lower 8 bits of someAnotherByteValue and the rest would be zero.

In a language like C++, where operators can be overloaded, it's possible but unlikely that the & operator has been overloaded. That would be pretty unusual and bad practice, though.

Robin Welch
+3  A: 
uint s1 = (uint)(initial & 0xffff);

There is a point to this because uint is 32 bits, while 0xffff is 16 bits. The line selects the 16 least significant bits from initial.
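A minimal sketch of that masking in Java (the initial value is made up; Java has no uint, so the result stays in an int):

```java
int initial = 0x12345678;
int s1 = initial & 0xffff;  // selects the 16 least significant bits
// s1 == 0x5678; the upper half 0x1234 is discarded
```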

Dean Harding
Yep, my inattention. I'll change the example. +1
n535
+3  A: 

The most likely reason is to make the code more self-documenting. In your particular example, it is not the size of someAnotherByteValue that matters, but rather the fact that value is a byte. This makes the & redundant in every language I am aware of. But, to give an example of where it would be needed, if this were Java and someAnotherByteValue was a byte, then the line int value = someAnotherByteValue; could give a completely different result than int value = someAnotherByteValue & 0xff. This is because Java's long, int, short, and byte types are signed, and the rules for conversion and sign extension have to be accounted for.

If you always use the idiom value = someAnotherByteValue & 0xFF then, no matter what the types of the variable are, you know that value is receiving the low 8 bits of someAnotherByteValue.
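The Java case described above can be demonstrated directly (the bit pattern 0xC8 is chosen arbitrarily; any byte with the high bit set behaves this way):

```java
byte someAnotherByteValue = (byte) 0xC8;       // bit pattern 1100_1000, i.e. -56 as a signed Java byte
int withoutMask = someAnotherByteValue;        // sign-extended: -56 (0xFFFFFFC8)
int withMask = someAnotherByteValue & 0xff;    // zero-extended: 200 (0x000000C8)
```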

GregS
That actually makes perfect sense. Thank you.
n535
A: 

EDIT: Well, of course I assume that someAnotherByteValue is 8 bits long; the problem is that I don't get why so many people (I mean professionals) use such things in their code. For example, in Jon Skeet's MiscUtil there is:

uint s1 = (uint)(initial & 0xffff);

where initial is an int.

In this particular case, the author might be trying to convert an int to a uint. The & with 0xffff ensures that only the lowest 2 bytes are converted, even on a system where int is not 2 bytes wide.

Prashant
A: 

To be picky, there is no guarantee regarding a machine's byte size. There is no reason to assume, in an extremely portable program, that the architecture's byte is 8 bits wide. To the best of my memory, according to the C standard (for example), a char is one byte, short is at least as wide as char, int is at least as wide as short, long is at least as wide as int, and so on. Hence, theoretically there can be a compiler where a long is actually one byte wide, and that byte is, say, 10 bits wide. To ensure your program behaves the same on such a machine, you need to use this (seemingly redundant) coding style.

"Byte" @ Wikipedia gives examples of such peculiar architectures.

ysap