One thing I've never really understood is why in many libraries, constants are defined like this:

public static final int DM_FILL_BACKGROUND = 0x2;
public static final int DM_FILL_PREVIOUS = 0x3;
public static final int TRANSPARENCY_MASK = 1 << 1;
public static final int TRANSPARENCY_PIXEL = 1 << 2;

What's up with the 0x and << stuff? Why aren't people just using ordinary integer values?

+6  A: 

The bit shifting of 1 is usually for situations where you have non-exclusive values that you want to store.

For example, say you want to be able to draw lines on any side of a box. You define:

LEFT_SIDE   = 1 << 0  # binary 0001 (1)
RIGHT_SIDE  = 1 << 1  # binary 0010 (2)
TOP_SIDE    = 1 << 2  # binary 0100 (4)
BOTTOM_SIDE = 1 << 3  # binary 1000 (8)
                               ----
                               0111 (7) = LEFT_SIDE | RIGHT_SIDE | TOP_SIDE

Then you can combine them for multiple sides:

DrawBox (LEFT_SIDE | RIGHT_SIDE | TOP_SIDE) # Don't draw line on bottom.

The fact that they're using totally different bits means that they're independent of each other. ORing them gives 1 | 2 | 4, which equals 7, and you can test each individual bit with a bitwise AND.
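In Java, the language of the constants in the question, the same idea might look like the sketch below. The class and flag names are just illustrative, mirroring the pseudocode above:

```java
public class BoxFlags {
    public static final int LEFT_SIDE   = 1 << 0; // binary 0001
    public static final int RIGHT_SIDE  = 1 << 1; // binary 0010
    public static final int TOP_SIDE    = 1 << 2; // binary 0100
    public static final int BOTTOM_SIDE = 1 << 3; // binary 1000

    public static void main(String[] args) {
        // Combine several flags with bitwise OR:
        int sides = LEFT_SIDE | RIGHT_SIDE | TOP_SIDE; // 0111 = 7

        // Test an individual flag with bitwise AND:
        boolean drawLeft   = (sides & LEFT_SIDE)   != 0; // true
        boolean drawBottom = (sides & BOTTOM_SIDE) != 0; // false

        System.out.println(sides);      // 7
        System.out.println(drawLeft);   // true
        System.out.println(drawBottom); // false
    }
}
```

A hypothetical `drawBox(int sides)` method would then take a single `int` parameter instead of four booleans.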

If they were defined as 1, 2, 3 and 4, you'd either have to make one call per side or pass four separate parameters, one per side. Otherwise you couldn't tell LEFT combined with RIGHT (1 + 2 = 3) apart from TOP on its own (3), since both would add up to the same value.

The 0x stuff is just hexadecimal notation, which is easier to read as a binary bitmask (each hexadecimal digit corresponds exactly to four binary digits). You'll tend to see patterns like 0x01, 0x02, 0x04, 0x08, 0x10, 0x20 and so on, since they're the equivalent of a single 1 bit moving towards the most significant bit position: those values are binary 00000001, 00000010, 00000100, 00001000, 00010000, 00100000 and so on.

Aside: Once you get used to hex, you rarely have to worry about the 1 << n stuff. You can instantly recognise 0x4000 as binary 0100 0000 0000 0000. That's less obvious if you see the value 16384 in the code although some of us even recognise that :-)
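The hex/binary correspondence above can be checked directly. A small sketch (the class name is just for illustration):

```java
public class HexDemo {
    public static void main(String[] args) {
        // Each hex digit maps to exactly four binary digits,
        // so 0x4000 is the single bit 14 shifted into place:
        System.out.println(0x4000 == (1 << 14)); // true
        System.out.println(0x4000 == 16384);     // true

        // Prints 100000000000000: a 1 followed by fourteen 0s.
        System.out.println(Integer.toBinaryString(0x4000));
    }
}
```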

paxdiablo
Well said. I like the `1 << BitNo` format for flags because you know which bit it is by looking at the definition. I have a generic snippet that is defined as `<FlagsAttribute()> _ Public Enum _Flag As Integer` with 32 predefined flags.
dbasnett
+2  A: 

Regarding the << stuff: this is my preferred way.

When I need to define a constant with a 1 in bit position 2 and 0 in all other bits, I can define it as 4, 0x4 or 1<<2. In my opinion, 1<<2 is the most readable, and it states exactly what the constant is for.

BTW, all these ways give the same performance, since calculations are done at compile time.
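The three spellings really are the same compile-time constant; a shift of two constants is folded by the compiler, so there is no runtime cost. A minimal check (class and field names are illustrative):

```java
public class SameConstant {
    // All three are identical compile-time constants with the value 4.
    public static final int AS_DECIMAL = 4;
    public static final int AS_HEX     = 0x4;
    public static final int AS_SHIFT   = 1 << 2; // folded at compile time

    public static void main(String[] args) {
        System.out.println(AS_DECIMAL == AS_HEX && AS_HEX == AS_SHIFT); // true
    }
}
```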

Alex Farber
`1<<2` may well be more readable. It's certainly more _accurate_, given that the value is `4`, not `7` :-)
paxdiablo
I can't believe it! Of course, 4. Never mix beer and programming!
Alex Farber