I have an enum declaration like this:

public enum Filter
{
  a = 0x0001,
  b = 0x0002
}

What does that mean? They are using this to filter an array.

+5  A: 

It means they're the integer values assigned to those names. Enums are basically just named numbers. You can cast between the underlying type of an enum and the enum value.

For example:

public enum Colour
{
    Red = 1,
    Blue = 2,
    Green = 3
}

Colour green = (Colour) 3;
int three = (int) Colour.Green;

By default an enum's underlying type is int, but you can use any of byte, sbyte, short, ushort, int, uint, long or ulong:

public enum BigEnum : long
{
    BigValue = 0x5000000000 // Couldn't fit this in an int
}
Jon Skeet
Why can't we use integers? What was the need to use hex notation?
Broken Link
You can use integers (as shown in my first example) - but you can also use hex. It's just a choice.
Jon Skeet
There probably wasn't a need to use hex; they probably just used it to make more evident to the programmer what the values are used for.
Polaris878
Does Filter allow its values to be combined (e.g. myFilter = a | b)? If so, you need values like 00000001b or 00000010b so that you can combine them. Using hex makes it easier to express these values (0x08, 0x10, 0x20 rather than 8, 16, 32). You can also combine 0x10 and 0x20 into 0x30, which is easier to see than without hex notation.
It's not evident. It's confusing I guess. At least to me.
Broken Link
@RJ... Yeah it can be confusing, but it is meant to show that the values are probably used for a specific purpose out of the ordinary.
Polaris878
If an enum is marked up with the [Flags] attribute then it's possible to do bitwise arithmetic on it. In that situation it can make sense to use hex notation. See http://msdn.microsoft.com/en-us/library/system.flagsattribute.aspx .
Jeremy McGee
@RJ: Usually hex is preferable to decimal when working with bit-based flags, since it is much easier to read flags in hex (i.e. if you are going to later perform bitwise operations, it is easier to read 0x10, 0x20, 0x40, 0x80 than 16, 32, 64, 128). Such flags are useful if you want to compactly represent a set of flags using only 1 bit per flag.
Brian
"you can use integers...but you can also use hex."You're implying that there's some sort of difference. Hex, octal, decimal, binary, hell base36, they're all just different representions of the same value, in this case an integer value. The fact that they're using hex notation coupled with the initial values that they've chosen (1, 2, i.e. 0001, 0010) suggests that they intend to use the enum as a bitfield. Why the need for the hex notation? Probably to better convey the intention of the construct.
RG
Further, a .NET int is actually an Int32, right? A 32-bit signed int can hold values between -2,147,483,648 and 2,147,483,647 (0x80000000 to 0x7FFFFFFF), so your example value of 0x50000000 (1,342,177,280) should fit nicely, shouldn't it?
RG
Oops, yes - I meant to include more zeros.
Jon Skeet
@RG - You really had me confused for a while there... I thought your comment was from RJ and wondered what had happened!!
Greg Beech
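To tie the comment thread above together, here is a minimal sketch of the [Flags] pattern being discussed; the [Flags] attribute, the member names, and the FlagsDemo wrapper are assumptions for illustration, not part of the original Filter type:

using System;

[Flags]
public enum Filter
{
    None = 0x0000,
    A    = 0x0001,  // bit 0
    B    = 0x0002,  // bit 1
    C    = 0x0004   // bit 2
}

class FlagsDemo
{
    static void Main()
    {
        // Combine flags with bitwise OR.
        Filter combined = Filter.A | Filter.B;      // 0x0003

        // Test individual flags with bitwise AND (or Enum.HasFlag).
        bool hasA = (combined & Filter.A) != 0;     // true
        bool hasC = combined.HasFlag(Filter.C);     // false

        Console.WriteLine("{0}: hasA={1}, hasC={2}", combined, hasA, hasC);
    }
}

With [Flags] applied, combined.ToString() prints "A, B" rather than the raw number, which is another reason the attribute is worth adding when an enum is used as a bit field.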
+3  A: 

It just means that if you use Filter.a, you get 1; Filter.b is 2.

The weird hex notation is just that, notation.

EDIT: Since this is a 'filter' the hex notation makes a little more sense.

By writing 0x1, you specify the following bit pattern:

0000 0001

And 0x2 is:

0000 0010

This makes it clearer how to use a filter.

So for example, if you wanted to filter out data that has the lower 2 bits set, you could do:

Filter.a | Filter.b

which would correspond to:

0000 0011

The hex notation makes the concept of a filter clearer (for some people). For example, it's relatively easy to figure out the binary of 0x83F0 by looking at it, but much more difficult for 33776 (the same number in base 10).

samoz
Why can't we use integers? What was the need to use hex notation?
Broken Link
@RJ - If you plan on being a developer, you'd better understand hex notation as easily as you understand decimal.
Greg Beech
I don't agree with you. There is absolutely no need to understand hex notation. We are not living in a world of 1s and 0s; we are in the fourth generation now! And there is definitely a different way of doing things.
Broken Link
@RJ - LOL ... that's funny
Greg Beech
@RJ - Greg B is right. If you plan on being a developer, you'd better understand hex notation (and octal wouldn't hurt). Otherwise, you will never understand or be able to debug any code that executes a bitwise logical AND, OR, or XOR operation. Your idea that this is unnecessary is roughly equivalent to demanding that those operators be removed from the C++ and C# language standards.
Die in Sente
Somehow this seems strangely appropriate: http://thedailywtf.com/forums/47608/ShowPost.aspx
Greg Beech
RJ: Lol, I hope you're kidding
Janie
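To make the bit patterns described above visible, here is a small sketch; it reuses the two-member Filter enum from the question and uses Convert.ToString with base 2 purely for display:

using System;

public enum Filter
{
    a = 0x0001,
    b = 0x0002
}

class BitPatternDemo
{
    static void Main()
    {
        int a    = (int)Filter.a;
        int b    = (int)Filter.b;
        int both = (int)(Filter.a | Filter.b);

        // PadLeft only makes the leading zeros visible.
        Console.WriteLine(Convert.ToString(a, 2).PadLeft(8, '0'));    // 00000001
        Console.WriteLine(Convert.ToString(b, 2).PadLeft(8, '0'));    // 00000010
        Console.WriteLine(Convert.ToString(both, 2).PadLeft(8, '0')); // 00000011
    }
}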
A: 

Those look like they are bit masks of some sort. But their actual values are 1 and 2...

You can assign values to enums such as:

enum Example {
    a = 10,
    b = 23,
    c = 0x00FF
}

etc...

Polaris878
+2  A: 

Those are literal hexadecimal numbers.

JP Alioto
+2  A: 

It could mean anything. We need to see more code than that to be able to understand what it's doing.

0x0001 is the number 1. Any time you see the 0x prefix, it means the programmer has written the number in hexadecimal.

Lucas McCoy
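In other words, hex is just another way to write the same integer literal; a quick sketch (runnable as a C# top-level program):

using System;

// 0x-prefixed literals and decimal literals produce identical values.
int fromHex = 0x0001;
int fromDec = 1;

Console.WriteLine(fromHex == fromDec);           // True
Console.WriteLine(0x0010 == 16 && 0xFF == 255);  // True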
A: 

Using hexadecimal notation like that usually indicates that there may be some bit manipulation. I've used this notation often when dealing with this very thing, for the very reason you asked this question: this notation sort of pops out at you and says "Pay attention to me, I'm important!"

Brian
A: 

Well, we can use integers; in fact we can avoid assigning values at all, since by default an enum assigns 0 to its first member and an incremented value to each subsequent member. Many developers use hex values like this to hit two targets with one arrow:

  • Complicate the code, making it difficult to understand
  • "Faster performance", as hex codes are nearer to binary

My view is: if we are still doing this, why are we using a fourth-generation language? We might as well move back to binary.

That said, it is quite a good technique when playing with bits and in encryption/decryption work.

Ankur
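The point about default numbering above can be checked with a short sketch; the Direction enum here is made up purely for illustration:

using System;

public enum Direction   // no explicit values assigned
{
    North,  // 0 (first member defaults to zero)
    East,   // 1
    South,  // 2
    West    // 3 (each member is the previous value + 1)
}

class DefaultValuesDemo
{
    static void Main()
    {
        Console.WriteLine((int)Direction.North); // 0
        Console.WriteLine((int)Direction.West);  // 3
    }
}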
Exactly - my point.
Broken Link
"Faster the performance"? You've got to be kidding right? You know the IL is the same irrespective of the notation you use? And if you find hex hard to understand, you're really in the wrong profession. Just wait until you get some experience, and then come back and say that it's unnecessary to understand all the low level stuff. Ruby and Python et al aren't made out of magic fairy dust you know.
Greg Beech
+1  A: 

It's not clear what it is that you find unclear, so let's discuss it all:

The enum values have been given explicit numerical values. Each enum value is always represented as a numerical value in the underlying storage, but if you want to be sure what that numerical value is, you have to specify it.

The numbers are written in hexadecimal notation; this is often used when you want each numerical value to contain a single set bit, for masking. It's easier to see that the value has only one bit set when it's written as 0x8000 than when it's written as 32768.

In your example it's not as obvious, since you have only two values, but for bit filtering each value represents a single bit, so each value is twice as large as the previous one:

public enum Filter {
   First = 0x0001,
   Second = 0x0002,
   Third = 0x0004,
   Fourth = 0x0008
}

You can use such an enum to filter out single bits in a value:

int num = (int)(Filter.First | Filter.Third); // example value with the first and third flag bits set

if ((num & (int)Filter.First) != 0 && (num & (int)Filter.Third) != 0) {
   Console.WriteLine("First and third bits are set.");
}
Guffa