I have an enum declaration like this:
public enum Filter
{
a = 0x0001,
b = 0x0002
}
What does that mean? They are using this to filter an array.
It means those are the integer values assigned to those names. Enums are basically just named numbers; you can cast between an enum value and its underlying integer type.
For example:
public enum Colour
{
Red = 1,
Blue = 2,
Green = 3
}
Colour green = (Colour) 3;
int three = (int) Colour.Green;
By default an enum's underlying type is int, but you can use any of byte, sbyte, short, ushort, int, uint, long or ulong:
public enum BigEnum : long
{
BigValue = 0x5000000000 // Couldn't fit this in an int
}
It just means that if you do Filter.a, you get 1, and Filter.b is 2.
The weird hex notation is just that, notation.
EDIT: Since this is a 'filter' the hex notation makes a little more sense.
By writing 0x1, you specify the following bit pattern:
0000 0001
And 0x2 is:
0000 0010
This makes it clearer how the filter is meant to be used.
So for example, if you wanted to filter out data that has the lower 2 bits set, you could do:
Filter.a | Filter.b
which would correspond to:
0000 0011
The hex notation makes the concept of a filter clearer (for some people). For example, it's relatively easy to figure out the binary of 0x83F0 by looking at it, but much more difficult for 33776 (the same number in base 10).
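Here is a small sketch of that (the Filter enum is the one from the question; Convert.ToString with base 2 is just used to print the bit patterns):
using System;

public enum Filter
{
    a = 0x0001,
    b = 0x0002
}

class Program
{
    static void Main()
    {
        // Combine the two flags with bitwise OR: 0000 0001 | 0000 0010 == 0000 0011
        int combined = (int)Filter.a | (int)Filter.b;

        // Print each value in binary so the individual bits are visible
        Console.WriteLine(Convert.ToString((int)Filter.a, 2).PadLeft(8, '0')); // 00000001
        Console.WriteLine(Convert.ToString((int)Filter.b, 2).PadLeft(8, '0')); // 00000010
        Console.WriteLine(Convert.ToString(combined, 2).PadLeft(8, '0'));      // 00000011
    }
}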
Those look like they are bit masks of some sort. But their actual values are 1 and 2...
You can assign values to enums such as:
enum Example {
a = 10,
b = 23,
c = 0x00FF
}
etc...
It could mean anything. We need to see more code than that to be able to understand what it's doing.
0x001 is the number 1. Any time you see the 0x prefix, it means the programmer has entered the number in hexadecimal.
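For instance (a trivial sketch, with made-up variable names):
using System;

class HexDemo
{
    static void Main()
    {
        int fromHex = 0x001; // hexadecimal literal
        int fromDec = 1;     // the same value written in decimal
        Console.WriteLine(fromHex == fromDec); // prints True
    }
}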
Using hexadecimal notation like that usually indicates that there may be some bit manipulation. I've used this notation often when dealing with exactly that, for the very reason you asked this question: the notation sort of pops out at you and says "Pay attention to me, I'm important!"
You can also use plain integers, or omit the values entirely: by default an enum assigns 0 to its first member and an incremented value to each subsequent member, and many developers use that to hit two targets with one arrow. My view is that if we still spell out raw bit values we might as well go back to binary rather than a fourth-generation language, but it is a handy technique when you need to play with bits, for example in masking or encryption/decryption code.
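For example, this enum compiles without any explicit values and still has well-defined numbers (the member names here are made up for illustration):
public enum Defaulted
{
    First,   // 0 by default
    Second,  // 1
    Third    // 2
}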
It's not clear what it is that you find unclear, so let's discuss it all:
The enum values have been given explicit numerical values. Each enum value is always represented as a numerical value for the underlying storage, but if you want to be sure what that numerical value is you have to specify it.
The numbers are written in hexadecimal notation. This is often used when you want each numerical value to contain a single set bit for masking; it's easier to see that a value has only one bit set when it's written as 0x8000 than when it's written as 32768.
In your example it's not as obvious as you have only two values, but for bit filtering each value represents a single bit so that each value is twice as large as the previous:
public enum Filter {
First = 0x0001,
Second = 0x0002,
Third = 0x0004,
Fourth = 0x0008
}
You can use such an enum to filter out single bits in a value:
if ((num & (int)Filter.First) != 0 && (num & (int)Filter.Third) != 0) {
    Console.WriteLine("First and third bits are set.");
}
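As an aside (not something shown in the question itself), when an enum is meant to be combined like this it is common to mark it with the [Flags] attribute and test bits with HasFlag; a minimal sketch:
using System;

[Flags]
public enum Filter
{
    First  = 0x0001,
    Second = 0x0002,
    Third  = 0x0004,
    Fourth = 0x0008
}

class Program
{
    static void Main()
    {
        Filter num = Filter.First | Filter.Third;

        // HasFlag checks whether all the bits of its argument are set in num
        if (num.HasFlag(Filter.First) && num.HasFlag(Filter.Third))
        {
            Console.WriteLine("First and third bits are set.");
        }

        // [Flags] also makes ToString list the set members: "First, Third"
        Console.WriteLine(num);
    }
}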