You can do it with hexadecimal flags or enums as you suggested, but the most readable/self-documenting approach is probably to use what are called "bitfields" (for details, Google for C++ bitfields).
Use hex constants/enums and bitwise operations if you care about which particular bits mean what.
Otherwise, use C++ bitfields (but be aware that the ordering of bits in the integer will be compiler-dependent).
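For the first approach, a minimal sketch might look like this (the enum and variable names are just illustrative):

enum WallFlags
{
    WALL_NORTH = 0x01,   // bit 0
    WALL_SOUTH = 0x02,   // bit 1
    WALL_EAST  = 0x04,   // bit 2
    WALL_WEST  = 0x08    // bit 3
};

unsigned cell = 0;
cell |= WALL_NORTH | WALL_EAST;       // set two flags
if (cell & WALL_EAST) { /* ... */ }   // test a flag
cell &= ~WALL_NORTH;                  // clear a flag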
Yes, a good way is to use hexadecimal constants to represent the bit patterns. Then you use the bitwise operators to manipulate your 16-bit ints.
For example:
if(x & 0x01){} // tests if bit 0 is set using bitwise AND
x ^= 0x02; // toggles bit 1 (0 based) using bitwise XOR
x |= 0x10; // sets bit 4 (0 based) using bitwise OR
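Putting those together, a minimal compilable sketch could look like this (the values chosen are arbitrary; clearing a bit uses AND with the complement of the mask):

#include <cstdint>
#include <cstdio>

int main()
{
    std::uint16_t x = 0;   // all 16 flags start cleared

    x |= 0x10;             // set bit 4
    x ^= 0x02;             // toggle bit 1 (now on)
    x &= ~0x10;            // clear bit 4 again with AND + NOT

    if (x & 0x02)          // test bit 1
        std::printf("bit 1 is set: x = 0x%04X\n", static_cast<unsigned>(x));
    return 0;
}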
Learn your bitwise operators: &, |, ^, and ~ (bitwise NOT, not to be confused with logical !).
At the top of a lot of C/C++ files I have seen flags defined in hex to mask each bit.
#define ONE 0x0001
To see if a bit is turned on, you AND the value with its mask. To turn it on, you OR with the mask. To toggle it like a switch, you XOR with the mask.
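As a small sketch of that pattern (the macro names here are just placeholders):

#define MASK_ONE 0x0001
#define MASK_TWO 0x0002

unsigned short flags = 0;

flags |= MASK_ONE;             // turn bit 0 on
flags ^= MASK_TWO;             // toggle bit 1
if (flags & MASK_ONE)          // is bit 0 on?
{
    // ...
}
flags &= ~MASK_ONE;            // turn bit 0 off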
If you want to use bitfields then this is an easy way:
struct MAZENODE   // 16 one-bit flags
{
    bool backtrack_north:1;
    bool backtrack_south:1;
    bool backtrack_east:1;
    bool backtrack_west:1;
    bool solution_north:1;
    bool solution_south:1;
    bool solution_east:1;
    bool solution_west:1;
    bool maze_north:1;
    bool maze_south:1;
    bool maze_east:1;
    bool maze_west:1;
    bool walls_north:1;
    bool walls_south:1;
    bool walls_east:1;
    bool walls_west:1;
};
Then your code can just test each one for true or false.
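For instance, usage might look something like this (which fields you touch is of course up to you):

MAZENODE node = {};                  // all flags start false

node.walls_north = true;             // set a flag
node.walls_north = false;            // clear it
node.maze_east = !node.maze_east;    // toggle it

if (node.solution_west)              // test a flag
{
    // ...
}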
To manipulate sets of bits, you can also use std::bitset<N>. For example:

#include <bitset>
#include <cassert>

std::bitset<4*4> bits;       // 16 bits, all initially 0
bits[ 10 ] = false;          // still 0
bits.set(10);                // bit 10 -> 1
bits.flip();                 // invert every bit, so bit 10 -> 0
assert( !bits.test(10) );    // passes
I'm not a huge fan of bitset. It's just more typing, in my opinion, and it doesn't hide what you are doing anyway: you still have to & and | bits unless you are only touching a single bit. That may work for small groups of flags. Not that we need to hide what we are doing, either, but the intention of a class is usually to make something easier for its users, and I don't think this class accomplishes that.
Say, for instance, you have a flag system with 64 flags. If you want to test, I don't know, 39 of them in one if statement to see if they are all on, using bitfields is a huge pain: you have to type them all out. (Of course, I'm assuming you use only the bitfield functionality and don't mix and match methods.) Same thing with bitset. Unless I am missing something with the class, which is quite possible since I rarely use it, I don't see a way you can test all 39 flags unless you type out the whole thing or resort to "standard methods" (using enum flag lists or some defined mask covering those 39 bits together with the bitset's & operator). This can start to get messy depending on your approach.

And I know 64 flags sounds like a lot. Well, it is, depending on what you are doing. Personally speaking, most of the projects I'm involved with depend on flag systems, so 64 is not that unheard of, though 16~32 is far more common in my experience. I'm actually helping out on a project right now where one flag system has 640 bits. It's basically a privilege system, so it makes some sense to arrange them all together. However, admittedly, I would like to break that up a bit, but eh, I'm helping, not creating.
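For what it's worth, the mask-based workaround I mean might look roughly like this (a sketch only; the 64-bit type and the particular mask are just assumptions for illustration):

#include <cstdint>

// Hypothetical mask covering the 39 flags we care about (bits 0-38).
const std::uint64_t REQUIRED_FLAGS = 0x0000007FFFFFFFFFULL;

bool all_required_set(std::uint64_t flags)
{
    // All required flags are on iff masking leaves the mask unchanged.
    // A std::bitset version works the same way: (bits & mask) == mask.
    return (flags & REQUIRED_FLAGS) == REQUIRED_FLAGS;
}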