I have a set of bit flags that are used in a program I am porting from C to C++.

To begin...

The flags in my program were previously defined as:

/* Define feature flags for this DCD file */
#define DCD_IS_CHARMM       0x01
#define DCD_HAS_4DIMS       0x02
#define DCD_HAS_EXTRA_BLOCK 0x04

...Now I've gathered that #defines for constants (versus class constants, etc.) are generally considered bad form.

This raises questions about how best to store bit flags in C++, and why C++ doesn't support assigning binary values to an int the way it allows hex values to be assigned (via "0x"). These questions are summarized at the end of this post.

I could see one simple solution is to simply create individual constants:

namespace DCD {
   const unsigned int IS_CHARMM = 1;
   const unsigned int HAS_4DIMS = 2;
   const unsigned int HAS_EXTRA_BLOCK = 4;
};

Let's call this idea 1.

Another idea I had was to use an integer enum:

namespace DCD {
   enum e_Feature_Flags {
      IS_CHARMM = 1,
      HAS_4DIMS = 2,
      HAS_EXTRA_BLOCK = 8
   };
};

But one thing that bothers me about this is that it seems less intuitive when it comes to higher values, e.g.:

namespace DCD {
   enum e_Feature_Flags {
      IS_CHARMM = 1,
      HAS_4DIMS = 2,
      HAS_EXTRA_BLOCK = 8,
      NEW_FLAG = 16,
      NEW_FLAG_2 = 32,
      NEW_FLAG_3 = 64,
      NEW_FLAG_4 = 128
   };
};

Let's call this approach option 2.

I'm considering using Tom Torf's macro solution:

#define B8(x) ((int) B8_(0x##x))

#define B8_(x) \
( ((x) & 0xF0000000) >> ( 28 - 7 ) \
| ((x) & 0x0F000000) >> ( 24 - 6 ) \
| ((x) & 0x00F00000) >> ( 20 - 5 ) \
| ((x) & 0x000F0000) >> ( 16 - 4 ) \
| ((x) & 0x0000F000) >> ( 12 - 3 ) \
| ((x) & 0x00000F00) >> (  8 - 2 ) \
| ((x) & 0x000000F0) >> (  4 - 1 ) \
| ((x) & 0x0000000F) >> (  0 - 0 ) )

converted to inline functions, e.g.

#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>
....

/* TAKEN FROM THE C++ FAQ LITE [39.2]... */
class BadConversion : public std::runtime_error {
 public:
   BadConversion(std::string const& s)
     : std::runtime_error(s)
     { }
};

inline unsigned int convertToUI(std::string const& s)
{
   std::istringstream i(s);
   unsigned int x;
   if (!(i >> std::hex >> x))  /* parse as hex so the "0x" prefix works */
     throw BadConversion("convertToUI(\"" + s + "\")");
   return x;
}
/** END CODE **/

inline unsigned int B8(std::string x) {
   unsigned int my_val = convertToUI(x.insert(0, "0x"));
   return ((my_val) & 0xF0000000) >> ( 28 - 7 ) |
          ((my_val) & 0x0F000000) >> ( 24 - 6 ) |
          ((my_val) & 0x00F00000) >> ( 20 - 5 ) |
          ((my_val) & 0x000F0000) >> ( 16 - 4 ) |
          ((my_val) & 0x0000F000) >> ( 12 - 3 ) |
          ((my_val) & 0x00000F00) >> (  8 - 2 ) |
          ((my_val) & 0x000000F0) >> (  4 - 1 ) |
          ((my_val) & 0x0000000F) >> (  0 - 0 );
}

namespace DCD {
   enum e_Feature_Flags {
      IS_CHARMM       = B8("00000001"),
      HAS_4DIMS       = B8("00000010"),
      HAS_EXTRA_BLOCK = B8("00000100"),
      NEW_FLAG        = B8("00001000"),
      NEW_FLAG_2      = B8("00010000"),
      NEW_FLAG_3      = B8("00100000"),
      NEW_FLAG_4      = B8("01000000")
   };
};

Is this crazy? Or does it seem more intuitive? Let's call this choice 3.

So to recap, my over-arching questions are:

1. Why doesn't C++ support a "0b" prefix for binary literals, similar to "0x" for hex?
2. Which is the best style for defining flags:
i. Namespace-wrapped constants.
ii. Namespace-wrapped enum of unsigned ints assigned directly.
iii. Namespace-wrapped enum of unsigned ints assigned using a readable binary string.

Thanks in advance! And please don't close this thread as subjective, because I really want help deciding on the best style and understanding why C++ lacks built-in binary literals.


EDIT 1

A bit of additional info. I will be reading a 32-bit bitfield from a file and then testing it with these flags. So bear that in mind when you post suggestions.
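
Roughly, the intended use looks like the sketch below (the raw-binary read and the native-endian 32-bit layout are assumptions for illustration, not taken from the actual file format):

// Sketch: read the 32-bit flag word from a binary stream, then test it.
// Assumes the file was opened with std::ios::binary and native endianness.
#include <cstdint>
#include <fstream>

bool read_feature_flags(std::ifstream& in, std::uint32_t& flags)
{
    in.read(reinterpret_cast<char*>(&flags), sizeof flags);
    return !in.fail();
}

// ...later:
//    if (flags & DCD::HAS_EXTRA_BLOCK) { /* handle the extra block */ }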

A: 

I guess the bottom line is that it's not really necessary.

If you just want to use binary for flags, the approach below is how I typically do it. After the original definitions you never have to worry about looking at the "messier" larger powers of 2 you mentioned:

int FLAG_1 = 1;
int FLAG_2 = 2;
int FLAG_3 = 4;
...
int FLAG_N = 256;

You can easily check them with (note the parentheses; == binds more tightly than &):

if ((someVariable & FLAG_3) == FLAG_3) {
   // the flag is set
}

And by the way, depending on your compiler (I'm using the GNU GCC compiler), it may support "0b":
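
For instance, something along these lines (a sketch of the GCC extension being referred to; it is accepted only by sufficiently recent g++ and is not standard C++):

unsigned int flags = 0b0101;   // GCC binary-constant extension; same value as 0x5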

Note: edited to answer the question.

KennyCason
However, this is a compiler-specific, nonportable extension. It is not part of C++.
James McNellis
Using g++ I get the error `invalid suffix "b0101" on integer constant` ... so obviously the use of "0b" in assignment is NOT cross-compiler safe and isn't officially supported/an acceptable solution.
Jason R. Mick
Interesting; before this I thought it was in the standard and that compilers had chosen to ignore it. I wasn't aware it was the other way around.
KennyCason
Oh and since you're using g++ as well, my version is `g++ (GCC) 4.1.2 20080704 (Red Hat 4.1.2-44)` , and I'm compiling using simply `g++ main.cc`
Jason R. Mick
gcc decided to support it at some point, although my v4.2 doesn't like it either. http://gcc.gnu.org/onlinedocs/gcc/Binary-constants.html#Binary-constants
sharth
+9  A: 

Binary literals are one of those things that have been discussed off and on over the years, but as far as I know nobody has ever written up a serious proposal to get them into the standard, so it's never really gotten past the stage of talking about it.

In C++0x, instead of adding binary literals directly, they've added a much more general mechanism for user-defined literals, which you could use to support binary, or base 64, or other kinds of things entirely. The basic idea is that you specify a number (or string) literal followed by a suffix, and you can define a function that will receive that literal and convert it to whatever form you prefer (and you'll be able to maintain its status as a "constant" too...).
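
For illustration, here is a minimal sketch of what that looks like in C++11 as eventually standardized (the `_b` suffix and the helper function are invented for this example, not part of any library):

// Hypothetical binary suffix via a raw literal operator (C++11).
// The digits arrive as a string, so they can be folded at compile time;
// recursion keeps each function a single return statement, as C++11 requires.
constexpr unsigned long long to_binary(const char* s, unsigned long long acc)
{
    return *s == '\0' ? acc : to_binary(s + 1, (acc << 1) | (*s - '0'));
}

constexpr unsigned long long operator"" _b(const char* digits)
{
    return to_binary(digits, 0);   // assumes the digits are only 0s and 1s
}

static_assert(1010_b == 10, "binary literal sketch");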

As to which to use: I'd generally prefer a variation of option 2: an enum with initializers in hex:

namespace DCD {
   enum e_Feature_Flags {
      IS_CHARMM = 0x1,
      HAS_4DIMS = 0x2,
      HAS_EXTRA_BLOCK = 0x8,
      NEW_FLAG = 0x10,
      NEW_FLAG_2 = 0x20,
      NEW_FLAG_3 = 0x40,
      NEW_FLAG_4 = 0x80
   };
};

Another possibility is something like:

#define bit(n) (1<<(n))

enum e_feature_flags {
    IS_CHARMM = bit(0),
    HAS_4DIMS = bit(1),
    HAS_EXTRA_BLOCK = bit(3),
    NEW_FLAG = bit(4),
    NEW_FLAG_2 = bit(5),
    NEW_FLAG_3 = bit(6),
    NEW_FLAG_4 = bit(7)
};
Jerry Coffin
You mean `inline unsigned int bit(n) {1<<(n);}` right? ;)
Jason R. Mick
@Jason: No; the enumerator initializers have to be constant expressions; you can't call a function in a constant expression, so it has to be a macro.
James McNellis
hrmm... I guess that is an okay time to use a macro?
Jason R. Mick
@Jason: If you're masochistic you can use a template. You can always `#undef` the macro when you're finished with it. Personally I just write out the mask. The hex matches what you see in the debugger, and you won't be changing it very often anyway. Either it corresponds to a file format, in which case you'll essentially never change it, or it doesn't, in which case you might as well use bitfields and not bother with any masks.
Potatoswatter
@Jason: Just one minor addition to what @James McNellis already pointed out: in C++0x, you should be able to use a `constexpr` function.
Jerry Coffin
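(For reference, a minimal sketch of that constexpr approach under C++11; the names here are illustrative, not from this thread:)

constexpr unsigned int bit(unsigned int n) { return 1u << n; }  // usable in constant expressions

enum e_feature_flags {
    IS_CHARMM       = bit(0),
    HAS_4DIMS       = bit(1),
    HAS_EXTRA_BLOCK = bit(3)   // mirrors the skipped bit in the question
};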
+6  A: 

With option two, you can use a left shift, which is perhaps a bit less "unintuitive":

namespace DCD { 
   enum e_Feature_Flags { 
      IS_CHARMM =        1, 
      HAS_4DIMS =       (1 << 1), 
      HAS_EXTRA_BLOCK = (1 << 2), 
      NEW_FLAG =        (1 << 3), 
      NEW_FLAG_2 =      (1 << 4), 
      NEW_FLAG_3 =      (1 << 5), 
      NEW_FLAG_4 =      (1 << 6) 
   }; 
};
James McNellis
This is the way I do it all the time in embedded C ...
penguinpower
Jerry's approach using `bit(n)` is a bit more intuitive when reading the code visually, but yours doesn't use macros. Gah, I can only pick one answer... both of you offered good suggestions. Any insight into why there's never been a proposal to standardize "0b" or some similar form of binary literal?
Jason R. Mick
+3  A: 

Just as a note, Boost (as usual) provides an implementation of this idea.
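(Presumably this refers to the BOOST_BINARY macro from `boost/utility/binary.hpp`; a minimal sketch of how it might be used follows. It expands to an integral constant expression, so it can serve as an enum initializer:)

#include <boost/utility/binary.hpp>

enum e_Feature_Flags {
    IS_CHARMM       = BOOST_BINARY( 0001 ),
    HAS_4DIMS       = BOOST_BINARY( 0010 ),
    HAS_EXTRA_BLOCK = BOOST_BINARY( 1000 )
};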

sharth
While that is indeed tempting, Boost isn't official c++ and I feel like dragging it into my code project would introduce possible dependency/platform restrictions that I don't want to deal with as a time-limited programmer who is self-educated with c++. Further, my coworkers are even more novice to the world of programming than I am (I was a CE, so at least had classes in Java and prog. fundamentals w/ visual basic) so I have little confidence they would be able to maintain my boost-driven code....
Jason R. Mick
Don't be afraid of libraries. Especially Boost, which is *very* cross-platform.
Steve M
+2  A: 

Why not use a bitfield struct?

struct preferences {
    unsigned int likes_ice_cream : 1;
    unsigned int plays_golf : 1;
    unsigned int watches_tv : 1;
    unsigned int reads_stackoverflow : 1;
};

struct preferences fred;

fred.likes_ice_cream = 1;
fred.plays_golf = 0;
fred.watches_tv = 0;
fred.reads_stackoverflow = 1;

if (fred.likes_ice_cream == 1)
    /* ... */
Dave Mooney
Jason R. Mick
One could also say you are reading a 32-bit bitfield from a file.
Dave Mooney
@Fred - fixed..
Dave Mooney
Very good! And yes, I do like ice cream!
Fred Larson
A: 

What's wrong with hex for this use case?

enum Flags {
    FLAG_A = 0x00000001,
    FLAG_B = 0x00000002,
    FLAG_C = 0x00000004,
    FLAG_D = 0x00000008,
    FLAG_E = 0x00000010,
    // ...
};
Zack