A really simple and old technique is to define a set of #define constants whose values correspond to bit positions, then use AND and OR operations to clear or set them as appropriate.
e.g.
#define BIT_0 0x0001
#define BIT_1 0x0002
#define BIT_2 0x0004
#define BIT_3 0x0008
#define BIT_4 0x0010
You then use them to set bit locations in a standard variable e.g.
int someVariable = 0;
someVariable = someVariable | BIT_1; //set bit 1 to 1. someVariable = 2
someVariable = someVariable & ~BIT_1; // clear bit 1. someVariable = 0
Not efficient or clever but easy to read.
Edit - added:
If you wish to restrict which bits are valid for use, then set up a mask value to apply as follows:
#define VALID_BIT_MASK 0x0009 // thus only bits 3 and 0 are valid
As an example
someVariable = someVariable | BIT_0 | BIT_2 | BIT_4; // someVariable now has value 21
someVariable = someVariable & VALID_BIT_MASK; // remove invalid bits, someVariable value is now 1
Obviously someVariable should be an unsigned type (unsigned char, unsigned int or unsigned long), but say that you want only the low 11 bits of a 16-bit unsigned int.
#define VALID_BIT_MASK 0x07FF // 0000 0111 1111 1111 in binary (no trailing semicolon in a #define)
someVariable = someVariable & VALID_BIT_MASK; //strips off unwanted bits.