I am curious why bit fields of the same data type take less space than bit fields of mixed data types.
struct xyz
{
    int x : 1;
    int y : 1;
    int z : 1;
};
struct abc
{
    char x : 1;
    int y : 1;
    bool z : 1;
};
sizeof(xyz) == 4, but sizeof(abc) == 12.
I am using VS 2005 on a 64-bit x86 machine.
An answer at the machine/compiler level would be great.