I'm writing a programming language, and when I came across this question, my immediate thought was that languages should optimize booleans into bit flags for the programmer. This would keep all the benefits of speed and efficient memory usage while removing the maintenance burden and the possibility of errors caused by more complicated bit manipulation.
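To make concrete what I mean by the maintenance burden, here is a rough C sketch of the two styles; the flag names are just made up for illustration:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Hand-rolled bit flags: compact, but every access goes through a mask
   that the programmer has to define and keep in sync. */
#define OPT_VERBOSE (1u << 0)
#define OPT_CACHED  (1u << 1)
#define OPT_DIRTY   (1u << 2)

/* The plain-boolean equivalent: nothing to maintain, one byte per field. */
struct Options {
    bool verbose;
    bool cached;
    bool dirty;
};

int main(void) {
    uint8_t flags = 0;
    flags |= OPT_VERBOSE;            /* set a flag   */
    flags &= (uint8_t)~OPT_DIRTY;    /* clear a flag */
    if (flags & OPT_CACHED)          /* test a flag  */
        printf("cached (bit-flag version)\n");

    struct Options opts = {0};
    opts.verbose = true;             /* the same operations, with no masks in sight */
    opts.dirty = false;
    if (opts.cached)
        printf("cached (boolean version)\n");

    printf("flags=0x%02x verbose=%d\n", (unsigned)flags, (int)opts.verbose);
    return 0;
}
```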
The one case where you might not want this optimization is when there is only a single set of booleans being stored. With one copy you have 8 bits for the flags + 8 bit masks * 8 bits per mask = 72 bits, instead of 8 booleans * 8 bits per boolean = 64 bits. But as soon as you have even two copies of the booleans, it becomes 2 copies * 8 bits for flags + 8 masks * 8 bits per mask = 80 bits, versus 2 copies * 8 booleans * 8 bits per boolean = 128 bits. It seems like the few cases where plain booleans would be more storage-optimal would be easy to detect, so the compiler could simply skip the optimization there.
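Just to make the accounting concrete, here is a small C sketch of that count. It treats the eight masks as a stored table counted once, exactly as in my numbers above (in practice they would usually be compile-time constants), and it reproduces the 80-bit vs 128-bit figures for two copies:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Packed representation: all eight flags in a single byte. */
struct PackedFlags {
    uint8_t bits;
};

/* Unpacked representation: one byte per boolean on typical ABIs. */
struct PlainBools {
    bool a, b, c, d, e, f, g, h;
};

int main(void) {
    /* The eight masks are shared constants; they exist once, not per copy. */
    const uint8_t masks[8] = {1, 2, 4, 8, 16, 32, 64, 128};

    size_t copies = 2;
    size_t packed_bits = copies * sizeof(struct PackedFlags) * 8 + sizeof(masks) * 8;
    size_t plain_bits  = copies * sizeof(struct PlainBools) * 8;

    /* With copies = 2 this prints 80 vs 128; with copies = 1 it is 72 vs 64. */
    printf("packed: %zu bits, plain: %zu bits\n", packed_bits, plain_bits);
    return 0;
}
```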
Is there any reason that languages don't support this optimization? I looked around and it doesn't seem that any languages do (I may just not be looking in the right places).