Is there any advantage to using int vs. varbinary for storing bit masks, in terms of performance or flexibility?
For my purposes, I will always be doing reads on these bit masks (no writes or updates).
Well, considering that an int takes less storage space and is generally a little easier to work with, I'm not sure why you'd use a varbinary.
You should definitely use an INT (if you need 32 flags) or BIGINT (for 64 flags). If you need more flags, you could use VARBINARY, but you should probably also ask yourself why your application needs so many flags.
Besides, with an integral type you can use the standard bitwise operators directly, without first converting a byte array to an integer.
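For example, a mask stored in an INT can be tested right in the WHERE clause. This is a minimal sketch; the dbo.Users table, its Flags column, and the flag values are hypothetical:

```sql
-- Illustrative flag values, one bit each.
DECLARE @ReadOnly int = 1;  -- bit 0
DECLARE @Hidden   int = 2;  -- bit 1
DECLARE @Archived int = 4;  -- bit 2

-- Rows where the Hidden flag is set:
SELECT Id, Flags
FROM dbo.Users
WHERE Flags & @Hidden = @Hidden;

-- Rows where either ReadOnly or Archived is set:
SELECT Id
FROM dbo.Users
WHERE Flags & (@ReadOnly | @Archived) <> 0;
```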
It is generally considered preferable to use a set of bit columns instead of a bit mask. They get packed together on the page (SQL Server stores up to eight bit columns in a single byte), so they won't take any more room. Although I too always seem to go with an int or bigint column to avoid typing all the column names, with IntelliSense I would probably go with the bit columns.
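To make that concrete, here is a sketch of the bit-column approach; the table and column names are made up for illustration:

```sql
-- Named BIT columns instead of a mask; SQL Server packs up to
-- eight BIT columns into a single byte on the data page.
CREATE TABLE dbo.UserFlags
(
    UserId     int NOT NULL PRIMARY KEY,
    IsReadOnly bit NOT NULL DEFAULT 0,
    IsHidden   bit NOT NULL DEFAULT 0,
    IsArchived bit NOT NULL DEFAULT 0
);

-- Each flag reads as a self-documenting predicate:
SELECT UserId
FROM dbo.UserFlags
WHERE IsHidden = 1;
```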
I usually agree with @hainstech's answer of using bit fields, because you can explicitly name each one to indicate what it stores. However, I haven't seen a practical approach to doing bitmask comparisons with bit fields. With SQL Server's bitwise operators (&, |, etc.) it's easy to check whether a whole set of flags is set; doing the same with equality operators against a large number of bit fields is a lot more work.
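To illustrate the difference, compare one bitwise test against the equivalent chain of equality checks (reusing the hypothetical tables and flags from the examples above):

```sql
-- INT mask: a single test covers any combination of flags.
-- 1 | 2 | 4 = 7, i.e. ReadOnly, Hidden, and Archived all set.
SELECT Id
FROM dbo.Users
WHERE Flags & 7 = 7;

-- BIT columns: one predicate per flag.
SELECT UserId
FROM dbo.UserFlags
WHERE IsReadOnly = 1 AND IsHidden = 1 AND IsArchived = 1;
```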