I have a class of data with a very large number of binary properties--151 (!) to be exact--and I am concerned with how to model this data structurally. Despite the internal efficiency of storing bit-fields as bytes, my programming spidey senses tingle at the thought of creating a table with 151 bit-fields (in addition to the other properties).
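For concreteness, the naive design I'm leery of would look something like this (a minimal sketch; the table and column names are placeholders, not my actual schema):

    -- Hypothetical wide-table design: one boolean column per binary property.
    CREATE TABLE widget (
        widget_id integer PRIMARY KEY,  -- referenced as a foreign key elsewhere
        name      text NOT NULL,
        -- ... other non-binary properties ...
        prop_001  boolean NOT NULL DEFAULT false,
        prop_002  boolean NOT NULL DEFAULT false,
        -- ... 148 more boolean columns ...
        prop_151  boolean NOT NULL DEFAULT false
    );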
There will not be a large number of rows--perhaps 1,000--and once in production the data will not change very often.
I've thought of categorizing the data into disjoint subclasses and creating separate tables, but splitting the properties in this manner is impracticable, and even if it were possible the split certainly would not map cleanly onto the data subclasses. The other issue is that I'd like to keep all the data together and avoid field and/or row duplication. I have also considered using some custom binary format, but this is not workable because the key field in my data is used as a foreign key in other tables.
Queries will make heavy use of WHERE clauses to extract the relevant data (see the sketch below). I've considered packing the flags into multiple long or int fields, but I've rejected this as unworkable: I know of no bit-wise AND operators or functions in SQL, classification of the properties is problematic as noted above, and this method raises other major software engineering issues besides.
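To make the intended access pattern concrete, a typical query against the hypothetical wide table above would be along these lines (again, the column names are placeholders):

    -- Find rows matching a particular combination of flags.
    SELECT widget_id, name
    FROM   widget
    WHERE  prop_001
      AND  NOT prop_002
      AND  prop_151;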
I will be using PostgreSQL.
So, my question is: do I just make a table with a huge number of fields, or are there other methods compatible with the relational model?