hi
I'm facing the challenge of optimizing a database design through an automated mechanism. The database will contain tables that relate arbitrary data to an attribute, e.g. table A contains the IDs of all persons who clicked the buy button, table B contains the IDs of all persons with no chargeback, and so on.
So the starting point is a handful of very basic MainTables, e.g. A, B, C, D, which are composed via set-theory operations (union, intersection, complement) through some kind of GUI, resulting in new SubsetTables.
As more subsets are created, they increasingly depend on each other, e.g. E = (A and B), F = (A and B and C). Each subset is effectively a Boolean function: a person's ID is either in the set or not.
Now I'm trying to optimize the queries that fill and read my tables, so that F is computed not as (A join B join C) but as (E join C). That makes sense because E contains fewer rows, as it is already a subset of A and B.
So my question is: which mathematical or software algorithm, if any, can optimize this problem of many sets and subsets (maybe a few hundred), so that I can fill and query the tables efficiently?
I already thought about defining new subsets purely in terms of the MainSets and minimizing those GUI-created subset expressions with the Karnaugh-Veitch method, but that only gets me to expressions like (A and B and C). I don't know how to take the second optimization step that transforms (A and B and C) into (E and C), and with growing complexity this turns into the problem of finding the most efficient combination of (sub)sets.
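To make the transformation I mean concrete, here is a minimal sketch of the second step, assuming subsets are pure intersections of MainSets (unions and complements would need a full expression DAG). The function name `rewrite` and the `materialized` dictionary are my own illustration, not part of any existing library; it greedily reuses the largest already-materialized subset whose definition is contained in the target's, which is a heuristic, not guaranteed optimal:

```python
def rewrite(target, materialized):
    """Greedily rewrite an intersection of MainSets using
    already-materialized subsets.

    target:       frozenset of MainSet names, e.g. {"A", "B", "C"}
    materialized: dict mapping a subset-table name to the frozenset
                  of MainSet names it covers, e.g. {"E": {"A", "B"}}
    Returns a list of table names to join.
    """
    remaining = set(target)
    plan = []
    # Try the largest materialized subsets first (greedy cover).
    for name, terms in sorted(materialized.items(),
                              key=lambda kv: -len(kv[1])):
        # Only use a subset if all of its terms are still needed,
        # and it actually saves at least one join.
        if terms <= remaining and len(terms) > 1:
            plan.append(name)
            remaining -= terms
    # Whatever is left must come from the MainTables directly.
    plan.extend(sorted(remaining))
    return plan

# F = (A and B and C), with E = (A and B) already materialized:
print(rewrite(frozenset({"A", "B", "C"}),
              {"E": frozenset({"A", "B"})}))  # → ['E', 'C']
```

My problem is essentially doing this well across hundreds of interdependent subsets, where greedy choices like the above may not give the globally cheapest join plan.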
Thanks for any advice!