Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes large, to the point of being unusable (i.e. not fitting in the allotted memory). To overcome this, I split S into several non-intersecting subsets, S = S1 + S2 + ... + Sn, and build a Di for each subset. Using n structures is less efficient than using one, but at least this way I can fit into the memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the Di is much less than the size of D.
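For intuition, here is a toy illustration with a purely hypothetical size function that grows quadratically in |S| (the actual growth rate of f is not specified above); with any superlinear growth, an even split into n parts shrinks the combined size:

```python
# Hypothetical, for illustration only: assume size(f(S)) = |S|**2.
def structure_size(subset_len):
    return subset_len ** 2  # assumed superlinear growth

total = 100
n = 10
whole = structure_size(total)           # one structure over all of S
split = n * structure_size(total // n)  # n structures over even subsets
print(whole, split)  # the split version is n times smaller here
```

This only illustrates why splitting helps at all; it says nothing about which split is best, which is the actual question.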
However, it is still desirable to reduce n, i.e. the number of subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that each Si contains "similar" objects, because then f will produce a smaller output structure if the input objects are "similar enough" to each other.
The problem is that while the "similarity" of objects in S and the size of f(S) do correlate, there is no way to compute the latter other than actually evaluating f(S), and f is not particularly fast.
The algorithm I currently have is to iteratively add each next object from S into one of the Si, choosing the subset that yields the least possible (at this stage) increase in the combined size of the Di:
    for x in S:
        i = the i that minimizes size(f(Si + {x})) - size(f(Si))
        Si = Si + {x}
This gives practically useful results, but they are certainly pretty far from the optimum (i.e. the minimal possible combined size). It is also slow. To speed it up somewhat, I compute size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to the objects already in Si.
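A minimal Python sketch of this greedy loop, where `f_size` and `similar_enough` are hypothetical stand-ins for size(f(.)) and the cheap similarity pre-filter (neither is specified above):

```python
def greedy_partition(S, n, f_size, similar_enough):
    """Greedily assign each object to the subset whose f-size grows least."""
    subsets = [[] for _ in range(n)]
    sizes = [f_size([]) for _ in range(n)]  # cache size(f(Si)) per subset
    for x in S:
        # Only evaluate the expensive f on subsets passing the cheap filter;
        # fall back to all subsets if none pass.
        candidates = [i for i in range(n) if similar_enough(x, subsets[i])]
        if not candidates:
            candidates = range(n)
        best_i, best_new = None, None
        for i in candidates:
            new_size = f_size(subsets[i] + [x])
            if best_new is None or new_size - sizes[i] < best_new - sizes[best_i]:
                best_i, best_new = i, new_size
        subsets[best_i].append(x)
        sizes[best_i] = best_new
    return subsets
```

For example, with a toy cost where duplicates are free (`f_size = lambda s: len(set(s))`) and a filter that only admits objects already present in a subset, the loop ends up grouping equal objects together, as one would hope.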
Is there any standard approach to such kinds of problems?
I know of the branch and bound family of algorithms, but it cannot be applied here because it would be prohibitively slow. My guess is that it is simply not possible to compute the optimal distribution of S into the Si in reasonable time. But is there some common iteratively improving algorithm?
EDIT:
As the comments noted, I never defined "similarity". In fact, all I want is to split S into subsets Si such that the combined size of Di = f(Si) is minimal, or at least small enough. "Similarity" is defined only in these terms and unfortunately simply cannot be computed easily. I do have a simple approximation, but it is only that: an approximation.
So, what I need is a (likely heuristic) algorithm that minimizes the combined size of the f(Si), given that there is no simple way to compute that size; I only have approximations, which I use to throw away cases that are very unlikely to give good results.