I'm now going to recommend against this; see the discussion below.
Original thoughts
I would consider an iterative push-pull method.
1. Guess where to put the center (the simplest choice is the mean position of all the centers).
2. Compute the vector to the farthest point on each circle. Each such vector points in the direction of that circle's center and has length
   distance_to_center_of_circle[i] + radius_of_circle[i].
   Form the vector sum as you go. Also note that the radius needed at the current location is the maximum of these lengths.
3. Propose a step of (say) 1/5 or 1/10 of the vector sum from step 2, and redo the computations of step 2 for the new point.
4. If the new point needs a smaller enclosing circle than the old one, make it the current point; otherwise, reduce the size of the proposed step (say, halve it).
5. Go to step 3.
You're done when it stops[+] improving.
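The loop above can be sketched in code (a rough illustration with names of my own; I assume plain Euclidean 2-D circles given as `((x, y), radius)` pairs):

```python
import math

def enclosing_radius(P, circles):
    """Radius needed to enclose every circle when centered at P (step 2)."""
    return max(math.dist(P, c) + r for c, r in circles)

def vector_sum(P, circles):
    """Sum of vectors from P to the farthest point of each circle (step 2)."""
    sx = sy = 0.0
    for (cx, cy), r in circles:
        d = math.dist(P, (cx, cy))
        if d == 0.0:
            continue  # P sits on this center; the direction is undefined, skip it
        # Unit vector from P toward the circle's center, scaled by d + r.
        sx += (cx - P[0]) / d * (d + r)
        sy += (cy - P[1]) / d * (d + r)
    return sx, sy

def push_pull(circles, step_frac=0.2, tol=1e-9, max_iter=10_000):
    # Step 1: start at the mean of the centers.
    n = len(circles)
    P = (sum(c[0][0] for c in circles) / n, sum(c[0][1] for c in circles) / n)
    R = enclosing_radius(P, circles)
    for _ in range(max_iter):
        sx, sy = vector_sum(P, circles)
        # Step 3: propose a fractional step along the vector sum.
        Q = (P[0] + step_frac * sx, P[1] + step_frac * sy)
        R_new = enclosing_radius(Q, circles)
        if R_new < R:       # Step 4: accept the improvement...
            P, R = Q, R_new
        else:               # ...otherwise halve the proposed step.
            step_frac *= 0.5
        if step_frac * math.hypot(sx, sy) < tol:  # step has shrunk to nothing
            break
    return P, R
```

For instance, two unit circles at (-3, 0) and (3, 0) settle at the midpoint with radius 4.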
Nikie poked at it until...
As requested, clarifying step 2. Call the position to be tested \vec{P}
(a vector quantity).[++] Call the center of each circle \vec{p}_i
(also a vector quantity) and the radius of each circle r_i
. Form the sum \sum_{i=1}^{n} \hat{\vec{p}_i - \vec{P}} \left( \left| \vec{p}_i - \vec{P} \right| + r_i \right)
.[+++] Each element of the sum points in the direction from the current evaluation point towards the center of the circle in question, but is longer than that distance by r_i
. The sum itself is a vector quantity.
The radius R
needed to enclose all the circles from \vec{P}
is R = \max_i \left( \left| \vec{p}_i - \vec{P} \right| + r_i \right)
.
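As a quick sanity check of these formulas (an example of my own): take a single circle of radius r_1 = 1 centered at \vec{p}_1 = (3, 4), evaluated from \vec{P} = (0, 0). Then \hat{\vec{p}_1 - \vec{P}} = (3/5, 4/5) and \left| \vec{p}_1 - \vec{P} \right| + r_1 = 5 + 1 = 6, so the lone term of the sum is (18/5, 24/5) = (3.6, 4.8), which is exactly the farthest point of that circle from \vec{P}, and R = 6.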
Pathological case
I don't think the particular case nikie's brought up is a problem, but it has put me onto a case where this algorithm fails. The failure is one of failing to improve a solution, rather than one of diverging, but still...
Consider four circles, all of radius 1, positioned at
(-4, 1)
(-5, 0)
(-4, -1)
( 5, 0)
and a starting position of (-1, 0)
. Symmetric by design, so that all the distances lie along the x axis.
The correct solution is (0, 0)
with radius 6, but the vector calculated in step 2 is about ::calculates furiously:: (-.63, 0)
, which points in the wrong direction, so the iteration never finds the improvement towards the origin.
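The sign can be checked numerically. A small sketch (helper name is mine; I take the third circle at (-4, -1) so the set is symmetric about the x axis as described; exact magnitudes depend on how you read step 2, but the sign of the x component is the point):

```python
import math

# Four unit circles, symmetric about the x axis by construction.
circles = [((-4.0, 1.0), 1.0), ((-5.0, 0.0), 1.0),
           ((-4.0, -1.0), 1.0), ((5.0, 0.0), 1.0)]

def step2_sum(P, circles):
    """Vector sum of unit(p_i - P) * (|p_i - P| + r_i)."""
    sx = sy = 0.0
    for (cx, cy), r in circles:
        d = math.hypot(cx - P[0], cy - P[1])
        sx += (cx - P[0]) / d * (d + r)
        sy += (cy - P[1]) / d * (d + r)
    return sx, sy

sx, sy = step2_sum((-1.0, 0.0), circles)
# The optimum (0, 0) lies in the +x direction from (-1, 0), yet the
# sum's x component comes out negative: the step points the wrong way.
print(sx, sy)
```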
Now, the algorithm above would actually pick (-2, 0)
for the starting point, which gives an initial vector sum of ::calculates furiously:: about +1.1. So a bad choice of step size in (3) could still result in a less-than-optimal solution. ::sigh::
Possible solution:
- In (3), use a random fraction between (say) +1/5 and -1/5, possibly weighted towards the positive side.
- In (4), if the step is rejected, simply return to step 3 without altering the step-size limits.
However, at this point it is not much better than a pure random walk, and you don't have an easy condition for knowing when it has converged. Meh.
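For what it's worth, that randomized variant might look something like this (a sketch only; the code, the symmetric bounds `lo`/`hi` on the random fraction, and the iteration budget are my own choices):

```python
import math
import random

def randomized_push_pull(circles, n_iter=5000, lo=-0.2, hi=0.2, seed=0):
    """Push-pull where step 3 uses a random fraction of the vector sum,
    and a rejected step (step 4) simply retries with a fresh fraction."""
    rng = random.Random(seed)
    n = len(circles)
    # Step 1: start at the mean of the centers.
    P = (sum(c[0][0] for c in circles) / n, sum(c[0][1] for c in circles) / n)
    best_R = max(math.dist(P, c) + r for c, r in circles)
    for _ in range(n_iter):
        # Step 2: vector sum of (distance + radius) toward each center.
        sx = sy = 0.0
        for (cx, cy), r in circles:
            d = math.dist(P, (cx, cy))
            if d == 0.0:
                continue  # P sits on a center; direction undefined, skip
            sx += (cx - P[0]) / d * (d + r)
            sy += (cy - P[1]) / d * (d + r)
        f = rng.uniform(lo, hi)  # the random fraction from the tweak
        Q = (P[0] + f * sx, P[1] + f * sy)
        R = max(math.dist(Q, c) + r for c, r in circles)
        if R < best_R:           # accept improvements; rejections retry
            P, best_R = Q, R
    return P, best_R
```

Because the fraction can be negative, the walk can drift past the stall point in the four-circle example above, though as said there is no clean convergence test.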
[+] Or slows to your satisfaction, of course.
[++] Using LaTeX notation.
[+++] Here \hat{}
means the normalized vector pointing in the same direction as the argument.