Here's the setup:

I have an algorithm that can succeed or fail. I want it to succeed with the highest probability possible. The probability of success depends on some parameters (and on some external circumstances):

struct Parameters {
  float param1;
  float param2;
  float param3;
  float param4;
  // ...
};

bool RunAlgorithm (const Parameters& parameters) {
  // ...
  // P(return true) is a function of parameters.
}

How can I (automatically) find the best parameters with the smallest number of calls to RunAlgorithm? I would be especially happy with a ready-made library.

If you need more info on my particular case:

  • The probability of success is a smooth function of the parameters and has a single global optimum.
  • There are around 10 parameters; most of them are independently tunable (but some are interdependent).
  • I will run the tuning overnight; I can handle around 1000 calls to RunAlgorithm.

Clarification:

The best parameters have to be found automatically overnight and used during the day. The external circumstances change each day, so computing them once and for all is impossible.

More clarification:

RunAlgorithm is actually a game-playing algorithm. It plays a whole game (Go or chess) against a fixed opponent. I can play 1000 games overnight. Every night the opponent is different.

I want to see whether different opponents need different parameters.

RunAlgorithm is smooth in the sense that changing a parameter a little changes the algorithm's behavior only a little.

The probability of success could be estimated from a large number of samples with the same parameters, but it is too costly to run that many games without changing the parameters.

I could try to optimize each parameter independently (which would leave about 100 runs per parameter), but I guess there are some dependencies.

The whole problem is about using the scarce data wisely.

The games played are highly randomized, so there is no problem with that.

+3  A: 

Maybe you are looking for genetic algorithms.
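
For concreteness, here is a minimal sketch of how a GA-style loop could be applied here, using the empirical win rate over a small batch of games as a (noisy) fitness; as the comments below discuss, the number of games spent per evaluation is the painful part given the ~1000-game budget. The Mutate helper is hypothetical:

#include <algorithm>
#include <random>
#include <utility>
#include <vector>

// Interface from the question.
struct Parameters {
  float param1;
  float param2;
  float param3;
  float param4;
  // ...
};
bool RunAlgorithm(const Parameters& parameters);

// Noisy fitness: fraction of wins over a small batch of games.
float EstimateWinRate(const Parameters& p, int games) {
  int wins = 0;
  for (int g = 0; g < games; ++g) {
    if (RunAlgorithm(p)) ++wins;
  }
  return static_cast<float>(wins) / static_cast<float>(games);
}

// Hypothetical helper: jitter each parameter with Gaussian noise of width sigma.
Parameters Mutate(const Parameters& p, std::mt19937& rng, float sigma);

Parameters EvolveOvernight(const Parameters& seed, int generations,
                           int populationSize, int gamesPerEvaluation) {
  std::mt19937 rng(std::random_device{}());
  std::vector<Parameters> population(populationSize, seed);
  Parameters best = seed;

  for (int gen = 0; gen < generations; ++gen) {
    // Score every individual with the noisy win-rate fitness.
    std::vector<std::pair<float, Parameters>> scored;
    for (const Parameters& p : population) {
      scored.emplace_back(EstimateWinRate(p, gamesPerEvaluation), p);
    }
    std::sort(scored.begin(), scored.end(),
              [](const auto& a, const auto& b) { return a.first > b.first; });

    // Keep the best half and refill with mutated copies of the survivors.
    population.clear();
    for (int i = 0; i < populationSize / 2; ++i) {
      population.push_back(scored[i].second);
      population.push_back(Mutate(scored[i].second, rng, 0.1f));
    }
    best = scored[0].second;
  }
  return best;
}

With a population of 8 and 10 games per evaluation, each generation costs 80 games, so roughly a dozen generations fit into one night.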

Frank Bollack
That is exactly what genetic algorithms are made for.
monksy
It's not really an answer. Genetic algorithms are such a generic tool. You have to say how to apply them.
Łukasz Lew
The problem with GA is that my algorithm returns true or false, not a fitness.
Łukasz Lew
Thanks for the downvote. If you do not provide some more information about your problem (how is success calculated, what does it depend on) you will hardly get a precise answer. Is there a way you can derive a "fitness" from a parameter set?
Frank Bollack
I guess I have the right to downvote if I think the answer is not useful or is misleading. There is no direct fitness, because this is not a GA problem. You can evaluate it by running RunAlgorithm 1000 times and calculating the percentage of successes, but that is way too slow. The whole problem is about how to make the best use of scarce data.
Łukasz Lew
Of course you have the right to downvote, and you should always do so if you feel the answer is wrong or misleading. But you might also reconsider your question if the answers are not what you were expecting.
Frank Bollack
As you can see, I'm doing my best to answer all questions about my question.
Łukasz Lew
A: 

Not sure if I understood correctly...

If you can choose the parameters for your algorithm, does that mean you can choose them once and for all?

Then, you could simply:

  • have the developer run all/many cases only once, find the best case, and set the parameters to the best values
  • at runtime, for your real users, the algorithm is already parameterized with those best values


Or, if the best values change for each run... are you looking for a genetic-algorithm type of approach?

KLE
+1  A: 

Why not let the program fight itself? Take some parameter vector v and let it fight v + (0.1, 0, 0, ..., 0), say 15 times. Then take the winner, modify another parameter, and so on. With enough luck, you'll get a strong player able to defeat most others.
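
A minimal sketch of that loop, assuming a hypothetical PlayGame(a, b) that plays one game between two copies of the engine and returns true when the copy using parameters a wins (this two-sided interface is an assumption, not part of the original RunAlgorithm):

#include <array>
#include <cstddef>

constexpr std::size_t kNumParams = 10;
using ParamVector = std::array<float, kNumParams>;

// Hypothetical: one game between two parameterizations of the engine;
// returns true if the first one wins.
bool PlayGame(const ParamVector& a, const ParamVector& b);

ParamVector TuneBySelfPlay(ParamVector current, float step, int gamesPerDuel) {
  // Cycle through the parameters; each time, duel the current vector against
  // a copy perturbed in one coordinate and keep whichever wins the majority.
  for (std::size_t i = 0; i < kNumParams; ++i) {
    ParamVector challenger = current;
    challenger[i] += step;

    int challengerWins = 0;
    for (int g = 0; g < gamesPerDuel; ++g) {
      if (PlayGame(challenger, current)) ++challengerWins;
    }
    if (2 * challengerWins > gamesPerDuel) {
      current = challenger;  // the perturbed vector won most of its games
    }
  }
  return current;
}

With 15 games per duel and 10 parameters, one full pass costs 150 games, so several passes (perhaps alternating +step and -step perturbations) fit into a 1000-game night.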

Previous answer (much of it is irrelevant after the question was edited):

With these assumptions and that level of generality, you will achieve nothing (except maybe an impossibility result).

Basic question: can you change the algorithm so that it returns a probability of success rather than the result of a single experiment? Then use an appropriate optimization technique (nobody can tell you which one under such general assumptions). In Haskell, you can even change the code so that it computes the probability directly in simple cases (a probability monad instead of a single result). As others mentioned, you can use a genetic algorithm with the probability as the fitness function. If you have a formula, use a computer algebra system to find the maximum.

Probability of success is a smooth function of the parameters and has a single global optimum.

Smooth or continuous? If smooth, you can use differential calculus (Lagrange multipliers?). With small changes to the code (assuming your programming language is general enough), you could even compute derivatives automatically using automatic differentiation.

I will run the tuning overnight; I can handle around 1000 calls to RunAlgorithm.

That complex? This will only allow you to check two possible values of each parameter (2^10 = 1024) out of a whole range of floats. You won't even determine the order of magnitude, or even the order of the order of magnitude.

There are around 10 parameters; most of them are independently tunable (but some are interdependent).

If you know which parameters are independent, fix some of them and tune those that are independent of the rest, divide-and-conquer style. Obviously it's much better to tune two groups of 5 parameters than one group of 10.

I'm downvoting the question unless you give more details. This has too much noise for an academic question and not enough data for a real-world question.

sdcvvc
A: 

The answer to this question depends on:

  1. Parameter range. Can your parameters take values in a small range or a large one?
  2. Game grading. Does the result have to be boolean, or can it be a smooth function?

One approach that seems natural for this problem is hill climbing.

A possible implementation would be to start with several points and calculate their "grade". Then figure out a favorable direction for the next point and try to "ascend".
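
A rough sketch of that, assuming a hypothetical Grade() that estimates the win rate from a batch of games played with the same parameters (estimating that grade is exactly the expensive part discussed next):

#include <array>
#include <cstddef>

constexpr std::size_t kNumParams = 10;
using Point = std::array<float, kNumParams>;

// Hypothetical "grade": empirical win rate over a batch of games
// played with the same parameters.
float Grade(const Point& p, int gamesPerGrade);

Point HillClimb(Point current, float step, int gamesPerGrade, int budget) {
  float bestGrade = Grade(current, gamesPerGrade);
  int gamesUsed = gamesPerGrade;
  const int sweepCost = 2 * static_cast<int>(kNumParams) * gamesPerGrade;

  while (gamesUsed + sweepCost <= budget) {
    bool improved = false;
    // Probe one step up and one step down along every coordinate and
    // move whenever a neighbour grades better than the current point.
    for (std::size_t i = 0; i < kNumParams; ++i) {
      for (float direction : {+step, -step}) {
        Point neighbour = current;
        neighbour[i] += direction;
        const float grade = Grade(neighbour, gamesPerGrade);
        gamesUsed += gamesPerGrade;
        if (grade > bestGrade) {
          bestGrade = grade;
          current = neighbour;
          improved = true;
        }
      }
    }
    if (!improved) step *= 0.5f;  // no neighbour was better: refine the step
  }
  return current;
}

With 10 games per grade, one sweep over all 10 parameters already costs 200 games, so a 1000-game budget buys only a handful of sweeps.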

The main problems I see in this question, as you presented it, are the huge range of parameter values and the fact that the result of a run is boolean (not a numeric grade). This means that many runs are needed to figure out whether a chosen set of parameters is actually good, and on the other hand there is a huge space of parameter values still to check. Just checking all directions will result in a (too?) large number of runs.

Anna
+1  A: 

The main problem you have is that, with ten parameters, 1000 runs is next to nothing, given that, for each run, all you have is a true/false result rather than a P(success) associated with the parameters.

Here's an idea that, on the one hand, may make the best use of your 1000 runs and, on the other, illustrates the intractability of your problem. Let's assume the ten parameters really are independent. Pick two values for each parameter (e.g. a "high" value and a "low" value). There are 1024 unique combinations of those values; run your method on each combination and store the result. When you're done, you'll have 512 runs for each value of each parameter; under the independence assumption, that might give you a decent estimate of the conditional probability of success for each value. An analysis of that data should give you some information about how to set your parameters, and may suggest refinements of your "high" and "low" values for future nights. The back of my mind is dredging up ANOVA as a possibly useful statistical tool here.
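
A sketch of that overnight schedule, assuming (hypothetically) an indexable array-of-floats view of the question's Parameters and pre-chosen low/high levels for each parameter:

#include <array>
#include <cstddef>
#include <cstdio>

constexpr std::size_t kNumParams = 10;

// Indexable stand-in for the question's Parameters struct.
struct Parameters {
  std::array<float, kNumParams> values;
};

// From the question: one run = one game against tonight's opponent.
bool RunAlgorithm(const Parameters& parameters);

void EstimatePerValueSuccess(const std::array<float, kNumParams>& low,
                             const std::array<float, kNumParams>& high) {
  // wins[i][level]: games won while parameter i was at that level (0 = low, 1 = high).
  int wins[kNumParams][2] = {};
  constexpr int kRunsPerLevel = 512;  // each level appears in half of the 1024 combinations

  // Enumerate all 2^10 = 1024 low/high combinations and run each once.
  for (unsigned mask = 0; mask < (1u << kNumParams); ++mask) {
    Parameters p;
    for (std::size_t i = 0; i < kNumParams; ++i) {
      p.values[i] = ((mask >> i) & 1u) ? high[i] : low[i];
    }
    if (RunAlgorithm(p)) {
      for (std::size_t i = 0; i < kNumParams; ++i) {
        ++wins[i][(mask >> i) & 1u];
      }
    }
  }

  // Under the independence assumption these approximate
  // P(success | parameter i = low) and P(success | parameter i = high).
  for (std::size_t i = 0; i < kNumParams; ++i) {
    std::printf("param %zu: P(win | low) ~ %.3f, P(win | high) ~ %.3f\n", i,
                wins[i][0] / static_cast<float>(kRunsPerLevel),
                wins[i][1] / static_cast<float>(kRunsPerLevel));
  }
}

Note that this needs 1024 runs, slightly over the stated 1000-game budget; fixing one parameter for the night (2^9 = 512 combinations) brings it back under.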

Very vague advice... but, as has been noted, it's a rather vague problem.

Richard Dunlap