views:

117

answers:

4

I did a little GP (note: very little) work in college and have been playing around with it recently. My question concerns the initial run settings (population size, number of generations, min/max depth of trees, min/max depth of initial trees, percentages to use for the different reproduction operations, etc.). What is the normal practice for setting these parameters? What papers/sites do people use as a good guide?

A: 

Take a look at Koza's voluminous tomes on these matters.

WhirlWind
I've read Koza's first book on the matter. I figured that since 1992 there has been more research on appropriate starting populations, which reproduction methods perform better, etc.
cmptrer
A: 

When I started looking into Genetic Algorithms I had the same question.

I wanted to collect data by varying parameters on a very simple problem, and to link particular operators and parameter values (mutation rates, etc.) to particular outcomes as a function of population size and so on.

Once I got a bit deeper into GAs, I realized that, given the enormous number of variables, this is a huge task, and generalization is extremely difficult.

Speaking from my (limited) experience: even if you simplify the problem, fix the crossover and selection implementations, and only play with population size and mutation rate, you'll soon realize that too many variables are still in play when you try to derive general results. At the end of the day, the number of generations after which you statistically get a decent result (however you define "decent") still depends primarily on the problem you're solving, and consequently on the genome size; representing the same problem in different ways will obviously change how given GA parameters behave.

It is certainly possible to draft a set of guidelines, as the (rare but good) literature proves, but you will be able to generalize the results effectively in statistical terms only when the problem at hand is encoded in the exact same way and the fitness is evaluated in a roughly equivalent way (which more often than not means you are dealing with a very similar problem).
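The kind of experiment described above can be sketched on OneMax (maximise the number of 1-bits in a bitstring), keeping selection and crossover fixed and sweeping only population size and mutation rate. Every constant here (genome length, selection scheme, generation budget) is an arbitrary illustration, not a recommendation:

```python
import random

def one_max_ga(pop_size, mutation_rate, genome_len=32, generations=200, seed=0):
    """Minimal GA on OneMax: maximise the number of 1-bits in a bitstring.
    Returns the number of generations needed to reach the optimum (or the
    full budget if it never converges)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    fitness = sum  # OneMax fitness is just the count of 1-bits
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == genome_len:
            return gen
        parents = pop[: pop_size // 2]  # truncation selection (fixed choice)
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]                # per-bit mutation
            children.append(child)
        pop = children
    return generations

# Sweep a few parameter combinations on the same problem.
for pop_size in (20, 100):
    for mutation_rate in (0.01, 0.05):
        gens = one_max_ga(pop_size, mutation_rate)
        print(f"pop={pop_size:3d} mut={mutation_rate:.2f} -> {gens} generations")
```

Even on a toy like this, changing the genome length or the selection scheme shifts which (population size, mutation rate) pairs win, which is exactly the generalization problem described above.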

JohnIdol
+1  A: 

You'll find that this depends very much on your problem domain - in particular the nature of the fitness function, your implementation DSL etc.

Some personal experience:

  • Large population sizes seem to work better when you have a noisy fitness function; I think this is because the growth of sub-groups in the population over successive generations gives more sampling of the fitness function. I typically use 100 for less noisy/deterministic functions and 1000+ for noisy ones.
  • For the number of generations it is best to measure improvement in the fitness function and stop when it meets your target criteria. I normally run a few hundred generations and see what kind of answers come out; if there is no improvement then you probably have an issue elsewhere.
  • Tree depth requirements really depend on your DSL. I sometimes try an implementation without explicit limits but penalise or eliminate programs that run too long (which is probably what you really care about...). I've also found total node counts of ~1000 to be quite useful hard limits.
  • Percentages for the different mutation / recombination operators don't seem to matter all that much. As long as you have a comprehensive set of mutations, any reasonably balanced distribution will usually work. I think the reason is that you are basically searching for favourable improvements, so the main objective is just to make sure the trial improvements are reasonably well distributed across all the possibilities.
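The stopping rule and operator mix from the bullets above can be sketched as a generic run loop, here on a toy problem of minimising |x| over floats. Every constant (population size, stall limit, operator weights) is an assumed placeholder, not a prescription:

```python
import random

# Illustrative settings echoing the points above; all values are assumptions.
POP_SIZE = 100       # per the first bullet, try 1000+ for noisy fitness functions
STALL_LIMIT = 50     # generations without improvement before giving up
TARGET = 1e-6        # stop as soon as the fitness target is met
OPS = {"crossover": 0.6, "mutation": 0.3, "copy": 0.1}  # a balanced operator mix

def evolve(rng=random.Random(0), max_generations=5000):
    """Toy search minimising |x|; stops on target fitness or stagnation."""
    pop = [rng.uniform(-10, 10) for _ in range(POP_SIZE)]
    fit = abs
    best, stall = min(pop, key=fit), 0
    for gen in range(max_generations):
        nxt = []
        for ind in pop:
            op = rng.choices(list(OPS), weights=OPS.values())[0]
            if op == "crossover":
                nxt.append((ind + rng.choice(pop)) / 2)   # blend with a mate
            elif op == "mutation":
                nxt.append(ind + rng.gauss(0, 0.5))       # small perturbation
            else:
                nxt.append(best)                          # elitist copy
        pop = nxt
        champ = min(pop, key=fit)
        if fit(champ) < fit(best):
            best, stall = champ, 0          # improvement: reset the stall count
        else:
            stall += 1
        if fit(best) <= TARGET or stall >= STALL_LIMIT:
            break                           # met the target, or stagnated
    return best, gen + 1

best, generations = evolve()
print(f"best |x| = {abs(best):.6f} after {generations} generations")
```

In a real GP system the individuals would be program trees and the size cap (e.g. ~1000 nodes) would be enforced when operators produce children, but the stop-on-stagnation structure is the same.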
mikera
A: 

Why don't you try using a genetic algorithm to optimise these parameters for you? :)

Any problem in computer science can be solved with another layer of indirection (except for too many layers of indirection.)

-David J. Wheeler
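Tongue-in-cheek as it is, this is a real technique (sometimes called meta-optimization or a meta-GA). A toy sketch, with an inner GA on OneMax and every parameter range chosen arbitrarily for illustration:

```python
import random

rng = random.Random(1)

def inner_ga(pop_size, mutation_rate, genome_len=16, budget=120):
    """Inner GA on OneMax; the 'fitness' of a parameter setting is the
    number of generations it needs (lower is better)."""
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for gen in range(budget):
        pop.sort(key=sum, reverse=True)
        if sum(pop[0]) == genome_len:
            return gen                          # solved
        parents = pop[: max(2, pop_size // 2)]  # truncation selection
        pop = []
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)          # one-point crossover
            pop.append([bit ^ (rng.random() < mutation_rate)
                        for bit in a[:cut] + b[cut:]])  # per-bit mutation
    return budget                               # never solved within the budget

# Meta-level: evolve (pop_size, mutation_rate) pairs against the inner GA.
meta = [(rng.randint(10, 100), rng.uniform(0.001, 0.2)) for _ in range(6)]
for _ in range(4):
    survivors = sorted(meta, key=lambda p: inner_ga(*p))[:3]
    meta = survivors + [(max(10, ps + rng.randint(-15, 15)),    # mutate the
                         min(0.5, mr * rng.uniform(0.5, 2.0)))  # parameters
                        for ps, mr in (rng.choice(survivors) for _ in range(3))]
pop_size, mutation_rate = min(meta, key=lambda p: inner_ga(*p))
print(f"tuned: pop_size={pop_size}, mutation_rate={mutation_rate:.3f}")
```

Of course, the meta-GA has parameters of its own, which is exactly the extra layer of indirection the quote warns about.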

Drew Noakes
No upvote as it's not a proper answer, but it did make me laugh. :-)
Donal Fellows