views:

256

answers:

3

I just watched a Google tech talk video covering "Polyworld" (found here) and they talk about breeding two neural networks together to form offspring. My question is, how would one go about combining two neural networks? They seem so different that any attempt to combine them would simply form a third, totally unrelated network. Perhaps I'm missing something, but I don't see a good way to take the positive aspects of two separate neural networks and combine them into a single one. If anyone could elaborate on this process, I'd appreciate it.

+3  A: 

The neural networks in this case probably aren't arbitrary trees. They likely share a fixed structure, i.e. the same nodes and connections, so 'breeding' them could amount to 'averaging' the weights. You could average the weights for each pair of corresponding nodes in the two nets to produce the 'offspring' net, or you could use a more complicated function that depends on ever-larger sets of neighboring nodes – the possibilities are vast. My answer is incomplete if the assumption of a fixed structure is false or unwarranted.
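A minimal sketch of the averaging idea, assuming both parents share an identical fixed topology and can be flattened into lists of connection weights (a simplification; a real network would be a nested per-layer structure):

```python
def breed_average(parent_a, parent_b):
    """Produce an 'offspring' by averaging corresponding weights.

    Assumes both parents have the same fixed structure, so their
    weights line up position by position.
    """
    assert len(parent_a) == len(parent_b), "structures must match"
    return [(wa + wb) / 2.0 for wa, wb in zip(parent_a, parent_b)]

offspring = breed_average([0.25, -0.5, 1.0], [0.75, 0.5, 0.0])
# offspring is [0.5, 0.0, 0.5]
```

The same per-position pairing generalizes to any crossover scheme (e.g. picking each weight from one parent at random) as long as the fixed-structure assumption holds.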

Kenny Evitt
I appreciate your feedback. The idea of averaging is interesting if you can count on a fixed structure. From the video, it appeared that the structure was not fixed, because nodes can be added/removed as mutations. However, this is a good starting place.
Jake
Ahh – from a description of Polyworld [http://www.beanblossom.in.us/larryy/polyworld.html], it seems that the neural networks are produced dynamically by another 'genetic' code possessed by each organism in the Polyworld system. It is these genetic codes that are combined during 'breeding', and it is the output of that combination that determines the neural network of the offspring.
Kenny Evitt
I will check out the link, thanks!
Jake
@MusiGenesis – I can't upvote you (yet), but your answer is correct and matches my own (amended) answer [see above comment].
Kenny Evitt
@Kenny: interesting link, but your link is broken (I had to remove a "]" from the URL).
MusiGenesis
@Kenny: thanks. Normally downvotes don't bother me, but the bizarrely inexplicable ones do. It's not like this question ("how would one go about combining two neural networks?") has an easy, simple answer anyway.
MusiGenesis
The answer posted by @mjv provides even more details about Polyworld and other AL/ALife (Artificial Life) software – but the answer to the original (and current) question is still that the neural networks are produced by other code and it is chunks of that other code that are combined to produce offspring.
Kenny Evitt
Just in case this question hadn't already been answered to death: a concrete answer will depend on (a) the nature (e.g. data structures, etc.) of the genetic code that is responsible for variation in the neural networks; and (b) the algorithm by which the genetic codes of two organisms are combined. Based on the answer from @mjv, the genetic code itself apparently has a regular (fixed) structure, i.e. specific 'genes' (code chunks) encode specific attributes of the neural network.
Kenny Evitt
+4  A: 

They wouldn't really be breeding two neural networks together. Presumably they have some variety of genetic algorithm that produces a particular neural network structure given a particular sequence of "genes". They would start with a population of gene sequences, produce the corresponding neural networks, and then expose each of these networks to the same training regimen. Some of these networks would respond to the training better than others (i.e. they would be more easily "trainable" to achieve the desired behavior). They would then take the genetic sequences that produced the best "trainees", cross-breed them with each other, and produce a new set of networks to expose to the same training regimen. Some of the networks in this second generation would presumably be even more trainable than those from the first; these would become the parents of the third generation, and so on.
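The generational loop described above can be sketched as follows. Everything here is a hypothetical stand-in: `train_and_score` is a placeholder fitness function, and the flat list-of-numbers genome is an illustrative encoding, not Polyworld's actual one.

```python
import random

GENE_LENGTH = 8
POPULATION = 20

def random_genome():
    return [random.random() for _ in range(GENE_LENGTH)]

def crossover(mom, dad):
    cut = random.randrange(1, GENE_LENGTH)   # one-point crossover
    return mom[:cut] + dad[cut:]

def mutate(genome, rate=0.05):
    # occasionally replace a gene with a fresh random value
    return [random.random() if random.random() < rate else g for g in genome]

def train_and_score(genome):
    # Placeholder fitness: a real system would build the network this
    # genome encodes, run the training regimen, and measure how
    # "trainable" the result turned out to be.
    return sum(genome)

population = [random_genome() for _ in range(POPULATION)]
for generation in range(10):
    scored = sorted(population, key=train_and_score, reverse=True)
    parents = scored[:POPULATION // 2]        # keep the best "trainees"
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(POPULATION)]
```

The GA never touches the networks' weights directly; it only evolves the gene sequences from which networks are built, which is the point of the answer above.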

MusiGenesis
Who the #$%#$ downvoted this? This makes me pretty mad, given that I've actually performed this process myself numerous times. Please explain yourself.
MusiGenesis
Thanks for the feedback, but I don't think this is how the Polyworld application works. It is more like a game where each "character" is controlled by a neural network. They move, eat, mate and die. Mating produces a new character that is a combination of the previous two.
Jake
The description of the relationship between "genes" and "neural nets" in Polyworld is remarkably similar to what I describe in my answer here.
MusiGenesis
Sorry, from what Kenny says above it sounds like this is the right idea. Thanks!
Jake
In fewer words: we use a GA to learn the structure of the neural network, while the network itself is still trained as usual with backpropagation to update the weights.
Amro
+6  A: 

Neither response so far is true to the nature of Polyworld!...

They both describe a typical Genetic Algorithm (GA) application. While GA incorporates some of the elements found in Polyworld (breeding, selection), GA also implies some form of "objective" criteria aimed at guiding evolution towards [relatively] specific goals.

Polyworld, on the other hand, is a framework for Artificial Life (ALife). With ALife, the survival of individual creatures and their ability to pass their genes on to later generations is directed not so much by their ability to satisfy a particular "fitness function" as by broader, non-goal-oriented criteria: the ability of an individual to feed itself in ways commensurate with its size and metabolism, its ability to avoid predators, its ability to find mating partners, and also various doses of luck and randomness.

Polyworld's model of the creatures and their world is relatively fixed: for example, they all have access to (though may elect not to use) various basic sensors (for color, for shape...) and various actuators ("devices" to eat, to mate, to turn, to move...), and these basic sensory and motor functions do not evolve (as they might in nature, where creatures can become sensitive to heat or sound, or find ways of moving that differ from the original motion primitives).

On the other hand, the brain of a creature has structure and connections that are the product both of the creature's genetic make-up ("stuff" from its ancestors) and of its own experience. For example, the main algorithm used to determine the strength of connections between neurons applies Hebbian logic (i.e. "fire together, wire together") during the lifetime of the creature (mostly early on, I'm guessing, as the algorithm often has a "cooling" factor that reduces its ability to make big changes as time goes by). It is unclear whether the model includes some form of Lamarckian evolution, whereby some high-level behaviors are [directly] passed on through the genes, rather than being [possibly] relearnt by each generation (on the indirect basis of some genetically passed structure).
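The Hebbian rule with a "cooling" factor can be sketched like this. The signature and the decay scheme are illustrative assumptions, not Polyworld's actual update rule:

```python
def hebbian_update(weight, pre, post, rate, decay=0.99):
    """One Hebbian step: neurons that fire together wire together.

    `pre` and `post` are the activations of the pre- and post-synaptic
    neurons. Multiplying `rate` by `decay` each step plays the role of
    the "cooling" factor: the network changes less as the creature ages.
    """
    new_weight = weight + rate * pre * post
    return new_weight, rate * decay

w, r = 0.1, 0.5
for _ in range(3):                  # pre and post both firing strongly
    w, r = hebbian_update(w, pre=1.0, post=1.0, rate=r)
# the connection strengthens while the learning rate cools
```

With both neurons firing, the weight grows on every step, but each step moves it a little less than the last.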

The salient difference between ALife and GA (and there are others!) is that with ALife the focus is on observing and fostering, in non-directed ways, emergent behaviors, whatever they may be: for example, some creatures may evolve a makeup that prompts them to wait near piles of green food for dark green creatures and kill them, or creatures may start collaborating with one another, say by seeking each other's presence for purposes other than mating. With GA, the focus is on a particular behavior of the program being evolved. For example, the goal may be to have the program recognize edges in a video image, and evolution is therefore favored in that specific direction: individual programs which perform the task better (as measured by some "fitness function") are favored in reproduction.

Another less obvious but important difference concerns the way creatures (or programs, in the case of GA) reproduce. With ALife, individual creatures find their own mating partners, at random at first, although after some time they may learn to reproduce only with creatures exhibiting a particular attribute or behavior. With GA, on the other hand, "sex" is left to the GA framework itself, which chooses, for example, to preferentially cross-breed individuals (and clones thereof) that score well on the fitness function, always leaving room for some randomness lest the search get stuck at a local maximum; but the point is that the GA framework mostly decides who has sex with whom...

Having clarified this, we can return to the OP's original question...
... how would one go about combining two neural networks? They seem so different that any attempt to combine them would simply form a third, totally unrelated network. ...I don't see a good way to take the positive aspects of two separate neural networks and combine them into a single one...
The "genetic makeup" of a particular creature affects parameters such as the creature's size, its color and such. It also includes parameters associated with the brain, in particular its structure: the number of neurons, the existence of connections from various sensors (e.g. does the creature see the color blue very well?), and the existence of connections towards various actuators (e.g. does the creature use its light?). The specific connections between neurons and their relative strengths may also be passed on in the genes, if only to serve as initial values, to be quickly changed during the brain's learning phase.
By taking two creatures, we [nature!] can select, in a more or less random fashion, which parameters come from the first creature and which come from the second (along with a few novel "mutations" that come from neither parent). For example, if the "father" had many connections to the red color sensor but the mother didn't, the offspring may look like the father in this area, while also getting its mother's 4-layer neuron structure rather than its father's 6-layer structure.
The interest of doing so is to discover new capabilities in the offspring; in the example above, the creature may now better detect red-colored predators, and also process information more quickly in its slightly simpler brain (compared with the father's). Not all offspring are better equipped than their parents; such weaker individuals may disappear in short order (or possibly, luckily, survive long enough to pass on, say, their fancy way of moving and evading predators, even though their parents made them blind or too big or whatever). The key thing, again, is not to worry about the immediate usefulness of a particular trait, but to let it play out over the long term.
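The per-parameter selection described above (each trait drawn from one parent or the other, with rare mutations) can be sketched as follows. The gene names and values are made up for illustration, not Polyworld's actual gene layout:

```python
import random

# Hypothetical genomes: named parameters of the kind described above.
father = {"red_sensor_links": 12, "neuron_layers": 6, "size": 1.4}
mother = {"red_sensor_links": 0,  "neuron_layers": 4, "size": 1.1}

def breed(a, b, mutation_rate=0.1):
    """Build an offspring genome trait by trait."""
    child = {}
    for gene in a:
        child[gene] = random.choice((a[gene], b[gene]))  # one parent's value
        if random.random() < mutation_rate:              # rare novel mutation
            child[gene] = child[gene] * random.uniform(0.5, 1.5)
    return child

offspring = breed(father, mother)
# e.g. the father's many red-sensor connections combined with the
# mother's 4-layer brain -- or any other mix, chosen at random
```

Whether a given mix turns out to be advantageous is then decided not by any fitness function, but by how the offspring fares in the world.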

mjv
+1 Good answer. And well spoken.
Robert Massaioli