I am trying to solve a problem using genetic algorithms.

The problem is to find the set of integral and real values that optimizes a function.

I need to represent the problem using a binary string (simply because I understand the concept of crossover/mutation etc much better when applied to binary string chromosomes).

A candidate solution S would be the set {I1, I2, ..., IN, R1, R2, ..., RM}

Where the I variables are integers and the R variables are floating point numbers.

I want to be able to transform the candidate solution S into a binary string, but I don't know how to encode the floating point numbers.

Any ideas on how to encode the set S into a chromosome?

Although the solution is supposed to be language-agnostic, my preferred choice of language (in decreasing order of preference for this particular task) is:

Python, C++, C

BTW, I am coding the problem using Pyevolve

A: 

You can pack binary data into buffers with the facilities provided in the struct module. See here: http://docs.python.org/library/struct.html
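
For instance, a rough sketch of that round trip (the 3-int/2-float layout and the values here are just placeholders, not your actual chromosome):

    import struct

    # Hypothetical chromosome: three integers and two reals.
    ints = [4, 17, 255]
    reals = [3.14, -0.5]

    # '=' = standard sizes, no padding; 'i' = 4-byte int, 'd' = 8-byte IEEE 754 double.
    fmt = "=3i2d"
    packed = struct.pack(fmt, *ints, *reals)

    # Flatten the bytes into a list of bits (LSB first) for crossover/mutation.
    bits = [(byte >> k) & 1 for byte in packed for k in range(8)]

    # ... mutate / cross over 'bits' here ...

    # Rebuild the bytes and unpack to recover the (possibly mutated) values.
    rebuilt = bytes(
        sum(bit << k for k, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )
    print(struct.unpack(fmt, rebuilt))   # (4, 17, 255, 3.14, -0.5) if untouched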

That said, I personally love Python, BUT: if you want to repeatedly and efficiently take a set of integers and floating point numbers, treat it as one big bit string, and apply bitwise mutation to it, I don't think Python is the best choice of language. This is much more straightforward (and fast) in lower-level languages -- I'd go for C.

Good luck with the algorithm!

Santiago Lezica
+1  A: 

No, I think that a binary representation is wrong for your problem. Your underlying data is not binary, so why use binary? Do mutation and crossover on the real and integer numbers themselves, not on their binary representation.

Simplest crossover: first parent ABCDE, where A, B, ... are floating point numbers; second parent MNOPQ. Choose a cut point at random, say after D; first offspring: ABCDQ, second: MNOPE.
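
In Python that might look something like this (a toy sketch; the Gaussian / +-1 mutation is just one common choice, not the only one):

    import random

    # Chromosomes are plain lists of numbers (integers first, floats after).
    def one_point_crossover(parent_a, parent_b):
        cut = random.randrange(1, len(parent_a))      # random cut point
        return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]

    def mutate(chromosome, rate=0.1, sigma=0.5):
        # Perturb floats with Gaussian noise, step integers by +-1.
        out = []
        for gene in chromosome:
            if random.random() < rate:
                if isinstance(gene, int):
                    gene += random.choice([-1, 1])
                else:
                    gene += random.gauss(0, sigma)
            out.append(gene)
        return out

    p1 = [3, 7, 0.25, 1.5]    # a hypothetical {I1, I2, R1, R2}
    p2 = [9, 2, -0.8, 4.0]
    c1, c2 = one_point_crossover(p1, p2)
    c1 = mutate(c1)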

wiso
The problem (as I stated) is that I don't know how to do crossover and mutation on float variables. Could you show, by means of an example (or even a link), how to encode such a chromosome, and how crossover and mutation can be applied to it?
skyeagle
You could define the crossover in any way you see fit. I too would suggest not making a binary string, since that could potentially produce much worse results: a slice would likely occur "inside" an integer or floating point variable.
kigurai
A: 

Use IEEE754 representation for floating-point numbers and two's complement representation for integers. Or, use integers and floating-point numbers, safe in the knowledge that behind the scenes your computer is already using these binary representations.
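
If you want to peek at those bit patterns from Python, the struct module will show them to you (just an illustration, not tied to Pyevolve):

    import struct

    x = -0.15625
    # '>d' = big-endian IEEE 754 double; '>f' would give the 32-bit single.
    raw = struct.pack(">d", x)
    print("".join(f"{byte:08b}" for byte in raw))   # sign, 11-bit exponent, 52-bit mantissa

    n = -42
    # '>i' = 32-bit two's complement; reinterpret as unsigned to print the raw bits.
    print(f"{struct.unpack('>I', struct.pack('>i', n))[0]:032b}")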

High Performance Mark
A: 

If I understand the problem correctly, this is completely language-agnostic. You should be able to represent the floating point numbers in the standardized IEEE representation. Here's a tutorial.

Once you have that representation, you don't care what is what; you just apply whatever crossover you like (single point, double point, whatever) to your bits.
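
For example, crossover on plain bit strings is only a couple of lines (a sketch; the strings below stand in for the concatenated IEEE bits of all your genes):

    import random

    def single_point(bits_a, bits_b):
        cut = random.randrange(1, len(bits_a))
        return bits_a[:cut] + bits_b[cut:], bits_b[:cut] + bits_a[cut:]

    def two_point(bits_a, bits_b):
        i, j = sorted(random.sample(range(1, len(bits_a)), 2))
        return (bits_a[:i] + bits_b[i:j] + bits_a[j:],
                bits_b[:i] + bits_a[i:j] + bits_b[j:])

    a = "1100101011110000"
    b = "0011010100001111"
    print(single_point(a, b))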

JohnIdol
A: 

I would suggest rethinking whether you actually need the bit string representation; the conversion from float to bits and back may be more trouble than it is worth. If you just stay with floating point values, i.e. a single solution candidate is an array of N floats, you can pass that easily to your evaluation function (objective function). Then use crossover methods like simulated binary crossover (SBX, http://www.slideshare.net/paskorn/self-adaptive-simulated-binary-crossover-presentation), which mimics the effect you'd get after converting your floats to a binary representation and performing crossover on that. The results of SBX are pretty good, and the analogy to what would happen if you dealt with bit strings becomes pretty clear after a while. It looks like a lot in that slideshow, but it all comes down to just a few lines to implement the SBX crossover.
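
Here's a sketch of SBX on a single pair of genes, following the usual Deb & Agrawal formulation (eta is the distribution index; this is hand-rolled, not Pyevolve's built-in operator):

    import random

    def sbx_crossover(p1, p2, eta=15.0):
        # Larger eta keeps the children closer to the parents.
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
        c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
        return c1, c2

    # Apply gene by gene to two float chromosomes.
    parent1 = [0.3, 1.7, -2.0]
    parent2 = [0.9, 0.2, 3.5]
    pairs = [sbx_crossover(a, b) for a, b in zip(parent1, parent2)]
    child1 = [c1 for c1, _ in pairs]
    child2 = [c2 for _, c2 in pairs]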

mawirth