I have written quite a few genetic algorithms; they work (they find a reasonable solution quickly). But I have now discovered TDD. Is there a way to write a genetic algorithm (which relies heavily on random numbers) in a TDD way?

To pose the question more generally: how do you test a non-deterministic method/function? Here is what I have thought of:

  1. Use a specific seed. This won't help if I make a mistake in the code in the first place, but it will help find bugs when refactoring.

  2. Use a known list of numbers. Similar to the above, but I could follow the code through by hand (which would be very tedious).

  3. Use a constant number. At least I know what to expect. It would be good to ensure that a die always reads 6 when RandomFloat(0,1) always returns 1 (see the sketch after this list).

  4. Try to move as much of the non-deterministic code out of the GA as possible. Which seems silly, as that is the core of its purpose.
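
A minimal sketch of option 3, assuming an injectable random source (all names here are hypothetical, not from any particular library):

    using System;

    // Hypothetical interface: the die takes an injected random source,
    // so a test can substitute a constant.
    public interface IRandomSource
    {
        float NextFloat(); // returns a value in [0, 1]
    }

    public class Die
    {
        private readonly IRandomSource _random;
        public Die(IRandomSource random) { _random = random; }

        // Maps [0, 1] onto 1..6; clamped so that exactly 1 reads as 6.
        public int Roll() => Math.Min(6, 1 + (int)(_random.NextFloat() * 6));
    }

    public class ConstantSource : IRandomSource
    {
        private readonly float _value;
        public ConstantSource(float value) { _value = value; }
        public float NextFloat() => _value;
    }

    // In a test: new Die(new ConstantSource(1f)).Roll() == 6

The production code injects a real RNG; the tests inject the stub.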

Links to very good books on testing would be appreciated too.

A: 

You could write a redundant neural network to analyze the results from your algorithm and have the output ranked based on expected outcomes. :)

Break your method down as much as you can. Then you can also have a unit test around just the random part to check the range of values. You could even have the test run it a few times to see whether the result changes.
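
A sketch of such a range check (the 1..6 bounds are just illustrative, matching the die example above):

    using System;

    public static class RangeTest
    {
        // Roll many times and assert every value stays within bounds.
        // This catches gross range errors, not distribution bias.
        public static void AllRollsInRange(Func<int> roll, int trials = 10_000)
        {
            for (int i = 0; i < trials; i++)
            {
                int value = roll();
                if (value < 1 || value > 6)
                    throw new Exception($"Out-of-range roll: {value}");
            }
        }
    }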

Matthew Whited
+7  A: 

It seems to me that the only way to test its consistent logic is to apply consistent input... or to treat each iteration as a single automaton whose state is tested before and after that iteration, turning the overall nondeterministic system into testable components based on deterministic iteration values.

For variations/breeding/attribute inheritance in iterations, test those values on the boundaries of each iteration and test the global output of all iterations based on known input/output from successful iteration-subtests ...

Because the algorithm is iterative, you can use induction in your testing: show that it works for 1 iteration, then that if it works for n iterations it also works for n+1, to prove it will produce correct results (regardless of data determinism) for a given input range/domain and the constraints on possible values in the input.
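
A minimal sketch of testing one iteration as a deterministic component, with the random values supplied as plain inputs (all names here are hypothetical):

    using System.Linq;

    public static class GaStep
    {
        // One iteration, deterministic for fixed inputs: each gene is
        // mutated by a supplied offset instead of an internally drawn one.
        public static double[] Iterate(double[] population, double[] mutations) =>
            population.Select((gene, i) => gene + mutations[i]).ToArray();
    }

    // A test fixes both inputs and asserts the exact output:
    //   var next = GaStep.Iterate(new[] { 1.0, 2.0 }, new[] { 0.5, -0.5 });
    //   // next is { 1.5, 1.5 }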

Edit: I found this paper on strategies for testing nondeterministic systems, which might provide some insight. It might be helpful for statistical analysis of live results once the TDD/development process proves the logic is sound.

Aiden Bell
Thanks for the answer. I had hoped for some silver bullet, but I guess this just isn't something easy to test. If I pick the random numbers carefully I can test each execution path. I will also do a test with a known fitness landscape so I can see how well it is performing.
James Brooks
@James, just remember with nondeterministic algorithms that there is a marked difference between 'testing the logic' and testing 'expected results'. Do one, then the other. If the first is broken, the second is meaningless.
Aiden Bell
Current download page of the "Optimal strategies for testing nondeterministic systems" paper here: http://research.microsoft.com/apps/pubs/default.aspx?id=77437
MaD70
@MaD70 - Thanks ;) Link updated
Aiden Bell
A: 

A test that the algorithm gives the same result for the same input could help you, but sometimes you will make changes that legitimately alter which result the algorithm picks.

I would put the most effort into a test that ensures the algorithm gives you a correct result. If the algorithm gives a correct result for a number of static seeds and random values, the algorithm works, or at least has not been broken by the changes you made.

Another opportunity TDD offers is the possibility of evaluating the algorithm. If you can automatically check how good a result is, you could add tests showing that a change hasn't lowered the quality of your results or increased the calculation time unreasonably.

If you want to test your algorithm with many base seeds, you may want two test suites: one that runs a quick test after every save, to ensure that you haven't broken anything, and one that runs for a longer time, for later evaluation.
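
For instance, a quality-regression check might look like this (a sketch; the baseline value is illustrative):

    using System;

    public static class QualityRegression
    {
        // Fitness recorded from a known-good run on a fixed landscape.
        private const double BaselineFitness = 0.95;

        public static void Check(Func<double> runGaAndMeasureFitness)
        {
            double fitness = runGaAndMeasureFitness();
            if (fitness < BaselineFitness)
                throw new Exception(
                    $"Quality regression: {fitness} < {BaselineFitness}");
        }
    }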

Janusz
+4  A: 

I would test random functions by testing them a number of times and analyzing whether the distribution of return values meets the statistical expectations (this involves some statistical knowledge).
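
For example, a crude sketch for a uniform [0,1) generator (the tolerance is illustrative; a real test would use a proper statistical test such as chi-squared):

    using System;

    public static class DistributionTest
    {
        // Checks the sample mean is near the expected mean (0.5) of a
        // uniform [0,1) distribution; catches gross breakage only.
        public static bool MeanLooksUniform(Func<double> next, int trials = 100_000)
        {
            double sum = 0;
            for (int i = 0; i < trials; i++) sum += next();
            return Math.Abs(sum / trials - 0.5) < 0.01;
        }
    }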

Svante
Wouldn't this only evaluate the distribution of values around the normal, rather than the fitness of the algorithm at distributing around the normal in the correct way? A broken algorithm will still be broken if you run it twice. If it returned search results, this would be like checking that the results contain the keywords as validation of the search order.
Aiden Bell
I didn't say normal distribution; I said that the distribution should meet the statistical expectations, i.e., if you need a random function to return, for example, random values corresponding to a Boltzmann distribution, you should check whether a sufficiently high number of test runs forms such a distribution.
Svante
I see. I think that might be a bit error-prone for TDD, though. Even graph-based statistical analysis, as in the paper I linked to, belongs with the analysis of results on live data rather than being the *first* port of call for unit/functional testing of the logic.
Aiden Bell
+1  A: 

If you're talking TDD, I would say definitely start out by picking a constant number and growing your test suite from there. I've done TDD on a few highly mathematical problems and it helps to have a few constant cases you know and have worked out by hand to run with from the beginning.

W/R/T your 4th point, moving nondeterministic code out of the GA: I think this is probably an approach worth considering. If you can decompose the algorithm and separate the nondeterministic concerns, it should make testing the deterministic parts straightforward. As long as you're careful about how you name things, I don't think you're sacrificing much here. Unless I am misunderstanding you, the GA will still delegate to this code; it just lives somewhere else.

As far as links to very good books on (developer) testing go, my favorites are:

cwash
A: 

All of your functions should be completely deterministic. This means that none of the functions you are testing should generate the random number inside the function itself; you will want to pass that in as a parameter. That way, when your program is making decisions based on your random numbers, you can pass in representative numbers to test the expected output for each number. The only thing that shouldn't be deterministic is your actual random number generator, which you don't really need to worry too much about because you shouldn't be writing it yourself. You should be able to just assume it works, as long as it's an established library.

That's for your unit tests. For your integration tests, if you are doing that, you might look into mocking your random number generation, replacing it with an algorithm that will return known numbers from 0..n for every random number that you need to generate.
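
A sketch of such a scripted source (names hypothetical):

    using System.Collections.Generic;

    public class ScriptedRandom
    {
        private readonly Queue<int> _values;

        public ScriptedRandom(IEnumerable<int> values) =>
            _values = new Queue<int>(values);

        // Replays the pre-scripted sequence instead of real randomness.
        public int Next() => _values.Dequeue();
    }

    // e.g. drive the GA with new ScriptedRandom(new[] { 0, 1, 2, 3 }) and
    // assert the exact populations that sequence must produce.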

Doug R
+1  A: 

I wrote a C# TDD Genetic Algorithm didactic application: http://code.google.com/p/evo-lisa-clone/

Let's take the simplest random result method in the application: PointGenetics.Create, which creates a random point, given the boundaries. For this method I used 5 tests, and none of them relies on a specific seed:

http://code.google.com/p/evo-lisa-clone/source/browse/trunk/EvoLisaClone/EvoLisaCloneTest/PointGeneticsTest.cs

The randomness test is simple: for a large boundary (many possibilities), two consecutive generated points should not be equal. The remaining tests check other constraints.
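
A self-contained sketch of that randomness test (RandomPoint here is a stand-in; the real tests are in the linked repository):

    using System;

    public record RandomPoint(int X, int Y)
    {
        private static readonly Random Rng = new Random();

        public static RandomPoint Create(int maxX, int maxY) =>
            new RandomPoint(Rng.Next(maxX), Rng.Next(maxY));
    }

    public static class RandomnessTest
    {
        public static void ConsecutivePointsDiffer()
        {
            var first = RandomPoint.Create(1_000_000, 1_000_000);
            var second = RandomPoint.Create(1_000_000, 1_000_000);
            // With ~10^12 possibilities, a collision is overwhelmingly unlikely.
            if (first == second)
                throw new Exception("Two consecutive points were equal");
        }
    }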

Jader Dias
Thanks for the response, I'll check out the code later. I've done the testing now and used a vaguely similar approach to yours, I think. I tested a variety of things I knew should happen when I gave specific values for my 'random' numbers. I then checked that the distribution was approximately what I expected over 10,000 trials. Not perfect, but it will do.
James Brooks
A: 

I would highly suggest looking into using mock objects for your unit test cases (http://en.wikipedia.org/wiki/Mock_object). You can use them to mock out the objects that make random guesses, so that you get expected results instead.
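
For example, with a mocking framework such as Moq (a sketch; IRandomSource is the hypothetical interface from the first example near the top, and GeneticAlgorithm is a placeholder for your system under test):

    using Moq;

    public class GaTests
    {
        public void MutationBranchIsTakenWhenRollIsHigh()
        {
            var random = new Mock<IRandomSource>();
            random.Setup(r => r.NextFloat()).Returns(0.9f); // force the branch

            // The system under test now behaves deterministically:
            // var ga = new GeneticAlgorithm(random.Object);
            // ... assert that the mutation branch was taken ...
        }
    }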

IDreamOf362