The idea is to develop my own bot to test the game's behavioral rules; I have settled on using Unreal Tournament (the 1999 or 2004 version) as a proof of concept. Initially I would like to test the CTF game mode. What I aim to achieve is a bot, preferably in Java, that is fully controllable: there is no AI, but a mechanism for injecting a set of commands (jump, run to this point on the map, etc.). I want to use an evolutionary algorithm that works in real time and evolves a sequence of movements that takes the bot from its initial position at the base to the opponent's flag and back home. For simplicity I don't want it to perform complex actions like shooting; rather, it should avoid conflicts altogether. The main actions would be jump, run, rotate, and move to. I am currently connected to the game through the GameBots API and am looking for tutorials/guidance on actually writing the bot and hooking it up to a genetic algorithm framework such as the Watchmaker Framework. I have a few conditions in mind for the fitness function, and I have read extensively around the subject, running into topics such as bot pathing. I hope I am being clear about what I want to achieve. Do you know of any tutorials or readings that may help?
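To make the command-sequence idea concrete, here is the kind of genome I have in mind; a rough, self-contained sketch where the `Action` names and waypoint coordinates are my own placeholders, not actual GameBots messages. With Watchmaker, `randomGenome` would sit inside a `CandidateFactory` and `mutate` inside an `EvolutionaryOperator`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of a genome: a fixed-length sequence of movement commands.
// Action names and coordinates are placeholders, not GameBots messages.
public class BotGenome {
    enum Action { JUMP, RUN, ROTATE, MOVE_TO }

    // One gene: an action plus parameters (here, a 2D map waypoint).
    record Command(Action action, double x, double y) { }

    // Random candidate; with Watchmaker this would be the body of a
    // CandidateFactory<List<Command>>.generateRandomCandidate(rng).
    static List<Command> randomGenome(int length, Random rng) {
        List<Command> genome = new ArrayList<>();
        Action[] actions = Action.values();
        for (int i = 0; i < length; i++) {
            Action a = actions[rng.nextInt(actions.length)];
            // Assumed 1000x1000 playable map area, purely for illustration.
            genome.add(new Command(a, rng.nextDouble() * 1000, rng.nextDouble() * 1000));
        }
        return genome;
    }

    // Point mutation: replace one command with a fresh random one; with
    // Watchmaker this would be an EvolutionaryOperator<List<Command>>.
    static List<Command> mutate(List<Command> genome, Random rng) {
        List<Command> copy = new ArrayList<>(genome);
        int i = rng.nextInt(copy.size());
        Action a = Action.values()[rng.nextInt(Action.values().length)];
        copy.set(i, new Command(a, rng.nextDouble() * 1000, rng.nextDouble() * 1000));
        return copy;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        List<Command> parent = randomGenome(20, rng);
        List<Command> child = mutate(parent, rng);
        System.out.println(parent.size() + " -> " + child.size());
    }
}
```

The fitness function would then replay such a sequence in the game and score it on things like distance to the flag and time taken.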
Just a hint: look around at aigamedev.com. I'm not sure whether anyone there still uses the UT 1999 engine, though.
If you don't mind switching the actual game engine, you'll surely find tutorials related to the open-sourced id engines, like Quake 3's.
Keep in mind that for real-time genetic algorithms to be applicable, you are generally better off starting by gathering a consistent training set of adversarial strategies (ideally from different people) and using that set to evaluate fitness in the background, simulating games in fast-forward. That way you can obtain, in reasonable time, a number of decent strategies to start with; it's unthinkable to have humans evaluate the fitness of random strategies, as that would take far too long. A classic example implemented this way is the game of checkers, but more complex games can leverage the same approach (there is a famous example of a futuristic naval battle game where a GA-aided strategy defeated human opponents; I can't remember the details, but I'll look it up and edit). Once you have evolved a set of decent strategies, you can fire off the real-time GA so that it keeps learning from humans.
Also keep in mind that this could be an extremely slow process, and there might be no value in using real-time genetic algorithms; you may be better off collecting strategies from human opponents and running evolution in the background, so that the next time the same opponent plays you may have evolved a strategy capable of defeating them. Unless you've got loads of people playing, in which case real time might make sense; but if the target is to challenge people with increasingly good bots, offline evolution is what I would do. Having people evaluate the fitness of tentative strategies would be 1) slow and 2) boring for them.
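As a sketch of what "running evolution in the background" might look like: score each candidate route in a cheap fast-forward simulation (here just distance-to-flag-and-back on a toy 2D map, no game engine involved) and keep whatever improves. Everything below is illustrative; the coordinates, route length, and mutation scale are made-up assumptions, and with Watchmaker the `fitness` method would live in a `FitnessEvaluator`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Toy fast-forward evaluation: score a route (a sequence of 2D waypoints)
// by how closely it passes the enemy flag and returns to base, without
// touching the real game engine. All coordinates are illustrative.
public class OfflineEvolution {
    static final double[] BASE = {0, 0};
    static final double[] FLAG = {100, 100};

    static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }

    // Lower is better: distance from the route's midpoint to the flag
    // plus distance from the route's end back to base.
    static double fitness(List<double[]> route) {
        double[] mid = route.get(route.size() / 2);
        double[] end = route.get(route.size() - 1);
        return dist(mid, FLAG) + dist(end, BASE);
    }

    static List<double[]> randomRoute(int len, Random rng) {
        List<double[]> r = new ArrayList<>();
        for (int i = 0; i < len; i++)
            r.add(new double[]{rng.nextDouble() * 200, rng.nextDouble() * 200});
        return r;
    }

    // Gaussian nudge of a single random waypoint.
    static List<double[]> mutate(List<double[]> route, Random rng) {
        List<double[]> copy = new ArrayList<>(route);
        int i = rng.nextInt(copy.size());
        double[] p = copy.get(i);
        copy.set(i, new double[]{p[0] + rng.nextGaussian() * 10,
                                 p[1] + rng.nextGaussian() * 10});
        return copy;
    }

    // Minimal (1+1) background loop: keep the better of parent and child.
    // A real setup would use a full population and selection strategy.
    static List<double[]> evolve(int generations, Random rng) {
        List<double[]> best = randomRoute(9, rng);
        for (int g = 0; g < generations; g++) {
            List<double[]> child = mutate(best, rng);
            if (fitness(child) < fitness(best)) best = child;
        }
        return best;
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        List<double[]> initial = randomRoute(9, rng);
        List<double[]> best = evolve(2000, rng);
        System.out.printf("initial=%.1f evolved=%.1f%n",
                fitness(initial), fitness(best));
    }
}
```

Because thousands of these evaluations run per second, you can churn through generations between play sessions and only deploy the survivors against humans.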
A few papers dealing with real-time genetic algorithms (worth skimming the abstracts to see if you're interested):
- GA for complex real time scheduling
- Aircraft landing scheduling with real time GA
- Musical Improvisation with real time GA
- Recommendation engine with real time GA
Also worth mentioning: something similar has been done before using neuroevolution (evolving neural networks with GAs; NEAT is a good example and has been used for FPS games, AFAIK), but the considerations about the training set still stand.