views: 106

answers: 2
The idea is to develop my own bot to test the game's behavioral rules; as a proof of concept I have settled on using Unreal Tournament (the 1999 or 2004 version). Initially I would like to test the CTF game mode. What I aim to achieve is a bot, preferably in Java, that is fully controllable: there is no AI, only a mechanism for injecting a set of commands such as jump, run to this point on the map, and so on.

I want to use an evolutionary algorithm that works in real time and evolves a sequence of movements taking the bot from its initial position at the base to the opponent's flag and back home. For simplicity I don't want it to perform any complex actions like shooting; it should instead avoid any conflicts. The main actions would be jump, run, rotate and move to.

I am currently connected to the game through the GameBots API and am looking for tutorials or guidance on actually writing the bot and hooking it up to a genetic algorithm framework such as the Watchmaker Framework. I have a few conditions in mind for the fitness function, I have read extensively around the subject, and I have run into topics such as bot pathing. I hope I am being clear about what I want to achieve. Do you know of any tutorials or readings that may help me?
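
Not a full tutorial, but below is a minimal sketch of how such a command-sequence genome might be wired into the Watchmaker Framework. Only the Watchmaker classes (AbstractCandidateFactory, EvolutionaryOperator, FitnessEvaluator, GenerationalEvolutionEngine and so on) come from the framework; the Action and BotCommand types, the coordinate ranges and all GA parameters are invented for illustration, and the fitness evaluator is left as a stub to be filled in with a replay of the sequence through GameBots.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    import org.uncommons.maths.random.MersenneTwisterRNG;
    import org.uncommons.watchmaker.framework.EvolutionaryOperator;
    import org.uncommons.watchmaker.framework.FitnessEvaluator;
    import org.uncommons.watchmaker.framework.GenerationalEvolutionEngine;
    import org.uncommons.watchmaker.framework.factories.AbstractCandidateFactory;
    import org.uncommons.watchmaker.framework.selection.RouletteWheelSelection;
    import org.uncommons.watchmaker.framework.termination.GenerationCount;

    /** One primitive action; translating it into the corresponding GameBots message is not shown. */
    enum Action { MOVE_TO, RUN, JUMP, ROTATE }

    /** A single gene: an action plus the parameters it needs (unused ones are simply ignored). */
    class BotCommand {
        final Action action;
        final double x, y, yaw;
        BotCommand(Action action, double x, double y, double yaw) {
            this.action = action; this.x = x; this.y = y; this.yaw = yaw;
        }
    }

    /** Builds random command sequences for the initial population. */
    class CommandSequenceFactory extends AbstractCandidateFactory<List<BotCommand>> {
        private final int length;
        CommandSequenceFactory(int length) { this.length = length; }

        static BotCommand randomCommand(Random rng) {
            Action a = Action.values()[rng.nextInt(Action.values().length)];
            // Coordinates ought to be sampled from the reachable area of the CTF map.
            return new BotCommand(a, rng.nextDouble() * 4000 - 2000,
                                     rng.nextDouble() * 4000 - 2000,
                                     rng.nextDouble() * 360);
        }

        @Override
        public List<BotCommand> generateRandomCandidate(Random rng) {
            List<BotCommand> genome = new ArrayList<BotCommand>(length);
            for (int i = 0; i < length; i++) {
                genome.add(randomCommand(rng));
            }
            return genome;
        }
    }

    /** Point mutation: each gene has a small chance of being replaced by a fresh random command. */
    class CommandMutation implements EvolutionaryOperator<List<BotCommand>> {
        private final double rate;
        CommandMutation(double rate) { this.rate = rate; }

        @Override
        public List<List<BotCommand>> apply(List<List<BotCommand>> candidates, Random rng) {
            List<List<BotCommand>> mutated = new ArrayList<List<BotCommand>>(candidates.size());
            for (List<BotCommand> genome : candidates) {
                List<BotCommand> copy = new ArrayList<BotCommand>(genome);
                for (int i = 0; i < copy.size(); i++) {
                    if (rng.nextDouble() < rate) {
                        copy.set(i, CommandSequenceFactory.randomCommand(rng));
                    }
                }
                mutated.add(copy);
            }
            return mutated;
        }
    }

    public class CtfRunEvolver {
        public static void main(String[] args) {
            FitnessEvaluator<List<BotCommand>> fitness = new FitnessEvaluator<List<BotCommand>>() {
                public double getFitness(List<BotCommand> genome,
                                         List<? extends List<BotCommand>> population) {
                    // Placeholder: replay the sequence through GameBots (or a simulation)
                    // and score progress towards the enemy flag and back to base.
                    return 1.0;
                }
                public boolean isNatural() { return true; } // higher fitness is better
            };

            GenerationalEvolutionEngine<List<BotCommand>> engine =
                new GenerationalEvolutionEngine<List<BotCommand>>(
                    new CommandSequenceFactory(40),   // 40 commands per candidate
                    new CommandMutation(0.05),        // 5% per-gene mutation rate
                    fitness,
                    new RouletteWheelSelection(),
                    new MersenneTwisterRNG());

            List<BotCommand> best = engine.evolve(50, 2, new GenerationCount(100));
            System.out.println("Best sequence has " + best.size() + " commands");
        }
    }

A crossover operator (for example, splicing two sequences at a random point) would normally be combined with the mutation through Watchmaker's EvolutionPipeline; it is omitted here to keep the sketch short.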

+2  A: 

Just a hint: have a look around aigamedev.com. I'm not sure whether anyone there still uses the original UT engine, though.

If you aren't tied to that particular game engine, you'll surely find tutorials for the open-sourced id engines, such as Quake 3's.

tweber
Thanks for the site, tweber; it is proving to be a great source of information. I really feel that a small part of my solution is tied to the navigation mesh along which the action sequences will need to be evolved. I want to hunt down a good tutorial that explains, from the bottom up, the steps for writing a functional bot for any FPS-style game.
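
One way the navigation mesh could feed into the evolution, sketched below under the same Watchmaker setup as above: instead of evolving raw coordinates, each gene is an index into the list of navigation points reported by the server, so every evolved movement target is a reachable node. NavPoint is a stand-in for whatever you parse out of the navigation messages GameBots sends; the class and field names are invented for illustration.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    import org.uncommons.watchmaker.framework.factories.AbstractCandidateFactory;

    /** Stand-in for a navigation point parsed from the server's navigation messages. */
    class NavPoint {
        final String id;
        final double x, y, z;
        NavPoint(String id, double x, double y, double z) {
            this.id = id; this.x = x; this.y = y; this.z = z;
        }
    }

    /** Candidate = ordered list of indices into the known nav points; the bot controller
        would turn each index back into a move-to command aimed at that node's coordinates. */
    class NavPathFactory extends AbstractCandidateFactory<List<Integer>> {
        private final List<NavPoint> navPoints;
        private final int pathLength;

        NavPathFactory(List<NavPoint> navPoints, int pathLength) {
            this.navPoints = navPoints;
            this.pathLength = pathLength;
        }

        @Override
        public List<Integer> generateRandomCandidate(Random rng) {
            List<Integer> path = new ArrayList<Integer>(pathLength);
            for (int i = 0; i < pathLength; i++) {
                path.add(rng.nextInt(navPoints.size())); // every gene is a reachable node
            }
            return path;
        }
    }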
lennyks
+2  A: 

Keep in mind that for real-time genetic algorithms to be applicable, you are generally better off starting by gathering a consistent training set of adversarial strategies (ideally from different people) and using that set to evaluate fitness in the background, simulating games in fast-forward, so that within a reasonable time you obtain a number of decent strategies to start from (it's unthinkable to have humans evaluate the fitness of random strategies; it would most likely take far too long). A classic example implemented this way is the game of checkers, but more complex games can leverage the same approach (there is a famous example of a futuristic naval battle game in which a GA-aided strategy defeated human opponents; I can't remember the details, but I'll look it up and edit). Once you have evolved a set of decent strategies, you can fire up the real-time GA so that it keeps learning from humans.

Also keep in mind that this could be an extremely slow process, and there might be no value in using real-time genetic algorithms at all: you may be better off collecting strategies from human opponents and running the evolution in the background, so that the next time the same opponent plays you might have evolved a strategy capable of defeating him. Unless you've got loads of people playing, in which case real-time evolution might make sense; but if the target is to challenge people with increasingly good bots, the background approach is what I would do. Having people evaluate the fitness of tentative strategies could be 1) slow and 2) boring for them.
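
As a rough illustration of what evaluating fitness in the background could look like under the setup sketched in the question: a FitnessEvaluator that replays each candidate sequence through a fast-forward simulation against recorded opponent traces rather than in a live match. OpponentTrace, FastForwardSimulator and SimResult are hypothetical placeholders you would have to implement on top of logged GameBots data, BotCommand is the genome element from the earlier sketch, and the scoring weights are arbitrary.

    import java.util.List;

    import org.uncommons.watchmaker.framework.FitnessEvaluator;

    /** Placeholder for one logged opponent run (positions over time) parsed from GameBots logs. */
    interface OpponentTrace {}

    /** Minimal outcome of replaying one candidate sequence against one recorded trace. */
    interface SimResult {
        boolean reachedEnemyFlag();
        boolean returnedToBase();
        double timeTakenSeconds();
        int closeEncountersWithOpponent();
    }

    /** Hypothetical fast-forward simulator built on top of the map data and the logged traces. */
    interface FastForwardSimulator {
        SimResult replay(List<BotCommand> genome, OpponentTrace trace);
    }

    /** Scores candidates offline, averaged over the whole training set of recorded runs. */
    class OfflineCtfEvaluator implements FitnessEvaluator<List<BotCommand>> {
        private final List<OpponentTrace> trainingSet;
        private final FastForwardSimulator simulator;

        OfflineCtfEvaluator(List<OpponentTrace> trainingSet, FastForwardSimulator simulator) {
            this.trainingSet = trainingSet;
            this.simulator = simulator;
        }

        public double getFitness(List<BotCommand> genome,
                                 List<? extends List<BotCommand>> population) {
            double total = 0.0;
            for (OpponentTrace trace : trainingSet) {
                SimResult run = simulator.replay(genome, trace);
                double score = 0.0;
                if (run.reachedEnemyFlag()) score += 100.0;        // got to the flag
                if (run.returnedToBase())   score += 200.0;        // brought it home
                score -= 0.1 * run.timeTakenSeconds();             // prefer faster runs
                score -= 5.0 * run.closeEncountersWithOpponent();  // the bot should avoid conflict
                total += score;
            }
            return total / trainingSet.size();
        }

        public boolean isNatural() {
            return true; // higher scores are better
        }
    }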

A few papers dealing with real-time genetic algorithms (worth skimming the abstracts to see if you're interested):

Also worth mentioning: something similar has been done before using neuroevolution (evolving neural networks with GAs; NEAT is a good example and has been used for FPS games, as far as I know), but the considerations about the training set still apply.

JohnIdol
lennyks
That's a very interesting paper, and the encoding you have in mind sounds good, but the paper talks about generating test cases to spot bugs, not about real-time machine learning to generate better game strategies. I still think you should collect a training set and evolve your strategy 'offline' against that training set; at least, that's what I would opt for.
JohnIdol
You are absolutely right about real-time learning; my mistake. In my initial attempt I was considering a random sequence that would improve thanks to a fitness function measuring a few aspects of the bot's behaviour; I wanted to explore the large space of solutions to find the optimal one. However, I think I will take up your suggestion and play the game myself to record some sequences for the evolution. If you have any more views or suggestions, please post them. Again, cheers for the answers.
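
If you do record sequences from your own play, one way they could enter the evolution is as seed candidates, so that they make up part of the first generation. The snippet below would replace the plain evolve(...) call in the first sketch; it assumes Watchmaker's evolve overload that accepts a collection of seed candidates (check the API docs), and loadRecordedRuns is a hypothetical helper that parses whatever format you log the commands in.

    // Inside the main() of the first sketch, replacing the plain evolve(...) call.
    List<List<BotCommand>> recordedRuns = loadRecordedRuns("recorded-runs/");
    List<BotCommand> best = engine.evolve(
            50,                    // population size
            2,                     // elite count
            recordedRuns,          // recorded sequences are seeded into the first generation
            new GenerationCount(100));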
lennyks