There are many papers about ranged-combat artificial intelligence, like Killzone's (see this paper) or Halo's. But I haven't been able to find much about fighting-game AI, except for this work, which uses neural networks to learn how to fight; that is not exactly what I'm looking for.

Western AI in games seems heavily focused on FPSs! Does anyone know which techniques are used to implement a decent fighting AI? Hierarchical finite state machines? Decision trees? Those could end up being pretty predictable.

+2  A: 

In our research labs, we are using AI planning technology for games. AI planning is used by NASA to build semi-autonomous robots. Planning can produce less predictable behavior than state machines, but planning is computationally expensive: solving general planning problems has a huge computational complexity.
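
To make this concrete, here is a minimal sketch of a STRIPS-style forward planner applied to a fighting goal. The action names, preconditions, and facts are purely illustrative assumptions, not taken from any particular engine:

    from collections import deque

    # Each action has preconditions (facts that must hold), facts it adds and
    # facts it removes. Names and facts are illustrative, not from a real engine.
    ACTIONS = {
        "close_distance": {"pre": {"far"},   "add": {"close"},      "rem": {"far"}},
        "feint":          {"pre": {"close"}, "add": {"enemy_open"}, "rem": set()},
        "uppercut":       {"pre": {"close", "enemy_open"},
                           "add": {"enemy_hit"}, "rem": {"enemy_open"}},
    }

    def plan(start, goal):
        """Breadth-first search over world states; returns a list of action names."""
        frontier = deque([(frozenset(start), [])])
        visited = {frozenset(start)}
        while frontier:
            state, steps = frontier.popleft()
            if goal <= state:
                return steps
            for name, a in ACTIONS.items():
                if a["pre"] <= state:
                    nxt = frozenset((state - a["rem"]) | a["add"])
                    if nxt not in visited:
                        visited.add(nxt)
                        frontier.append((nxt, steps + [name]))
        return None  # no plan reaches the goal

    print(plan({"far"}, {"enemy_hit"}))  # ['close_distance', 'feint', 'uppercut']

The same search over a richer action set is what makes the behavior harder to predict than a hand-written state machine: the fighter composes move sequences instead of following fixed transitions.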

AI planning is an old but interesting field, and in gaming in particular, people have only recently started using planning to drive their engines. The expressiveness of current implementations is still limited, but in theory it is limited "only by our imagination".

Russell and Norvig devote four chapters to AI planning in their book on Artificial Intelligence. Other related terms you might be interested in are Markov decision processes and Bayesian networks; these topics also get sufficient exposure in that book.

If you are looking for a ready-made engine that is easy to start using, AI planning would probably be gross overkill. I don't know of any AI planning engine for games, but we are developing one. If you are interested in the long term, we can talk about it separately.

Amit Kumar
Right, something like that was used in Fallout 3, as far as I know. But AI planning doesn't seem to be the answer here, as a fighter does not really 'plan' anything, right? There's no goal like 'uppercut' or 'hit enemy'... is there? It seems like something more reactive, combined with a certain fighting style depending on the character (big guys are slow, they tend to block instead of evade, and so on). Nevertheless, I'll have a deeper look at AI planning.
Notnasiul
+1  A: 

You seem to already know the techniques for planning and execution. Another thing you need to do is predict the opponent's next move and maximize the expected reward of your response. I wrote a blog article about this: http://tinyurl.com/3x4lxao and http://tinyurl.com/3yojyom . The game I consider is very simple, but I think the main ideas from Bayesian decision theory might be useful for your project.
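
As a rough illustration of the decision-theoretic step (the move names, payoff numbers, and predicted probabilities below are made up, not taken from the blog posts): estimate a distribution over the opponent's next move, then pick the response with the highest expected reward.

    # Toy expected-reward decision; moves, payoffs and probabilities are made up.

    # Estimated probability of the opponent's next move (e.g. observed frequencies).
    opponent_dist = {"attack": 0.5, "block": 0.3, "throw": 0.2}

    # payoff[my_move][opponent_move]: reward to me for each combination.
    payoff = {
        "attack": {"attack": 0.0, "block": -1.0, "throw": 2.0},
        "block":  {"attack": 1.0, "block":  0.0, "throw": -2.0},
        "evade":  {"attack": 0.5, "block":  0.0, "throw": 1.0},
    }

    def best_response(dist, payoff):
        """Return the move maximizing expected reward under the predicted distribution."""
        expected = {
            mine: sum(dist[opp] * reward for opp, reward in row.items())
            for mine, row in payoff.items()
        }
        return max(expected, key=expected.get), expected

    move, expected = best_response(opponent_dist, payoff)
    print(move, expected)  # 'evade' wins here: 0.45 vs 0.1 for the others

Updating opponent_dist from the opponent's recent moves (a simple Bayesian update of the counts) is what lets the AI adapt to the player's habits.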

pberkes
Thanks a lot! And nice blog, I'm following it now ;) (I like that Pacman capture-the-flag game!)
Notnasiul
Thank you! By the way, we plan to have an open-source variant of the Pacman game by next February.
pberkes
+1  A: 

Another route to consider is the so-called Ghost AI, as described here & here. As the name suggests, you basically extract rules from actual gameplay; the first paper does it offline, and the second extends the methodology to online, real-time learning.
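
As a rough sketch of the idea (the situation encoding, data structures, and move names are my own illustration, not taken from the cited papers): record which move the observed player made in each coarse game situation, then replay the most frequent one when the AI faces a similar situation.

    from collections import Counter, defaultdict

    def situation(my_hp, enemy_hp, distance):
        """Coarse game situation used as the condition part of a rule."""
        return ("winning" if my_hp >= enemy_hp else "losing",
                "near" if distance < 2.0 else "far")

    class GhostAI:
        def __init__(self):
            self.rules = defaultdict(Counter)  # situation -> counts of observed moves

        def observe(self, my_hp, enemy_hp, distance, move):
            """Record a (situation, move) pair from a recorded match (offline learning)."""
            self.rules[situation(my_hp, enemy_hp, distance)][move] += 1

        def act(self, my_hp, enemy_hp, distance, default="idle"):
            """Imitate the observed player: pick their most frequent move here."""
            counts = self.rules.get(situation(my_hp, enemy_hp, distance))
            return counts.most_common(1)[0][0] if counts else default

    ghost = GhostAI()
    ghost.observe(80, 60, 1.0, "uppercut")
    ghost.observe(80, 60, 1.5, "uppercut")
    ghost.observe(40, 90, 5.0, "keep_away")
    print(ghost.act(70, 50, 1.2))  # 'uppercut'

The online variant would call observe() during the match itself instead of on pre-recorded games.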

Also check out the author's webpage; there are a number of other interesting papers on fighting games there.

Eugen Constantin Dinca
That Ghost AI is pretty interesting, but not quite what I was looking for. What I wanted was precisely a way of giving the AI its own 'personality'. I've been coding this weekend, and the combination of an FSM with some randomness based on 'technique weights' (offensive, defensive, blocker/avoider...) seems appropriate, roughly along the lines of the sketch below. And that guy's webpage is a design crime ;)
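
A minimal sketch of what I mean; the state names and weight values are just placeholders, not the actual code:

    import random

    class Fighter:
        def __init__(self, weights):
            # e.g. {"offensive": 4, "defensive": 2, "avoider": 1} for a big, slow blocker
            self.weights = weights
            self.state = "idle"

        def choose_state(self, enemy_attacking):
            """Pick the next FSM state, biased by this character's technique weights."""
            if enemy_attacking:
                options = {"block": self.weights["defensive"],
                           "evade": self.weights["avoider"]}
            else:
                options = {"attack": self.weights["offensive"], "idle": 1}
            states, w = zip(*options.items())
            self.state = random.choices(states, weights=w)[0]
            return self.state

    brawler = Fighter({"offensive": 4, "defensive": 2, "avoider": 1})
    print(brawler.choose_state(enemy_attacking=True))  # mostly 'block', sometimes 'evade'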
Notnasiul
The whole purpose of the Ghost AI was to have an opponent with its own personality, even if it got trained by imitating another player. Your approach sounds interesting; I'm not sure how much tweaking it will need.
Eugen Constantin Dinca