views: 1336, answers: 12

In the old days of gaming, I'm sure simple switch/case statements (in a sense) would have done just fine for most of the game "AI." However, as games have become increasingly complex, especially with the leap to 3D, more complex algorithms are needed. My question is, are actual machine learning algorithms (like reinforcement learning) used in game AI at this point? Or is that still mostly confined to research projects at universities (which I have been exposed to)?

If not actual machine learning algorithms, then what is driving bleeding-edge commercial game AI? Is it simply highly complex but static (non-ML) algorithms that are able to cover most of the possibilities? And if so, what actual types of algorithms are used?

I've always been curious about this, thanks!

Edit: After thinking about it some more, I can clarify a bit further. How do the agents in the game make decisions? If they are not using actual learning algorithms in real time, was a learning algorithm perhaps used during development to produce a model (a static algorithm), and is that model then used to make decisions in the game? Or was a static decision-making algorithm hand-coded, in a sense?

+8  A: 

A large number of games just use finite state machines.

There are some really good resources on the net on this:

Joel Martinez
And I assume these FSMs are usually designed by hand (rather than generated by an algorithm)?
JoeCool
Yes, they are almost always designed by hand.
Kylotan
+2  A: 

I don't think machine learning is very common in commercial games yet, but one notable example was Black & White, which was all about training a pet, and making the game's inhabitants believe in the player.

Bill the Lizard
Oh boy, how I loved that game! The gesture controls had some quirks, but still...
Vinko Vrsalovic
+1  A: 

I think several games use (more or less complex) neural networks.

http://en.wikipedia.org/wiki/Game_AI

Dario
It does happen occasionally, but it's considered a very-low-bang-for-the-buck technology (I would say rightfully so).
chaos
A: 
Allan Simonsen
I could not find any mention of Spore using genetic algorithms. Can you please provide a link that mentions genetic algorithms? Otherwise this should be -1. Genetic algorithms generally would be too computationally intensive to be useful in video games.
dss539
I disagree: for my Master's thesis I was using GAs in real time to evolve group positions for a squad-based combat game. Typically you don't need an instant result, so calculations can be spread out over several frames.
Kylotan
@Kylotan - That's pretty awesome. Any chance you could post a link to your thesis? What sort of fitness function did you use to evaluate the positions?
dss539
Here's a short interview with Will Wright from Maxis (http://en.kioskea.net/actualites/spore-computer-game-aliens-coming-to-virtual-life-10462-actualite.php3): "Creatures pass on virtual genes to their progeny..."
Allan Simonsen
Yeah, calling that GA is a bit of a stretch.
chaos
Allan, you may want to read that wikipedia article you linked to. "Genetic algorithm" has a fairly specific meaning, and Wright's description of passing "virtual genes" does not fit that meaning. Wright does not claim that to be a GA implementation.
dss539
@dss539 - unfortunately it's not in any fit state to put online. But basically I had different fitness functions for different situations (eg. one for setting an ambush, one for hiding, one for maximising offensive capability) and these were basically different weightings of how much terrain the squad could see, how much cover they were in, how close they were to each other, etc. Each frame I'd run a few generations and it would quickly converge towards a reasonable solution which I could start executing, even if it would possibly be replaced by a better solution in a few seconds.
Kylotan
@Kylotan - Thanks for the details. I hadn't thought of using GAs for real-time strategy/tactics decision making. I suppose if you only need a few generations with a small gene pool and a quick running fitness function, a GA could search that solution space fairly well. That's pretty cool, thanks for sharing.
dss539
+5  A: 

A* search and relatives such as HPA* are probably the most thoroughly grasped AI concepts in the game industry. They're usually thought of and spoken of as terrain pathfinding algorithms, but occasionally somebody realizes that they can be used for 'pathfinding' within spaces like decision trees, and then things get really interesting.
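
For reference, a minimal sketch of plain A* on a grid might look like this (4-way movement with a Manhattan-distance heuristic; the grid format and names are just for illustration):

```python
import heapq

def a_star(grid, start, goal):
    """Find a shortest path on a 2D grid of 0 (walkable) / 1 (blocked) cells."""
    def heuristic(a, b):
        # Manhattan distance: admissible for 4-way movement
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_heap = [(heuristic(start, goal), 0, start)]  # entries are (f, g, node)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            # Reconstruct the path by walking the parent links backwards
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue  # off the map
            if grid[nxt[0]][nxt[1]] == 1:
                continue  # blocked cell
            new_g = g + 1
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g
                came_from[nxt] = node
                heapq.heappush(open_heap, (new_g + heuristic(nxt, goal), new_g, nxt))
    return None  # no path exists
```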

chaos
+1  A: 

Alpha-beta pruning drives board games such as Chess and simpler games. It is a way to prune a state space to enable efficient searching. Variants of A* search allow for exploration of a board in robot simulations etc. Both are "classical AI" rather than machine learning algorithms per se. Samuel's Checkers player and TD-Gammon are examples of using reinforcement learning for playing Checkers and Backgammon, respectively.
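
A bare-bones sketch of minimax with alpha-beta pruning, assuming the caller supplies game-specific children(), evaluate() and is_terminal() functions (the names are placeholders, not from any particular engine):

```python
def alpha_beta(state, depth, alpha, beta, maximizing, children, evaluate, is_terminal):
    """Return the minimax value of `state`, pruning branches that cannot affect the result."""
    if depth == 0 or is_terminal(state):
        return evaluate(state)
    if maximizing:
        value = float('-inf')
        for child in children(state):
            value = max(value, alpha_beta(child, depth - 1, alpha, beta, False,
                                          children, evaluate, is_terminal))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # remaining siblings cannot change the minimizer's choice
        return value
    else:
        value = float('inf')
        for child in children(state):
            value = min(value, alpha_beta(child, depth - 1, alpha, beta, True,
                                          children, evaluate, is_terminal))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune
        return value
```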

Yuval F
+7  A: 

There is no point having ML in games, at least in most consumer games, because the AI would very easily become too hard to beat and thus not enjoyable for the player. A lot of the effort in game AI falls into three parts. The first is allowing the computer to cheat: the AI usually knows where the player is and knows in advance the best routes around the environment. This is necessary, otherwise the AI would be fumbling down dead ends all the time, which isn't great. The second is making the NPCs dumb enough for the player to beat. It is quite easy to write AI that always beats the player (think of Half-Life, where you face a team of marines); the hard part is balancing the appearance of intelligence with playability. The final part is making sure the AI only takes up a limited amount of resources, both in terms of CPU time and memory usage.

Another minus point to using ML is that the state of the AI needs to be stored between sessions; otherwise the AI would have to start from scratch every time. On a PC this isn't a problem, but consoles used to have very limited long-term storage, which ruled out saving the state information.

An example of AI 'cheating': In Transport Tycoon, the AI companies were never charged for modifying the height of the terrain. I know this because I ported it to the Mac many years back.

In the first FPS I did, the AI always headed towards the player, but the direction was weighted using a random sample from a normal distribution, so most of the time the direction was towards the player, but occasionally it was way off. That helped the AI get out of dead ends. This was in the days before there was enough CPU grunt to do A* searching.
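
That trick might look something like this (a toy sketch; the spread value is made up):

```python
import math
import random

def choose_heading(enemy_pos, player_pos, spread_radians=0.8):
    """Head roughly towards the player, with normally distributed noise so the
    enemy occasionally wanders off course and can escape dead ends."""
    dx, dy = player_pos[0] - enemy_pos[0], player_pos[1] - enemy_pos[1]
    direct = math.atan2(dy, dx)  # exact bearing to the player
    return direct + random.gauss(0.0, spread_radians)  # usually close, sometimes way off
```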

Skizz
+18  A: 

I think you're overestimating the capabilities of most modern game AI, which is great, because that's exactly what modern game developers are hoping for. They invest time into making the system appear more intelligent than it is, for example by having the AI characters talk about what they're going to do, or by occasionally following pre-set scripts that perform a complex series of tasks.

If not actual machine learning algorithms, then what is driving bleeding-edge commercial game AI? Is it simply highly complex but static (non-ML) algorithms that are able to cover most of the possibilities?

There are actually very few possibilities usually. As mentioned in another answer, there is typically a finite state machine at work. eg. A typical enemy in a shooting game may be in one of the following states: idle, alert (they know there is trouble near), hunting (they are seeking an enemy), attacking (they can see the enemy and engage it), and fleeing (they are attempting to escape from an enemy). Transitions between the states can be simple events such as a noise being heard, an opponent being seen, a health value dropping below a certain threshold, etc. Very trivial, really.
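
A minimal sketch of that kind of state machine (the states and events here are purely illustrative):

```python
# Transition table: (current state, event) -> next state
TRANSITIONS = {
    ("idle",      "noise_heard"):    "alert",
    ("alert",     "enemy_seen"):     "attacking",
    ("alert",     "all_quiet"):      "idle",
    ("hunting",   "enemy_seen"):     "attacking",
    ("attacking", "enemy_lost"):     "hunting",
    ("attacking", "low_health"):     "fleeing",
    ("fleeing",   "reached_safety"): "alert",
}

class EnemyAI:
    def __init__(self):
        self.state = "idle"

    def handle_event(self, event):
        # Unknown (state, event) pairs simply leave the state unchanged
        self.state = TRANSITIONS.get((self.state, event), self.state)

ai = EnemyAI()
ai.handle_event("noise_heard")  # idle -> alert
ai.handle_event("enemy_seen")   # alert -> attacking
```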

Implementation of each state can usually be decomposed into a small number of simple actions, eg. move to position, look in direction, shoot at target, etc. These low level activities are well-documented and widely used. (eg. A* search for pathfinding, vector mathematics for steering and orientation.) All of these building blocks work just as well in 3D as they did in 2D, for the most part.

Additionally, the more complex-looking AI is often scripted, which is to say that the behaviour is pre-programmed in a simple programming language to work in a very specific game situation. Scripts for specific situations can make assumptions about the environment (eg. the location of cover to hide behind, the proximity of allies and enemies, etc) and can provide very specific goals accordingly. More general scripts can be triggered by a set of predetermined event types (eg. Enemy Seen, Ally Killed, Unidentified Noise Heard) and very simple responses written in each case (eg. IF self.health > 75% THEN attackNearestEnemy ELSE fleeToSafety).

...was a learning algorithm perhaps used during development to produce a model (a static algorithm), and is that model then used to make decisions in the game?

This is quite common in situations that are modelling vehicles, such as racing games - you might feed an algorithm the race track as a series of points and inputs based on those points, and get a learning algorithm to develop a strategy that completes the laps in the best time. Eventually you can ship that with the game. But that is a simple mapping from a small number of inputs (angle of road ahead, proximity of obstacles, current speed) to a small number of outputs (desired speed, desired steering), which lends itself well to machine learning. Games that simulate human behaviour can rarely fall back on these simple function approximations, so we tend to rely on manual simulation of distinct behavioural states.
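
As a toy illustration of that kind of offline-trained mapping (the numbers and the linear model below are made up; a real game would use recorded telemetry and probably a richer model), the shipped "driver" can end up as little more than a cheap matrix multiply:

```python
import numpy as np

# Recorded training samples (illustrative values only):
# inputs  = [angle of road ahead, proximity of nearest obstacle, current speed]
# outputs = [desired speed, desired steering]
X = np.array([[ 0.05, 0.9, 40.0],
              [ 0.60, 0.4, 55.0],
              [-0.30, 0.7, 35.0],
              [ 0.10, 0.2, 60.0]])
Y = np.array([[45.0,  0.02],
              [30.0,  0.50],
              [40.0, -0.25],
              [25.0,  0.05]])

# Least-squares fit of outputs = inputs @ W, done once during development
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def drive(road_angle, obstacle_proximity, speed):
    """At runtime the learned 'driver' is just a matrix multiply."""
    desired_speed, steering = np.array([road_angle, obstacle_proximity, speed]) @ W
    return desired_speed, steering
```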

Sometimes there could be a hybrid approach, where the finite state machine transitions could be trained to be more optimal, but that is unlikely to figure in very many games in practice, since realistic transitions are usually trivial for a designer to implement.

Kylotan
+3  A: 

If you're interested in learning about various AIs, I had fun with Xpilot-AI. The "star" bot built by the people running the project was a fixed rule-based controller, which was in turn the product of a genetic algorithm. Here's how it went:

  • They built a basic rule-based bot (if we're about to hit the wall, turn left and set thrusters to full...)
  • They broke the bot controller into parameters (So instead of a fixed "about to hit the wall" conditional, you'd break it into "distance to wall < X," "our heading is within Y degrees of the wall," and "speed > Z.")
  • Genetic algorithms were used to train the optimal values of X, Y, Z, and so on.
  • After a period of learning, the values were copied into the bot's source, and it was declared "done."
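
A minimal sketch of that kind of offline tuning (the parameters and the fitness function below are placeholders; a real setup would run the bot in the game and score its performance):

```python
import random

# Each genome is a set of controller parameters,
# e.g. [wall_distance, heading_tolerance, speed_limit]
def random_genome():
    return [random.uniform(0, 100), random.uniform(0, 180), random.uniform(0, 10)]

def fitness(genome):
    """Placeholder: in practice, run the bot with these parameters and return its score."""
    wall_distance, heading_tolerance, speed_limit = genome
    return -abs(wall_distance - 30) - abs(heading_tolerance - 45) - abs(speed_limit - 6)

def evolve(generations=200, pop_size=30, mutation_rate=0.2):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # keep the best half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            if random.random() < mutation_rate:
                i = random.randrange(len(child))
                child[i] *= random.uniform(0.8, 1.2)  # small mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()  # the winning values get hard-coded into the shipped bot
```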

So, while active learning can be computationally expensive mid-game, there's still value to using learning algorithms to create your AI, if you don't think you can make it smart enough.

Another benefit of genetic algorithms is that you can define the "correct" outcome as a bot that wins only 15% of the time, in order to train an easy-mode bot.

ojrac
+1  A: 

I would like to make the case that ML can be used in video games. The following shows very forward-thinking research into combining neural networks with evolutionary approaches to create a whole new class of game experience. I had the great pleasure of taking classes from the inventor of this algorithm, NEAT. It's not a perfect solution, but it shows great potential.

http://nerogame.org/

Michael Rosario
A: 

In addition to my answer above, there is a paper on this:

Machine learning in digital games: a survey

Note that most (but not all) of the games listed within are purely academic exercises, as expected.

Kylotan
A: 

Oftentimes there doesn't need to be any real learning in the AI engine (see Kylotan's answer).

However, a learning AI algorithm can be relatively easy to design for a very specific and simple task. My idea was to give the computer some direction on the task, but then also code in pattern recognition so it can learn from its mistakes. When AI is broken down into these components, it becomes something my feeble mind can comprehend.

You can take Tic Tac Toe as a simple example. I wrote a Tic Tac Toe game with AI a few months ago. I simply gave the computer knowledge of the rules and how to block a winning move -- that's it. You can then set it up to play itself, and behind the scenes it maintains a list of past moves and recognizes patterns as it goes, becoming "smarter" as it gains experience.

After 10,000 games or so, if you then play it yourself, it can be hard to beat. The AI in this game could be optimized to learn much faster if I took reflections and rotations of the board into consideration. But it was still a fun working example of a learning AI engine.
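
A sketch of the kind of bookkeeping that approach implies (my guess at a minimal version, not the actual code): record each (board, move) pair a game used, and once the game ends, nudge the scores so future games prefer historically successful moves.

```python
import random
from collections import defaultdict

move_scores = defaultdict(float)  # (board_as_tuple, move) -> learned score

def choose_move(board, legal_moves, exploration=0.1):
    """Mostly pick the best-scoring known move; sometimes explore a random one."""
    if random.random() < exploration:
        return random.choice(legal_moves)
    return max(legal_moves, key=lambda m: move_scores[(tuple(board), m)])

def learn_from_game(history, won):
    """history is a list of (board, move) pairs from one finished game."""
    reward = 1.0 if won else -1.0
    for board, move in history:
        move_scores[(tuple(board), move)] += reward
```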

From a practical point of view, however, a learning algorithm may not be worth the processing power in a game. After all, the computer has to maintain a list or some sort of structure to store its learned intelligence. That means more RAM usage, and potentially some costly lookups. And in a game with a lot of moving pieces, this can add up.

Steve Wortham