Views: 212
Answers: 4
I'm considering using a neural network to power the enemies in a space shooter game I'm building, and I'm wondering: how do you train a neural network when there is no single definitive set of correct outputs for it?

+1  A: 

You can check out AI Dynamic game difficulty balancing for various AI techniques and references.

(IMO, you can implement enemy behaviors, like "surround the enemy", which will be really cool, without delving into advanced AI concepts)

Edit: since you're making a space shooter game and want some kind of AI for your enemies, I believe you'll find this link interesting: Steering Behaviors For Autonomous Characters

Nick D
This is interesting. It looks like I could at least engineer the game to offer dynamic difficulty, with a neural net figuring out when the players are having fun.
RCIX
+4  A: 

I'm studying neural networks at the moment, and they seem quite useless without well-defined input and output encodings, and they don't scale at all with complexity (see http://en.wikipedia.org/wiki/VC%5Fdimension). That's why neural network research has had so little application since the initial hype 20-30 years ago, while semantic/state-based AI took over everyone's interest because of its success in real-world applications.

  • So a good place to start might be to figure out how to numerically represent the state of the game as inputs for the neural net.
  • The next thing would be to figure out what kind of output would correspond to actions in the game.
  • Think about the structure of the neural network to use. To get interestingly complex behavior from a neural network, it almost has to be recurrent, because recurrent networks have 'memory'; beyond that you don't have much else to go on, and recurrent networks with any complex structure are really hard to train to behave.
  • The areas where neural networks have been successful tend to be classification (image, audio, grammar, etc.) and, with limited success, statistical prediction (what word would we expect to come after this one? what will the stock price be tomorrow?).

In short, it's probably better for you to use neural nets for a small portion of the game rather than as the core enemy AI.
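To make the first two bullet points concrete, here's a minimal sketch of the encode-state / decode-action pipeline. All names and the state variables are illustrative assumptions, not from a real library, and the weights are untrained random values just to show the data flow:

```python
import numpy as np

def encode_state(health, speed, player_dx, player_dy):
    """Pack hypothetical game state into a normalized input vector."""
    return np.array([health / 100.0, speed / 10.0, player_dx, player_dy])

def decode_action(outputs):
    """Map raw network outputs onto in-game actions."""
    turn, thrust, fire = outputs
    return {"turn": float(turn), "thrust": float(thrust), "fire": fire > 0.5}

rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 4))   # input -> hidden weights (untrained)
W2 = rng.normal(size=(3, 6))   # hidden -> output weights (untrained)

def forward(x):
    h = np.tanh(W1 @ x)        # hidden activations
    return np.tanh(W2 @ h)     # outputs squashed into [-1, 1]

action = decode_action(forward(encode_state(75, 4.0, 0.3, -0.6)))
```

The hard part the answer is pointing at is exactly what's missing here: a training signal telling you which `action` was "right" for a given state.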

Charles Ma
I think I can use basic inputs such as health, current speed, and player direction/distance, and then outputs for a shooting direction, a currently-shooting value, and either a thrust vector or a turn speed plus an accel/decel value. Also, can you provide a link to more information about "recurrent" neural networks? I'm not familiar with them.
RCIX
A recurrent network just means that outputs are fed back into neurons as inputs. There are lots of different types of neural nets with different behavior. Some simple ones are Elman networks (http://wiki.tcl.tk/15206) and Hopfield networks (http://en.wikipedia.org/wiki/Hopfield_network). There's not much general information available about how they work and what they're good for, so you're better off searching through university lecture notes and Google Scholar for papers. Again, the reason is that most of this research hasn't left academia, because it's so hard to use these networks to solve real problems.
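A minimal sketch of one Elman-style step (sizes and weights are arbitrary assumptions, not from the linked pages): the previous hidden state is fed back in alongside the new input, which is what gives the network its short-term "memory":

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 4, 6, 3
W_in  = rng.normal(size=(n_hidden, n_in))      # input -> hidden
W_ctx = rng.normal(size=(n_hidden, n_hidden))  # context (previous hidden) -> hidden
W_out = rng.normal(size=(n_out, n_hidden))     # hidden -> output

def elman_step(x, h_prev):
    h = np.tanh(W_in @ x + W_ctx @ h_prev)     # new hidden depends on past state
    return W_out @ h, h

h = np.zeros(n_hidden)
for t in range(5):
    y, h = elman_step(np.ones(n_in), h)        # same input, evolving output
```

Note the loop: because `h` carries over, the network produces a different output at each step even on a constant input, which is the behavior you'd want for time-dependent enemy movement.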
Charles Ma
A: 

Have you considered that it's easily possible to modify an FSM in response to stimulus? It's just a table of numbers, after all; you can hold it in memory somewhere and change the numbers as you go. I wrote about it a bit in one of my blog-fuelled deliriums, and it oddly got picked up by a game-AI news site. Then the guy who built a Ms. Pac-Man AI that could beat humans (and made the real news) left a comment on my blog with a link to even more useful information.

Here's my blog post, with my incoherent ramblings about an idea I had for using Markov chains to continually adapt to a game environment, and perhaps to overlay and combine something the computer has learned about how the player reacts to game situations.

http://bustingseams.blogspot.com/2008/03/funny-obsessive-ideas.html

And here's the link to the excellent resource on reinforcement learning that Mr. Smarty McPacman posted for me.

http://www.cs.ualberta.ca/%7Esutton/book/ebook/the-book.html

Here's another good link:

http://aigamedev.com/open/architecture/online-adaptation-game-opponent/

These are not neural net approaches, but they do adapt and continually learn, and are probably better suited to games than neural networks.
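For a taste of what the Sutton & Barto book covers, here's the one-step tabular Q-learning update at its core, sketched for a tiny game-like setting. The state and action names are placeholders, not from any of the linked resources:

```python
from collections import defaultdict

Q = defaultdict(float)          # Q[(state, action)] -> estimated long-term value
alpha, gamma = 0.1, 0.9         # learning rate, discount factor
actions = ["chase", "evade"]

def update(state, action, reward, next_state):
    """Move Q(s, a) toward reward + discounted best value of the next state."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

update("near_player", "chase", reward=1.0, next_state="hit_player")
```

An enemy driven this way picks the highest-valued action in its current state (with occasional random exploration) and keeps updating the table as the player plays, which is the "continually learn" property mentioned above.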

Breton
Interesting approach; I'll check out those links.
RCIX
A: 

I'll refer you to two of Matthew Buckland's books.

The second book goes into back-propagation ANNs, which is what most people mean when they talk about NNs anyway.

That said, I think the first book is more useful if you want to create meaningful game AI. There's a nice, meaty section on using FSMs successfully (and yes, it's easy to trip yourself up with an FSM).

hythlodayr