views:

332

answers:

4

Hey there,

I know my question seems pretty vague, but I can't think of a better way to put it, so I'll start off by explaining what I'm trying to do.

I'm currently working on a project in which I've been given a map, and I'm coding a 'Critter' that should be able to navigate its way around it; the critter has various other functions, but those aren't relevant to this question. The whole program and solution are being written in C#.

I can control the speed of the critter and retrieve its current location on the map as X and Y coordinates, and I can set its direction when it collides with terrain that blocks it.

The only problem is that I can't think of a way to navigate the map intelligently; so far I've been basing movement on which direction the critter happens to be facing when it collides with terrain, and that is not a good way of getting around!

I'm not a games programmer, and this is for a software assignment, so I have no clue on AI techniques.

Here's a link to an image of what the maps and critters look like:

Map and Critter image

I'm in no way looking for anyone to give me a full solution, just a push in the general direction on map navigation.

Thanks in advance!

A: 

I would use a goal-oriented approach. Your question states the goal is to explore the map and avoid obstacles, so that's what we make our goal. But how do we explore the whole map? We explore what is unexplored.

From the outset you have only one explored area, the square you are on; the rest of the map is marked as unexplored. You choose an unexplored location and make it your goal to explore it. But how do you get there? You create a subgoal to explore the location next to it. And how do you do that? Explore the square next to that, and so on, until your original goal is broken down into a sequence of explorations, starting from your current square and navigating to the target square.

As you hit obstacles and discover features of the map, some of the subgoals may need to be changed. E.g. when you hit a wall, the subgoal to explore that square has to be scrubbed and you create a new plan to find an alternate route. This is known as backtracking.

That's basically it for the high level description. I hope it helps!
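To make the idea a bit more concrete, here is a rough sketch of one simple way to realise it on a grid: breadth-first search outward from the critter's square until you reach the nearest unexplored square, then read the route back out of the search tree. The cell states, grid representation and method names below are all assumptions for illustration, not part of your assignment's API.

    using System.Collections.Generic;

    // Sketch only: pick the nearest unexplored square and return a route to it.
    enum Cell { Unexplored, Open, Wall }

    static class Explorer
    {
        static readonly (int dx, int dy)[] Moves = { (1, 0), (-1, 0), (0, 1), (0, -1) };

        // Breadth-first search from the critter's square to the nearest unexplored
        // square. Returns the path, or null if nothing reachable is left to explore.
        public static List<(int x, int y)> PathToNearestUnexplored(Cell[,] map, (int x, int y) start)
        {
            int w = map.GetLength(0), h = map.GetLength(1);
            var cameFrom = new Dictionary<(int, int), (int x, int y)> { [start] = start };
            var queue = new Queue<(int x, int y)>();
            queue.Enqueue(start);

            while (queue.Count > 0)
            {
                var cur = queue.Dequeue();
                if (map[cur.x, cur.y] == Cell.Unexplored)
                    return Reconstruct(cameFrom, start, cur);   // closest frontier square found

                foreach (var (dx, dy) in Moves)
                {
                    var next = (x: cur.x + dx, y: cur.y + dy);
                    if (next.x < 0 || next.y < 0 || next.x >= w || next.y >= h) continue;
                    if (map[next.x, next.y] == Cell.Wall || cameFrom.ContainsKey(next)) continue;
                    cameFrom[next] = cur;
                    queue.Enqueue(next);
                }
            }
            return null; // every reachable square has been explored
        }

        static List<(int x, int y)> Reconstruct(
            Dictionary<(int, int), (int x, int y)> cameFrom, (int x, int y) start, (int x, int y) node)
        {
            var path = new List<(int x, int y)> { node };
            while (node != start) { node = cameFrom[node]; path.Add(node); }
            path.Reverse();
            return path;
        }
    }

When the critter bumps into something, mark that square as Wall on its map and plan again; that replanning is the backtracking described above.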

mdma
I wouldn't do exactly this--you want to pick a target nearby. I would make the goal to explore the closest unexplored square.
Loren Pechtel
+1  A: 

A* Search

Take a look at the A* pathfinding algorithm. It's essentially the standard approach for stuff like this.

Amit Patel's write-up on pathfinding for games has a pretty good introduction to A* as well as popular variants of the algorithm.

You'll find a C# implementation here and here.
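If a rough outline helps before digging into those, the core of grid-based A* is small: keep an open set of squares ordered by g + h (cost so far plus a heuristic estimate of the remaining cost) and repeatedly expand the cheapest one until you reach the goal. The sketch below uses Manhattan distance as the heuristic and a bool[,] walkability grid; the representation and names are assumptions, not your assignment's types.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Minimal A* over a grid of walkable squares, 4-way movement, unit step cost.
    static class AStar
    {
        static readonly (int dx, int dy)[] Moves = { (1, 0), (-1, 0), (0, 1), (0, -1) };

        static int Heuristic((int x, int y) a, (int x, int y) b)
            => Math.Abs(a.x - b.x) + Math.Abs(a.y - b.y);   // Manhattan distance

        public static List<(int x, int y)> FindPath(
            bool[,] walkable, (int x, int y) start, (int x, int y) goal)
        {
            int w = walkable.GetLength(0), h = walkable.GetLength(1);
            var open = new List<(int x, int y)> { start };
            var cameFrom = new Dictionary<(int, int), (int x, int y)>();
            var g = new Dictionary<(int, int), int> { [start] = 0 };

            while (open.Count > 0)
            {
                // Expand the open square with the lowest f = g + h (a priority queue would be faster).
                var current = open.OrderBy(n => g[n] + Heuristic(n, goal)).First();
                if (current == goal)
                    return Reconstruct(cameFrom, start, goal);
                open.Remove(current);

                foreach (var (dx, dy) in Moves)
                {
                    var next = (x: current.x + dx, y: current.y + dy);
                    if (next.x < 0 || next.y < 0 || next.x >= w || next.y >= h || !walkable[next.x, next.y])
                        continue;
                    int tentative = g[current] + 1;
                    if (!g.TryGetValue(next, out int known) || tentative < known)
                    {
                        g[next] = tentative;       // found a cheaper route to this square
                        cameFrom[next] = current;
                        if (!open.Contains(next)) open.Add(next);
                    }
                }
            }
            return null; // goal unreachable from start
        }

        static List<(int x, int y)> Reconstruct(
            Dictionary<(int, int), (int x, int y)> cameFrom, (int x, int y) start, (int x, int y) node)
        {
            var path = new List<(int x, int y)> { node };
            while (node != start) { node = cameFrom[node]; path.Add(node); }
            path.Reverse();
            return path;
        }
    }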

Dynamic A*

Let's say the terrain you'll be searching is not known ahead of time, but rather is discovered as the agent explores its environment. If your agent comes across a previously unknown obstacle, you could just update the agent's map of the terrain and then re-run A* to find a new path to the goal that routes around the obstruction.

While a workable solution, rerunning the planning algorithm from scratch every time you find a new obstacle results in a sizable amount of redundant computation. For example, once you're around the obstacle, it might be that the most efficient route to the goal follows the one you were planning on taking before you discovered the obstacle. By just rerunning A*, you'll need to recompute this section of the previous path.

You can avoid this by using Dynamic A* (D*). Since it keeps track of previously computed paths, when the agent finds a new obstacle, the system only needs to compute new routes in the area around the obstacle. After that, it can just reuse existing paths.
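Full D* is fairly involved, so for an assignment the simple replan-from-scratch loop described above may well be enough. Here is a rough sketch of that loop, reusing the FindPath sketch from earlier; the sensing and movement delegates are placeholders for however your critter reads its surroundings and actually moves, not real APIs:

    using System;
    using System.Collections.Generic;

    // Naive replanning: walk the current plan, and re-run A* whenever the next
    // step turns out to be blocked by a newly discovered obstacle.
    static class Replanner
    {
        public static void ExploreTowards(
            bool[,] walkable, (int x, int y) start, (int x, int y) goal,
            Action<bool[,], (int x, int y)> senseAdjacentWalls,
            Action<(int x, int y)> moveTo)
        {
            var position = start;
            var path = AStar.FindPath(walkable, position, goal);

            while (path != null && position != goal)
            {
                senseAdjacentWalls(walkable, position);   // mark newly discovered walls on the map
                var next = path[1];                       // path[0] is the square we are standing on

                if (!walkable[next.x, next.y])
                {
                    // The old plan runs into a fresh obstacle: plan again from here.
                    path = AStar.FindPath(walkable, position, goal);
                    continue;
                }

                moveTo(next);
                position = next;
                path.RemoveAt(0);
            }
        }
    }

D* (or the related D* Lite) gives you the same behaviour while repairing only the part of the plan affected by the new obstacle instead of recomputing all of it.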

dmcer
+3  A: 

If the only knowledge of the environment you have is your critter's position and velocity, then I think the best you can do is a wall-following algorithm. If you can detect some of the other things in your environment, you have many more options.
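If wall following is all your sensors allow, a minimal right-hand-rule version looks something like the sketch below: always try to turn right, then go straight, then left, then turn back. The isBlocked test is a placeholder for however your critter detects terrain, not an API from your assignment.

    using System;

    // Right-hand-rule wall follower on a 4-direction grid.
    static class WallFollower
    {
        // Headings in clockwise order: up, right, down, left.
        static readonly (int dx, int dy)[] Dirs = { (0, -1), (1, 0), (0, 1), (-1, 0) };

        // Pick the next heading (an index into Dirs): prefer turning right,
        // then going straight, then left, then reversing.
        public static int NextHeading(int heading, (int x, int y) pos, Func<(int x, int y), bool> isBlocked)
        {
            foreach (int turn in new[] { 1, 0, 3, 2 })   // right, straight, left, reverse
            {
                int candidate = (heading + turn) % 4;
                var target = (x: pos.x + Dirs[candidate].dx, y: pos.y + Dirs[candidate].dy);
                if (!isBlocked(target))
                    return candidate;
            }
            return heading; // completely boxed in
        }
    }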

Some of the more popular algorithm types are...

Potential Fields is a fancy way of saying every obstacle or wall has a "repulsive force" while every goal has an "attractive force". The strength of the force is based on the distance from the object and the "severity" of the object (a pit of lava is much more severe to travel through than a bumpy road). After constructing the force fields, the naive algorithm boils down to following the path of least resistance. Better versions can detect local minima and maxima and escape those wells.

    Critter
    -----\    /-------\
          \  /         \ 
           \/           \
   Local Minima Trap     \
                          \
                           \
                             Goal
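As a toy sketch of the force computation: sum an attractive pull toward the goal and a repulsive push away from each known wall square, then step in the direction of the resulting force each tick. The constants and names below are illustrative guesses, not tuned values.

    using System;
    using System.Collections.Generic;

    // Toy potential field: the goal pulls, every known wall square pushes with a 1/d^2 falloff.
    static class PotentialField
    {
        public static (double fx, double fy) ForceAt(
            (double x, double y) pos, (double x, double y) goal, IEnumerable<(double x, double y)> walls)
        {
            const double attract = 1.0;   // pull toward the goal
            const double repel = 5.0;     // push away from each wall square

            double fx = attract * (goal.x - pos.x);
            double fy = attract * (goal.y - pos.y);

            foreach (var wall in walls)
            {
                double dx = pos.x - wall.x, dy = pos.y - wall.y;
                double distSq = dx * dx + dy * dy + 1e-6;            // avoid division by zero
                double scale = repel / (distSq * Math.Sqrt(distSq)); // magnitude ~ 1/d^2, direction away from wall
                fx += scale * dx;
                fy += scale * dy;
            }
            return (fx, fy);
        }
    }

Stepping along that force is the naive "path of least resistance"; the local minima trap in the diagram above is exactly the case where the summed force goes to zero before you reach the goal, which is what the better versions have to detect and escape.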
Tansir1
A: 

Please can you post the solution code? I'm stuck and I have to submit it on 7th May. Please help...

dana