views: 227
answers: 2

Hi,

I have been occupying myself with motion planning for robots for some time, and have wanted to explore the possibility of improving on what the "potential field" method offers. My challenge is to keep the robot from getting trapped in a local minimum when using the potential field method. Instead of using a "random walk" approach to escape, I have been wondering whether it is possible to implement a variation of A* that could act as a sort of guide, precisely to keep the robot from getting trapped in a local minimum.
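To make the failure mode concrete, here is a small sketch of the classic trap: a purely greedy gradient follower on an attractive-plus-repulsive potential stalling in front of an obstacle that sits between start and goal. The gains (`k_att`, `k_rep`, `rho0`) and the geometry are illustrative choices, not from any specific planner.

```python
import math

def potential(x, y, goal=(10.0, 0.0), obs=(5.0, 0.0),
              k_att=1.0, k_rep=100.0, rho0=2.0):
    # Quadratic attractive well toward the goal plus a repulsive
    # barrier that switches on within distance rho0 of the obstacle.
    u_att = 0.5 * k_att * ((x - goal[0]) ** 2 + (y - goal[1]) ** 2)
    rho = math.hypot(x - obs[0], y - obs[1])
    u_rep = 0.5 * k_rep * (1/rho - 1/rho0) ** 2 if rho < rho0 else 0.0
    return u_att + u_rep

def descend(x, y, steps=5000, step=0.01, eps=1e-4):
    # Follow the negative numerical gradient -- pure greediness,
    # with no mechanism for escaping a local minimum.
    for _ in range(steps):
        gx = (potential(x + eps, y) - potential(x - eps, y)) / (2 * eps)
        gy = (potential(x, y + eps) - potential(x, y - eps)) / (2 * eps)
        x, y = x - step * gx, y - step * gy
    return x, y

# Obstacle directly between start and goal: attraction and repulsion
# cancel along the line, and the robot stalls short of the goal.
x, y = descend(0.0, 0.0)
```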

Does anyone have experience with this kind of approach, or can you point me to literature that avoids local minima more effectively than the "random walk" approach?

+4  A: 

A* and potential fields are both search strategies. The problem you are experiencing is that some search strategies are "greedier" than others, and more often than not, algorithms that are too greedy get trapped in a local minimum.

There are some alternatives in which the tension between greediness (the main cause of getting trapped in local minima) and diversity (trying new alternatives that don't seem to be a good choice in the short term) is parameterized.

A few years ago I researched ant algorithms a bit (search for Marco Dorigo, ACS, ACO); they are a family of search algorithms that can be applied to pretty much anything, and they let you control the greediness vs. exploration of your search. In one of their papers, they even compared search performance on the TSP (the canonical traveling salesman problem) against genetic algorithms, simulated annealing and others. Ant algorithms won.
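The knob the answer alludes to can be sketched with the Ant Colony System state-transition rule: a parameter `q0` decides, per move, between greedy exploitation and probabilistic exploration. Pheromone (`tau`) and heuristic desirability (`eta`, e.g. inverse distance) values below are illustrative, not from any real instance.

```python
import random

def choose_next(candidates, tau, eta, q0=0.9, beta=2.0):
    """Pick the next city: tau/eta are dicts keyed by candidate city.
    With probability q0, exploit greedily; otherwise sample
    proportionally to tau * eta**beta (exploration)."""
    scores = {j: tau[j] * (eta[j] ** beta) for j in candidates}
    if random.random() < q0:
        # Exploitation: greedily take the best-scoring move.
        return max(scores, key=scores.get)
    # Exploration: roulette-wheel selection over the scores.
    total = sum(scores.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for j in candidates:
        acc += scores[j]
        if acc >= r:
            return j
    return candidates[-1]

tau = {1: 0.5, 2: 0.5, 3: 0.5}          # equal pheromone everywhere
eta = {1: 1/4.0, 2: 1/2.0, 3: 1/9.0}    # city 2 is the closest
nxt = choose_next([1, 2, 3], tau, eta, q0=1.0)  # fully greedy
```

Setting `q0` close to 1 makes the colony behave almost like greedy search; lowering it keeps diversity in the tours the ants build, which is exactly the greediness-vs-exploration trade-off described above.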

I've solved the TSP in the past using genetic algorithms, and I still have the source code in Delphi if you'd like it.

Padu Merloti
I would very much like to see your code, and if you have some text about the approach you use in it, that would be really good so I can get a better understanding of your solution to the TSP. My email is nesmoht"at sign"gmail.dk - thanks ...
nesmoht
+3  A: 

This is a well-studied problem, fortunately. You can find several options for escaping local minima in a good AI textbook or the literature. Keywords to search for include "gradient descent" and "local minima".

One that I like is simulated annealing.
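A minimal sketch of the escape mechanism simulated annealing provides: uphill moves are accepted with probability exp(-delta/T), so the search can climb out of a local minimum while the temperature is still high. The cost landscape, neighbor step, and cooling schedule below are illustrative assumptions.

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, iters=5000):
    x, t = x0, t0
    best = x
    for _ in range(iters):
        cand = neighbor(x)
        delta = cost(cand) - cost(x)
        # Always accept improvements; accept worsening moves with
        # probability exp(-delta / t) -- this is what lets the search
        # climb out of a local minimum early on.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < cost(best):
                best = x
        t *= cooling   # geometric cooling schedule
    return best

# Toy double-well landscape: a local minimum near x = -1 and the
# global minimum near x = +1; greedy descent from -1 would stay put.
cost = lambda x: (x * x - 1) ** 2 - 0.1 * x
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
best = anneal(cost, neighbor, x0=-1.0)
```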

The only way that I know of to avoid local minima completely is to have a global map of the world that you can search efficiently.

amo
If that were possible (the global map completely known), then the solution would be trivial and you wouldn't need to appeal to non-deterministic algorithms, right?
Padu Merloti
@Padu: It depends on how computationally expensive the calculation is. You might have a complete map but need to find the minimum faster than is possible with an exhaustive search.
amo
That's exactly what I'm trying to convey. If the map is small enough that an exhaustive search is actually possible, then the problem of getting trapped is eliminated completely, because you are using a deterministic (exhaustive) algorithm.

In most interesting cases it is not feasible, so you need some kind of heuristic (best-guess) search, where the best alternative is not guaranteed but a "close enough" solution suffices. That is the case with algorithms such as A*, genetic algorithms, ANNs, Ant Colony Systems, etc.
Padu Merloti
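The "complete map" case from the comments above can be sketched in a few lines: on a small known grid, plain breadth-first search is exhaustive and deterministic, so local minima simply cannot trap it. The grid layout here is an illustrative example.

```python
from collections import deque

def bfs_path_length(grid, start, goal):
    """grid: list of strings, '#' = obstacle, '.' = free.
    Returns the length of the shortest 4-connected path, or None."""
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == goal:
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return None  # goal unreachable

# A concave "pocket" that would trap a potential-field follower;
# exhaustive BFS just routes around it.
grid = ["....#....",
        "..#.#.#..",
        "..#...#..",
        "..#####..",
        "........."]
steps = bfs_path_length(grid, (0, 0), (0, 8))
```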