I'm trying to build an A* solver for a 15-square puzzle.
The goal is to rearrange the tiles so that they appear in their natural positions. You can only slide one tile at a time. Each possible state of the puzzle is a node in the search graph.
For the h(x) function, I am using the aggregate sum, across all tiles, of each tile's dislocation from where it belongs in the goal state. For example, if the 5 is at location 0,0 and it belongs at location 1,0, it contributes 1 to h(x). If the 11 is at 0,1 and belongs at 2,2, it contributes 3 to h(x). And so on. EDIT: I now understand this is what they call "Manhattan distance", or "taxicab distance".
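For reference, the heuristic calculation is roughly the following. This is a simplified sketch, not my exact calcHeuristicDistance (which stores the result as an .h member on the node); the state here is the flat array of 16 ints used in the code below, with 0 as the hole.

function manhattanDistance(state) {
  var h = 0;
  for (var pos = 0; pos < 16; pos++) {
    var tile = state[pos];
    if (tile === 0) continue;                 // the hole doesn't count
    var goalPos = tile - 1;                   // tile t belongs at index t-1
    var row = Math.floor(pos / 4), col = pos % 4;
    var goalRow = Math.floor(goalPos / 4), goalCol = goalPos % 4;
    h += Math.abs(row - goalRow) + Math.abs(col - goalCol);
  }
  return h;
}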
I have been using a step count for g(x). In my implementation, for any node in the state graph, g is just the prior node's g plus 1.
To find successor nodes, I just examine where the "hole" in the puzzle can possibly move. For the puzzle state (aka node) I was looking at, there are 3 neighbors: the hole can move north, west, or east.
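Concretely, successor generation looks something like this. It's a sketch rather than my exact nextStates (the real one also refuses to immediately undo the previous move), but it shows where each neighbor's g, h, and previous come from.

function neighborsOf(current) {
  var hole = current.indexOf(0);
  var row = Math.floor(hole / 4), col = hole % 4;
  var targets = [];
  if (row > 0) targets.push(hole - 4); // hole moves north
  if (row < 3) targets.push(hole + 4); // hole moves south
  if (col > 0) targets.push(hole - 1); // hole moves west
  if (col < 3) targets.push(hole + 1); // hole moves east
  var result = [];
  for (var i = 0; i < targets.length; i++) {
    var n = current.slice();        // copy the 16-int array
    n[hole] = n[targets[i]];        // slide the adjacent tile into the hole
    n[targets[i]] = 0;              // the hole takes that tile's old spot
    n.g = current.g + 1;            // uniform step cost: one more move
    n.h = manhattanDistance(n);     // heuristic (sketched above)
    n.previous = current;           // so the path can be reconstructed later
    result.push(n);
  }
  return result;
}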
My A* search sometimes converges to a solution in 20s, sometimes 180s, and sometimes doesn't converge at all (I've waited 10 minutes or more). I think h is reasonable. I'm wondering if I've modeled g properly. In other words: is it possible that my A* function is reaching a node in the graph via a path that is not the shortest path?
Or have I just not waited long enough? Maybe 10 minutes isn't enough?
For a fully random arrangement (assuming no parity problems), what is the average number of permutations an A* solution will examine? (Please show the math.)
I'm going to look for logic errors in my code, but in the meantime, any tips?
(PS: it's done in JavaScript.)
Also, no, this isn't CompSci homework. It's just a personal exploration thing. I'm just trying to learn JavaScript.
EDIT: I've found that the run-time is highly dependent on the heuristic. I saw the 10x factor applied to the heuristic in the article someone mentioned, and it made me wonder - why 10x? Why linear? Because this is done in JavaScript, I could modify the code to dynamically update an HTML table with the node currently being considered (there's a sketch of that below). This allowed me to peek at the algorithm as it was progressing. With a regular taxicab-distance heuristic, I watched as it failed to converge.
There were 5's and 12's in the top row, and they kept hanging around. I'd see 1,2,3,4 creep into the top row, but then they'd drop out, and other numbers would move up there. What I was hoping to see was 1,2,3,4 sort of creeping up to the top, and then staying there.
I thought to myself - this is not the way I solve this personally. Doing this manually, I solve the top row, then the 2nd row, then the 3rd and 4th rows sort of concurrently.
So I tweaked the h(x) function to more heavily weight the higher rows and the "lefter" columns. The result was that the A* converged much more quickly. It now runs in 3 minutes instead of "indefinitely". With the "peek" I talked about, I can see the smaller numbers creep up to the higher rows and stay there. Not only does this seem like the right thing, it runs much faster.
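As for the "peek" mechanism: it's nothing fancy, just re-rendering the current node as a 4x4 HTML table on every iteration. Roughly like this (a simplified stand-in for the stateToString helper used in the code below, with jQuery dropping it into the page):

function stateToString(state) {
  var html = "<table>";
  for (var row = 0; row < 4; row++) {
    html += "<tr>";
    for (var col = 0; col < 4; col++) {
      var tile = state[row * 4 + col];
      html += "<td>" + (tile === 0 ? "" : tile) + "</td>";
    }
    html += "</tr>";
  }
  return html + "</table>";
}

// in the search loop:
// $("#solution").html(stateToString(current));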
I'm in the process of trying a bunch of variations. It seems pretty clear that A* runtime is very sensitive to the heuristic. Currently the best heuristic I've found is the sum, over all tiles, of dislocation * ((4-i) + (4-j)), where i and j are the tile's row and column and dislocation is its taxicab distance.
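In code, that variant looks roughly like this. I'm taking i and j to be the tile's goal row and column (0-indexed, matching the coordinates above), since that's what pulls 1, 2, 3, 4 toward the top-left; treat it as a sketch of the idea rather than my exact h(x). Note that once the multiplier exceeds 1, h can overestimate the true distance, which I suspect is related to the non-shortest paths I describe next.

function weightedHeuristic(state) {
  var h = 0;
  for (var pos = 0; pos < 16; pos++) {
    var tile = state[pos];
    if (tile === 0) continue;
    var goalPos = tile - 1;
    var row = Math.floor(pos / 4), col = pos % 4;
    var goalRow = Math.floor(goalPos / 4), goalCol = goalPos % 4;
    var dislocation = Math.abs(row - goalRow) + Math.abs(col - goalCol);
    var weight = (4 - goalRow) + (4 - goalCol); // heavier for top rows and "lefter" columns
    h += dislocation * weight;
  }
  return h;
}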
One interesting part of the result: with a particular heuristic I find a path very quickly, but it is obviously not the shortest path. I think this is because I am weighting the heuristic. In one case I got a path of 178 steps in 10s; my own manual effort produced a solution in 87 moves (though it took much more than 10s). More investigation warranted.
So the result is I am seeing it converge much faster, but the path is definitely not the shortest. I have to think about this more.
Code:
var stop = false;

function Astar(start, goal, callback) {
  // start and goal are nodes in the graph, represented by
  // an array of 16 ints. The goal is: [1,2,3,...14,15,0]
  // Zero represents the hole.
  // callback is a method to call when finished. This runs a long time,
  // therefore we need to use setTimeout() to break it up, to avoid
  // the browser warning like "Stop running this script?"
  // g is the actual distance traveled from initial node to current node.
  // h is the heuristic estimate of distance from current to goal.
  stop = false;
  start.g = start.dontgo = 0;

  // calcHeuristic inserts an .h member into the array
  calcHeuristicDistance(start);

  // start the stack with one element
  var closed = [];      // set of nodes already evaluated
  var open = [ start ]; // set of nodes to evaluate (start with initial node)

  var iteration = function() {
    if (open.length == 0) {
      // no more nodes. Fail.
      callback(null);
      return;
    }
    var current = open.shift(); // get highest priority node

    // update the browser with a table representation of the
    // node being evaluated
    $("#solution").html(stateToString(current));

    // checkSolution returns true if current == goal
    if (checkSolution(current, goal)) {
      // reconstructPath just records the position of the hole
      // through each node
      var path = reconstructPath(start, current);
      callback(path);
      return;
    }

    closed.push(current);

    // get the set of neighbors. This is 3 or fewer nodes.
    // (nextStates is optimized to NOT turn directly back on itself)
    var neighbors = nextStates(current, goal);
    for (var i = 0; i < neighbors.length; i++) {
      var n = neighbors[i];
      // skip this one if we've already visited it
      if (closed.containsNode(n)) continue;
      // .g, .h, and .previous get assigned implicitly when
      // calculating neighbors. n.g is nothing more than
      // current.g + 1
      // add to the open list
      if (!open.containsNode(n)) {
        // slot into the list, in priority order (minimum f first)
        open.priorityPush(n);
        n.previous = current;
      }
    }

    if (stop) {
      callback(null);
      return;
    }
    setTimeout(iteration, 1);
  };

  // kick off the first iteration
  iteration();
  return null;
}
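The containsNode and priorityPush helpers aren't shown above; they're just array helpers elsewhere in my code (Array.prototype extensions). Simplified versions might look like this (the real ones may differ, but this is the idea): containsNode compares the 16 tile values, and priorityPush keeps the open list sorted by f = g + h, smallest first.

Array.prototype.containsNode = function(node) {
  for (var i = 0; i < this.length; i++) {
    var same = true;
    for (var t = 0; t < 16; t++) {
      if (this[i][t] !== node[t]) { same = false; break; }
    }
    if (same) return true;
  }
  return false;
};

Array.prototype.priorityPush = function(node) {
  var f = node.g + node.h;
  var i = 0;
  while (i < this.length && (this[i].g + this[i].h) <= f) i++;
  this.splice(i, 0, node); // insert so the minimum-f node stays at the front
};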