(This is not exactly the problem that I have, but it's isomorphic, and I think that this explanation will be easiest for others to understand.)
Suppose that I have a set of points in an n-dimensional space. Using 3 dimensions as an example:
A : [1,2,3]
B : [4,5,6]
C : [7,8,9]
I also have a set of vectors that describe possible movements in this space:
V1 : [+1,0,-1]
V2 : [+2,0,0]
Now, given a point dest, I need to find a starting point p and a set of vectors moves that will bring me to dest in the most efficient manner. Efficiency is defined as "fewest moves", not necessarily "least linear distance": it's permissible to select a p that's further from dest than other candidates if the move set lets you get there in fewer moves. The vectors in moves must be a subset of the available vectors (treated as a multiset): you can't use the same vector more than once unless it appears more than once in the input set.
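For example, starting from A = [1,2,3] and applying V1 and then V2 gives [1+1+2, 2+0+0, 3-1+0] = [4,2,2]; since each move is just a vector addition, only the subset of vectors chosen matters, not the order in which they're applied.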
My input contains ~100 starting points and maybe ~10 vectors, and my number of dimensions is ~20. The starting points and available vectors will be fixed for the lifetime of the app, but I'll be finding paths for many, many different dest points. I want to optimize for speed, not memory. It's acceptable for the algorithm to fail (i.e., to report that no path to dest exists).
Update w/ Accepted Solution
I adopted a solution very similar to the one marked below as "accepted". I iterate over all points and vectors and build a list of all reachable points with the routes to reach them. I convert this list into a hash of <dest, p+vectors>, selecting the shortest set of vectors for each destination point. (There is also a little bit of optimization for hash size, which isn't relevant here.) Subsequent dest lookups happen in constant time.
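To make that concrete, here is a minimal Python sketch of the precomputation under the assumptions above (the names points, vectors, apply_moves, and find_route are illustrative, not taken from any specific answer). Because every move is a vector addition, only the subset of vectors used matters, so enumerating all point/subset combinations costs at most ~100 × 2^10 ≈ 100k table entries:

```python
from itertools import combinations

# Precompute every destination reachable from any starting point using any
# subset of the move vectors, keeping the shortest route for each destination.
points = {
    "A": (1, 2, 3),
    "B": (4, 5, 6),
    "C": (7, 8, 9),
}
vectors = [(1, 0, -1), (2, 0, 0)]  # V1, V2

def apply_moves(p, moves):
    # Moves are plain vector additions, so only the chosen subset matters,
    # not the order in which it is applied.
    result = list(p)
    for v in moves:
        for i, x in enumerate(v):
            result[i] += x
    return tuple(result)

# Destination -> (starting point name, tuple of vector indices used).
reachable = {}
for name, p in points.items():
    # Enumerate subsets by index so a vector that appears more than once in
    # the input can be used once per occurrence.
    for r in range(len(vectors) + 1):
        for subset in combinations(range(len(vectors)), r):
            dest = apply_moves(p, (vectors[i] for i in subset))
            best = reachable.get(dest)
            if best is None or len(subset) < len(best[1]):
                reachable[dest] = (name, subset)

def find_route(dest):
    # Constant-time lookup; None means no path to dest exists.
    return reachable.get(tuple(dest))
```

With the example data, find_route((4, 2, 2)) returns ("A", (0, 1)), i.e. start at A and apply V1 and V2; find_route((0, 0, 0)) returns None, which corresponds to the acceptable "no path" failure case.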