views: 933
answers: 2

The Question: What is the best way to calculate inverse distance weighted (IDW) interpolation in Python, for point locations?

Some Background: Currently I'm using RPy2 to interface with R and its gstat module. Unfortunately, the gstat module conflicts with arcgisscripting, which I got around by running the RPy2-based analysis in a separate process. Even if this issue is resolved in a recent/future release, and efficiency can be improved, I'd still like to remove my dependency on installing R.

The gstat website does provide a stand-alone executable, which is easier to package with my Python script, but I still hope for a Python solution that doesn't require multiple writes to disk and launching external processes. The number of calls to the interpolation function, on separate sets of points and values, can approach 20,000 in the processing I'm performing.

I specifically need to interpolate for points, so using the IDW function in ArcGIS to generate rasters sounds even worse than using R in terms of performance, unless there is a way to efficiently mask out only the points I need. Even with this modification, I wouldn't expect performance to be all that great, but I will look into this option as another alternative. UPDATE: The problem here is that you are tied to the cell size you are using. If you reduce the cell size to get better accuracy, processing takes a long time, and you also need to follow up by extracting by points: overall, an ugly method if you want values for specific points.

I have looked at the scipy documentation, but it doesn't look like there is a straightforward way to calculate IDW.

I'm thinking of rolling my own implementation, possibly using some of the scipy functionality to locate the closest points and calculate distances.

Am I missing something obvious? Is there a Python module I haven't seen that does exactly what I want? Is creating my own implementation with the aid of scipy a wise choice?

+2  A: 

Edit: @Denis is right, a linear Rbf (e.g. scipy.interpolate.Rbf with "function='linear'") isn't the same as IDW: the Rbf solves for weights that reproduce the observations exactly, while IDW just averages with 1/distance weights...

(Note, all of these will use excessive amounts of memory if you're using a large number of points, since they build the full distance matrix between every observation and every interpolation point!)

Here's a simple example of IDW:

def simple_idw(x, y, z, xi, yi):
    dist = distance_matrix(x,y, xi,yi)

    # In IDW, weights are 1 / distance
    weights = 1.0 / dist

    # Make weights sum to one
    weights /= weights.sum(axis=0)

    # Multiply the weights for each interpolated point by all observed Z-values
    zi = np.dot(weights.T, z)
    return zi
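One caveat: if a query point coincides with a data point, dist contains zeros, so the 1.0 / dist step divides by zero and the result goes to NaN. A minimal guard, assuming the same distance_matrix helper as below (the function name and the 1e-12 threshold here are just illustrative):

def simple_idw_safe(x, y, z, xi, yi, eps=1e-12):
    dist = distance_matrix(x,y, xi,yi)

    # Clip near-zero distances so a coincident data point gets a huge,
    # dominating weight instead of causing a divide-by-zero
    dist = np.maximum(dist, eps)

    weights = 1.0 / dist
    weights /= weights.sum(axis=0)
    return np.dot(weights.T, z)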

By contrast, here's what a linear Rbf would be:

def linear_rbf(x, y, z, xi, yi):
    dist = distance_matrix(x,y, xi,yi)

    # Mutual pairwise distances between observations
    internal_dist = distance_matrix(x,y, x,y)

    # Now solve for the weights such that the misfit at the observations is minimized
    weights = np.linalg.solve(internal_dist, z)

    # Multiply the weights for each interpolated point by the distances
    zi = np.dot(dist.T, weights)
    return zi

(Both use the distance_matrix function defined here:)

def distance_matrix(x0, y0, x1, y1):
    obs = np.vstack((x0, y0)).T
    interp = np.vstack((x1, y1)).T

    # Make a distance matrix between pairwise observations
    # Note: from <http://stackoverflow.com/questions/1871536>
    # (Yay for ufuncs!)
    d0 = np.subtract.outer(obs[:,0], interp[:,0])
    d1 = np.subtract.outer(obs[:,1], interp[:,1])

    return np.hypot(d0, d1)
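Incidentally, scipy.spatial.distance.cdist builds the same matrix in one call; an equivalent sketch (the helper name is just for illustration, and it still materializes the full n_obs x n_interp array, so memory use is unchanged):

from scipy.spatial.distance import cdist

def distance_matrix_cdist(x0, y0, x1, y1):
    # Same (n_obs, n_interp) euclidean distance matrix as above
    obs = np.vstack((x0, y0)).T
    interp = np.vstack((x1, y1)).T
    return cdist(obs, interp)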

Putting it all together into a nice copy-paste example yields some quick comparison plots:

[Plots: Homemade IDW, Scipy's Rbf with function=linear, Homemade linear Rbf]

import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import Rbf

def main():
    # Setup: Generate data...
    n = 10
    nx, ny = 50, 50
    x, y, z = map(np.random.random, [n, n, n])
    xi = np.linspace(x.min(), x.max(), nx)
    yi = np.linspace(y.min(), y.max(), ny)
    xi, yi = np.meshgrid(xi, yi)
    xi, yi = xi.flatten(), yi.flatten()

    # Calculate IDW
    grid1 = simple_idw(x,y,z,xi,yi)
    grid1 = grid1.reshape((ny, nx))

    # Calculate scipy's RBF
    grid2 = scipy_idw(x,y,z,xi,yi)
    grid2 = grid2.reshape((ny, nx))

    grid3 = linear_rbf(x,y,z,xi,yi)
    grid3 = grid3.reshape((ny, nx))


    # Comparisons...
    plot(x,y,z,grid1)
    plt.title('Homemade IDW')

    plot(x,y,z,grid2)
    plt.title("Scipy's Rbf with function=linear")

    plot(x,y,z,grid3)
    plt.title('Homemade linear Rbf')

    plt.show()

def simple_idw(x, y, z, xi, yi):
    dist = distance_matrix(x,y, xi,yi)

    # In IDW, weights are 1 / distance
    weights = 1.0 / dist

    # Make weights sum to one
    weights /= weights.sum(axis=0)

    # Multiply the weights for each interpolated point by all observed Z-values
    zi = np.dot(weights.T, z)
    return zi

def linear_rbf(x, y, z, xi, yi):
    dist = distance_matrix(x,y, xi,yi)

    # Mutual pairwise distances between observations
    internal_dist = distance_matrix(x,y, x,y)

    # Now solve for the weights such that the misfit at the observations is minimized
    weights = np.linalg.solve(internal_dist, z)

    # Multiply the weights for each interpolated point by the distances
    zi = np.dot(dist.T, weights)
    return zi


def scipy_idw(x, y, z, xi, yi):
    interp = Rbf(x, y, z, function='linear')
    return interp(xi, yi)

def distance_matrix(x0, y0, x1, y1):
    obs = np.vstack((x0, y0)).T
    interp = np.vstack((x1, y1)).T

    # Make a distance matrix between pairwise observations
    # Note: from <http://stackoverflow.com/questions/1871536>
    # (Yay for ufuncs!)
    d0 = np.subtract.outer(obs[:,0], interp[:,0])
    d1 = np.subtract.outer(obs[:,1], interp[:,1])

    return np.hypot(d0, d1)


def plot(x,y,z,grid):
    plt.figure()
    plt.imshow(grid, extent=(x.min(), x.max(), y.max(), y.min()))
    plt.scatter(x,y,c=z)
    plt.colorbar()

if __name__ == '__main__':
    main()
Joe Kington
Thanks! I will definitely give this a try.
majgis
function="linear" is r, not 1/r. (Does it matter ? There are half-a-dozen choices of function=, wildly different -- some work well for some data.)
Denis
@Denis: It uses 1/function() to weight things. The docs could be clearer in that regard, but it wouldn't make sense any other way. Otherwise, points farther away from the interpolated point would be weighted higher, and the interpolated values near a particular point would have a value closest to the points farthest away. The choice of function matters (a lot!), and IDW is usually the wrong choice. However, the goal is to produce results that seem reasonable to the person doing the interpolation, so if IDW works, it works. Scipy's Rbf doesn't give you much flexibility, regardless.
Joe Kington
@joe, using http://en.wikipedia.org/wiki/Radial_basis_function for notation: phi(r) Gaussian and inverse-multiquadratic decrease with distance, the others increase?! Rbf solves Aw = z to be exact at the cj (the w coefs can vary a lot; print rbf.A rbf.nodes); then for phi(r) = r, y(x) = sum( wj |x - cj| ), strange. Will post (mho of) inverse-distance weighting in a bit, then we can compare.
Denis
@Joe, nice comparison and plots, ++. How about our doing a separate note someplace on "advantages and disadvantages of RBFs"?
Denis
@Denis, Thanks! One of these days I will, but it will have to wait until after prelims. (Grad school is a bit of a time sink!) Incidentally, I believe it was you who e-mailed me about an "interpolation rant" regarding a discussion on the scipy mailing list a while back. Sorry I never replied! Likewise, one of these days I'll write up a blog post or something similar...
Joe Kington
+5  A: 

Changed 20 Oct: this class Invdisttree combines inverse-distance weighting and scipy.spatial.KDTree.
Forget the original brute-force answer; this is imho the method of choice for scattered-data interpolation.

""" invdisttree.py: inverse-distance-weighted interpolation using KDTree
    fast, solid, local
"""
from __future__ import division
import numpy as np
from scipy.spatial import cKDTree as KDTree
    # http://docs.scipy.org/doc/scipy/reference/spatial.html

__date__ = "2010-10-20"  # weights

#...............................................................................
class Invdisttree:
    """ inverse-distance-weighted interpolation using KDTree:
invdisttree = Invdisttree( X, z )  -- points, values
interpol = invdisttree( q, nnear=6, eps=0, p=1, weights=None, stat=0 )
    interpolates z from the 6 points nearest each query point q;
    q may be one point, or a batch of points
    eps: approximate nearest, dist <= (1 + eps) * true nearest
    p: weights ~ 1 / distance**p; p=2 falls off faster
    weights: optional multipliers for 1 / distance**p, of the same shape as q
    stat: accumulate wsum, wn for average weights

How many nearest neighbors should one take ?
a) start with 8 11 14 .. 28 in 2d 3d 4d .. 10d; see Wendel's formula
b) make 3 runs with nnear= e.g. 6 8 10, and look at the results --
    |interpol 6 - interpol 8| etc., or |f - interpol*| if you have f(q).
    I find that runtimes don't increase much at all with nnear -- ymmv.

If different X coordinates measure different things, Euclidean distance
can be way off, unreasonable.  For example, if X0 is in the range 0 to 1
but X1 0 to 1000, the X1 distances will swamp X0; rescale.

A nice property of IDW is that it's scale-free around query points:
if I have values f1 f2 f3 from 3 people at distances d1 d2 d3,
the IDW average
    (f1/d1 + f2/d2 + f3/d3)
    / (1/d1 + 1/d2 + 1/d3)
is the same for distances 1 2 3, or 10 20 30 -- only the ratios matter.
In contrast, the commonly-used kernel exp( - (distance/h)**2 )
is exceedingly sensitive to distance and to h.

    """
# anykernel( dj / av dj ) is also scale-free
# error analysis, |f(x) - idw(x)| ? todo: regular grid, nnear ndim+1, 2*ndim

    def __init__( self, X, z, leafsize=10, stat=0 ):
        assert len(X) == len(z), "len(X) %d != len(z) %d" % (len(X), len(z))
        self.tree = KDTree( X, leafsize=leafsize )  # build the tree
        self.z = z
        self.stat = stat
        self.wn = 0
        self.wsum = None

    def __call__( self, q, nnear=6, eps=0, p=1, weights=None ):
            # nnear nearest neighbours of each query point --
        q = np.asarray(q)
        qdim = q.ndim
        if qdim == 1:
            q = np.array([q])
        if self.wsum is None:
            self.wsum = np.zeros(nnear)

        self.distances, self.ix = self.tree.query( q, k=nnear, eps=eps )
        interpol = np.zeros( (len(self.distances),) + np.shape(self.z[0]) )
        jinterpol = 0
        for dist, ix in zip( self.distances, self.ix ):
            if nnear == 1:
                wz = self.z[ix]
            elif dist[0] < 1e-10:
                wz = self.z[ix[0]]
            else:  # weight z's by 1/dist --
                w = 1 / dist**p
                if weights is not None:
                    w *= weights[ix]  # >= 0
                w /= np.sum(w)
                wz = np.dot( w, self.z[ix] )
                if self.stat:
                    self.wn += 1
                    self.wsum += w
            interpol[jinterpol] = wz
            jinterpol += 1
        return interpol if qdim > 1  else interpol[0]

#...............................................................................
if __name__ == "__main__":
    import sys

    N = 10000
    Ndim = 2
    Nask = N  # N Nask 1e5: 24 sec 2d, 27 sec 3d on mac g4 ppc
    Nnear = 8  # 8 2d, 11 3d => 5 % chance one-sided -- Wendel, mathoverflow.com
    leafsize = 10
    eps = .1  # approximate nearest, dist <= (1 + eps) * true nearest
    p = 1  # weights ~ 1 / distance**p, so p=2 falls off faster
    cycle = .25
    seed = 1

    exec "\n".join( sys.argv[1:] )  # python this.py N= ...
    np.random.seed(seed )
    np.set_printoptions( 3, threshold=100, suppress=True )  # .3f

    print "\nInvdisttree:  N %d  Ndim %d  Nask %d  Nnear %d  leafsize %d  eps %.2g  p %.2g" % (
        N, Ndim, Nask, Nnear, leafsize, eps, p)

    def terrain(x):
        """ ~ rolling hills """
        return np.sin( (2*np.pi / cycle) * np.sum( x, axis=-1 ) / Ndim )

    known = np.random.uniform( size=(N,Ndim) ) ** 2  # a bit clumpy
    z = terrain( known )
    ask = np.random.uniform( size=(Nask,Ndim) )

#...............................................................................
    invdisttree = Invdisttree( known, z, leafsize=leafsize, stat=1 )
    weights = None
    for jweights in (0, 1):
        print "" if jweights == 0  else "\nwith weights:"

        interpol = invdisttree( ask, nnear=Nnear, eps=eps, p=p, weights=weights )

        print "average distances to nearest points: %s" % \
            np.mean( invdisttree.distances, axis=0 )
        print "average weights: %s" % (invdisttree.wsum / invdisttree.wn)
            # see Wikipedia Zipf's law
        err = np.abs( terrain(ask) - interpol )
        print "average |terrain() - interpolated|: %.2g" % np.mean(err)
        weights = err

    # print "interpolate a single point: %.2g" % \
    #     invdisttree( known[0], nnear=Nnear, eps=eps )
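For quick reference, a minimal usage sketch of the class above (shapes follow the docstring: X is the (N, ndim) data points, z the N known values, q the query points):

import numpy as np

X = np.random.uniform( size=(1000, 2) )  # 1000 scattered 2d data points
z = np.sin( X[:,0] ) + np.cos( X[:,1] )  # known values at those points
q = np.random.uniform( size=(50, 2) )    # 50 query locations

invdisttree = Invdisttree( X, z )
zq = invdisttree( q, nnear=6, p=2 )      # IDW average of the 6 nearest neighbours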
Denis
Denis, earlier you asked how many points I had... at most I'll have a couple thousand source points, so not enough to worry about. This is very helpful, thank you!
majgis
@majgis, you're welcome. N=100000 Nask=100000 take ~ 24 sec 2d, 27 sec 3d, on my old mac g4 ppc. (For interpolating 2d data to a uniform grid, matplotlib.delaunay is ~ 10 times faster -- see http://www.scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data )
Denis