I have an array of points in a space of unknown dimension, such as:

data = numpy.array(
    [[115,  241,   314],
     [153,  413,   144],
     [535, 2986, 41445]])

and I would like to find the average Euclidean distance between all pairs of points.

Please note that I have over 20,000 points, so I would like to do this as efficiently as possible.

Thanks.

+2  A: 

Well, I don't think that there is a super fast way to do this, but this should do it:

tot = 0.

for i in xrange(data.shape[0]-1):
    # distances from point i to every later point, summed
    tot += ((((data[i+1:] - data[i])**2).sum(1))**.5).sum()

# there are n*(n-1)/2 distinct pairs
avg = tot / ((data.shape[0]-1) * data.shape[0] / 2.)
Justin Peel
This takes ~35 seconds to run on a 1.86 GHz Win32 machine. If that's ok for your application, I'd say go with it. @Justin - a couple of small bugs: you should have `tot += (...).mean()`, and `avg = tot/(data.shape[0]-1)`.
mtrw
@mtrw I agree that there was a bug - I was dividing the total by the wrong number, but now I've fixed it.
Justin Peel
+3  A: 

There's no getting around the number of evaluations:

Sum[n-i, {i, 0, n}] = n(n+1)/2

But you can save yourself the expense of all those square roots if you can get by with an approximate result. It depends on your needs.

If you're going to calculate an average, I would advise you to not try putting all the values into an array before calculating. Just calculate the sum (and sum of squares if you need standard deviation as well) and throw away each value as you calculate it.

Since the distance is symmetric (d(p_i, p_j) = d(p_j, p_i)), I don't know if this means you have to multiply by two somewhere.
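A minimal sketch of that streaming approach (the function name and structure here are illustrative, not from the original post):

import numpy as np

def streaming_mean_std(data):
    # accumulate the sum and sum of squares of the pairwise distances,
    # discarding each block of distances as soon as it is consumed
    n = data.shape[0]
    total = 0.0
    total_sq = 0.0
    count = 0
    for i in range(n - 1):
        d = np.sqrt(((data[i + 1:] - data[i]) ** 2).sum(axis=1))
        total += d.sum()
        total_sq += (d ** 2).sum()
        count += d.size
    mean = total / count
    # population standard deviation recovered from the two running sums
    std = np.sqrt(total_sq / count - mean ** 2)
    return mean, std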

duffymo
+1 For the proof. Another option is to parallelize.
Michael Aaron Safyan
+3  A: 

If you have access to scipy, you could try the following:

scipy.spatial.distance.cdist(data, data)
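To get the average itself without the duplicated entries of the full square matrix, `pdist` (in the same module) returns each pair exactly once; a short sketch using the question's sample array:

import numpy as np
from scipy.spatial.distance import pdist

data = np.array([[115,  241,   314],
                 [153,  413,   144],
                 [535, 2986, 41445]], dtype=float)

# pdist yields the n*(n-1)/2 pairwise Euclidean distances in condensed
# form, so the average is just their mean
avg = pdist(data).mean()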

Nick_V
+2  A: 

Now that you've stated your goal of finding the outliers, you are probably better off computing the sample mean and, from it, the sample variance, since both are O(nd) operations. With those in hand, you can filter out outliers (e.g., excluding points farther from the mean than some multiple of the standard deviation), and that filtering is also O(nd), for an O(nd) total.

You might be interested in a refresher on Chebyshev's inequality.
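An illustrative sketch of that filtering step (the threshold k is an assumption, not something specified in the answer):

import numpy as np

def filter_outliers(data, k=3.0):
    # sample mean over all points: O(nd)
    mean = data.mean(axis=0)
    # distance of every point to the mean: O(nd)
    dists = np.sqrt(((data - mean) ** 2).sum(axis=1))
    # keep points within k standard deviations of the mean distance;
    # by Chebyshev's inequality, at most 1/k**2 of points lie beyond
    return data[dists <= dists.mean() + k * dists.std()]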

Michael Aaron Safyan
A: 

If you want a fast and inexact solution, you could probably adapt the Fast Multipole Method algorithm.

Points that are separated by only a small distance contribute less to the final average, so it would make sense to group nearby points into clusters and compare distances between the clusters; a rough sketch of that idea follows.
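A crude stand-in for that idea, assuming k-means clustering via scipy (this clusters once and treats each cluster as its centroid; it is not an actual Fast Multipole implementation, and it underestimates the average because intra-cluster distances are counted as zero):

import numpy as np
from scipy.cluster.vq import kmeans2

def approx_avg_distance(data, k=100):
    centroids, labels = kmeans2(data.astype(float), k, minit='points')
    counts = np.bincount(labels, minlength=k)
    # distances between every pair of cluster centroids
    d = np.sqrt(((centroids[:, None, :] - centroids[None, :, :]) ** 2).sum(-1))
    # weight each centroid pair by the number of point pairs it represents
    w = counts[:, None] * counts[None, :]
    n = data.shape[0]
    # n*n - n ordered pairs of distinct points
    return (w * d).sum() / (n * n - n)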

Luper Rouch
+2  A: 

Is it ever worthwhile to optimize without a working solution? Also, computing a distance matrix over the entire data set rarely needs to be fast, because you only do it once: when you need the distance between two points, you just look it up; it's already calculated.

So if you don't have a place to start, here's one. Doing this in NumPy without writing any inline Fortran or C should be no problem, though you may want to include the small vector-based virtual machine "numexpr" (available on PyPI, trivial to install), which in this case gave a 5x performance boost over NumPy alone.

Below I've calculated a distance matrix for 10,000 points in 2D space (a 10k x 10k matrix giving the distance between every pair of points). This took 59 seconds on my MBP.

import numpy as NP
import numexpr as NE

# data are points in 2D space (x, y); the same pattern extends to any
# dimension by adding one difference array per coordinate
x = NP.random.randint(0, 10, 10000)
y = NP.random.randint(0, 10, 10000)

# pairwise differences along one coordinate via broadcasting:
# fnx(q)[i, j] = q[j] - q[i]
fnx = lambda q: q - NP.reshape(q, (len(q), 1))
delX = fnx(x)
delY = fnx(y)

# numexpr evaluates the whole expression in one pass over the arrays
dist_mat = NE.evaluate("(delX**2 + delY**2)**0.5")
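From there, the average the question asks for is one more reduction (each pair appears twice in the square matrix and the diagonal is zero):

n = dist_mat.shape[0]
# the sum counts every ordered pair; divide by n*(n-1) to average over pairs
avg = dist_mat.sum() / (n * (n - 1))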
doug