views:

690

answers:

4

I'd like to improve the performance of convolution using Python, and was hoping for some insight on how best to go about it.

I am currently using scipy to perform the convolution, with code somewhat like the snippet below:

import numpy
import scipy
import scipy.signal
import timeit

# 1000x1000 test image of sequential integer values
a = numpy.arange(1000000).reshape(1000, 1000)
filt = numpy.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]])

def convolve():
    global a, filt
    scipy.signal.convolve2d(a, filt, mode="same")

t = timeit.Timer("convolve()", "from __main__ import convolve")
print("%.2f sec/pass" % (10 * t.timeit(number=10) / 100))

I am processing grayscale image data (integer values between 0 and 255), and I currently get about a quarter of a second per convolution. My thinking was to do one of the following:

  • Use corepy, preferably with some optimizations.
  • Recompile numpy with icc & MKL.
  • Use python-cuda.

I was wondering if anyone had experience with any of these approaches (what sort of gain would be typical, and whether it is worth the time), or if anyone is aware of a better library to perform convolution with Numpy.

Thanks!

EDIT:

I got a speed-up of about 10x by rewriting the Python loop in C rather than using Numpy.

A: 

A typical optimization for convolution is to use the FFT of your signal. The reason is that convolution in real space is a product in FFT space. It is often faster to compute the FFT, then the product, and then the inverse FFT of the result than to convolve the usual way.
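
A minimal sketch of that FFT route (my illustration, not part of the original answer): scipy.signal.fftconvolve already packages the FFT / product / inverse-FFT steps, so it can be tried as a drop-in replacement for convolve2d. Whether it beats the direct method for a kernel as small as 3x3 is something to measure.

import numpy
import scipy.signal

a = numpy.arange(1000000, dtype=float).reshape(1000, 1000)
filt = numpy.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=float)

# same output shape as convolve2d(a, filt, mode="same"), but computed via FFTs
result = scipy.signal.fftconvolve(a, filt, mode="same")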

PierreBdR
And do this with CUDA, and it will be really fast. If CUDA works in the target environment, it's likely to get the most performance... GPUs are very fast indeed. The only way CUDA wouldn't win is if the data transfer to the GPU and back starts to dominate the time.
Andrew McGregor
I wish data transfer back and forth to the video card were the problem! Any suggestions for pre-existing libraries?
Bear
The Fourier trick is good for large convolution kernels, but for the example shown it's only 3x3. The simple way is probably faster - but if the FFT uses CUDA while the simple way doesn't, there's no telling without measuring.
DarenW
+1  A: 

For the particular 3x3 kernel in the example, I'd observe that

1  1  1
1 -8  1
1  1  1

  1  1  1     0  0  0
= 1  1  1  +  0 -9  0
  1  1  1     0  0  0

and that the first of these is factorable (separable) - it can be applied by convolving (1 1 1) along each row, and then again along each column. Then subtract nine times the original data. This may or may not be faster, depending on whether the scipy programmers made it smart enough to do this automatically. (I haven't checked in a while.)

You probably want to do more interesting convolutions, where factoring may or may not be possible.
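
As an illustrative sketch of that factoring (my own, not from the original answer): scipy.signal.sepfir2d applies a separable row/column filter, so the all-ones part can be done in two 1-D passes. Note that sepfir2d uses mirror-symmetric boundaries rather than the zero padding of convolve2d, so results will differ along the edges.

import numpy
import scipy.signal

a = numpy.arange(1000000, dtype=numpy.float64).reshape(1000, 1000)
ones3 = numpy.array([1.0, 1.0, 1.0])

# 3x3 box sum via two 1-D passes, then subtract 9x the original pixel
result = scipy.signal.sepfir2d(a, ones3, ones3) - 9.0 * a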

DarenW
+5  A: 

The code in scipy for doing 2d convolutions is a bit messy and unoptimized. See http://svn.scipy.org/svn/scipy/trunk/scipy/signal/firfilter.c if you want a glimpse into how the scipy sausage is made.

If all you want is to process with a small, constant kernel like the one you showed, a function like this might work:

def specialconvolve(a):
    # sorry, you must pad the input yourself
    # sum each interior row with the rows above and below (1 1 1 down the columns)
    rowconvol = a[1:-1,:] + a[:-2,:] + a[2:,:]
    # sum each interior column of that (1 1 1 along the rows), minus 9x the centre pixel
    colconvol = rowconvol[:,1:-1] + rowconvol[:,:-2] + rowconvol[:,2:] - 9*a[1:-1,1:-1]
    return colconvol

This function takes advantage of the separability of the kernel, as DarenW suggested above, as well as of the more optimized numpy arithmetic routines. It's over 1000 times faster than the convolve2d function by my measurements.
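
A quick sanity check one might run (my addition, not part of the original answer) is to compare it against convolve2d on the interior of the image, where the two should agree exactly:

import numpy
import scipy.signal

a = numpy.arange(1000000).reshape(1000, 1000)
filt = numpy.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]])

fast = specialconvolve(a)
slow = scipy.signal.convolve2d(a, filt, mode="same")[1:-1, 1:-1]  # drop the border
print(numpy.allclose(fast, slow))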

Theran
Thank you for pointing that out, I hadn't considered that the scipy convolve could be that inefficient. It looks like, though I haven't checked that closely, scipy's convolve is doing quite a few memory-manipulation operations and has a number of if statements slowing things down. I'll post back the results, and thank you to everyone for your comments.
Bear
Yes, convolve2d is quite inefficient, as it deals with the general case (it handles arbitrary objects - you should be able to convolve an array of Decimal objects, for example). I think it could be considerably sped up by using special codepaths for the common case (in particular to avoid the function-pointer call inside the triple loop, which is very likely one of the hotspots).
David Cournapeau
+1  A: 

Before going to, say, C with ctypes, I'd suggest running a standalone convolve in C to see where the limit is.
Similarly for CUDA, cython, scipy.weave ...

Added 7 Feb: a 3x3 convolve (convolve33) of 8-bit data with clipping takes ~20 clock cycles per point, 2 clock cycles per memory access, on my Mac G4 PPC with gcc 4.2. Your mileage will vary.

A couple of subtleties:

  • do you care about correct clipping to 0..255? np.clip() is slow; for cython etc. I don't know (see the sketch after this list).
  • Numpy/scipy may need memory for temporaries the size of A (so keep 2*sizeof(A) < cache size).
    If your C code does a running update in place, though, that's half the memory but a different algorithm.
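
For illustration, the clipping step in numpy (my sketch, not from the original answer) would look something like the following; whether it dominates the runtime is worth timing separately:

import numpy

# example convolution output with values outside the 8-bit range
result = numpy.array([[-300.0, 40.0], [128.0, 900.0]])
# clamp to 0..255 and convert back to uint8 for display/storage
clipped = numpy.clip(result, 0, 255).astype(numpy.uint8)
print(clipped)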

By the way, googling "theano convolve" turns up "A convolution op that should mimic scipy.signal.convolve2d, but faster! In development".

Denis