views: 255
answers: 4

Hi, I have a number of C functions and I would like to call them from Python. Cython seems to be the way to go, but I can't really find an example of how exactly this is done. My C function looks like this:

void calculate_daily ( char *db_name, int grid_id, int year,
                       double *dtmp, double *dtmn, double *dtmx, 
                       double *dprec, double *ddtr, double *dayl, 
                       double *dpet, double *dpar ) ;

All I want to do is specify the first three parameters (a string and two integers) and get back 8 NumPy arrays (or Python lists; all the double arrays have N elements). My C code assumes that the pointers point to already allocated memory. Also, the produced C code has to link against some external libraries.
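
For example, from Python I'd hope to be able to do something roughly like this (the module name and the argument values are just what I imagine, not working code):

import numpy as np
import daily   # the compiled extension module I'm hoping to build

dtmp, dtmn, dtmx, dprec, ddtr, dayl, dpet, dpar = \
    daily.calculate_daily("my_database.db", 1, 2000)
print dtmp.shape   # each result should be a length-N array of doubles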

A: 

You should check out ctypes; it's probably the easiest thing to use if all you want is one function.
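
For example, something along these lines (an untested sketch; the library name libdaily.so, the value of N, and the argument values are just placeholders for your setup):

import ctypes
import numpy as np
from numpy.ctypeslib import ndpointer

lib = ctypes.CDLL("./libdaily.so")   # shared library containing calculate_daily

darray = ndpointer(dtype=np.double, ndim=1, flags="C_CONTIGUOUS")
lib.calculate_daily.restype = None   # the C function returns void
lib.calculate_daily.argtypes = [ctypes.c_char_p, ctypes.c_int, ctypes.c_int] + [darray] * 8

N = 365   # however many elements your arrays have
outs = [np.empty(N, dtype=np.double) for _ in range(8)]
lib.calculate_daily("my_database.db", 1, 2000, *outs)
dtmp, dtmn, dtmx, dprec, ddtr, dayl, dpet, dpar = outs
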

Yon
True, but I'd like to wrap other stuff using cython later, so this is my starting point :)
Jose
A: 

Consider subscribing to the cython-users list on Google Groups. That's the best place to ask this kind of question.

Stefan Behnel
That's a good suggestion :)
Jose
+1  A: 

Basically, you can write your Cython function so that it allocates the arrays (make sure you cimport numpy as np):

cdef np.ndarray[np.double_t, ndim=1] rr = np.zeros((N,), dtype=np.double)

then pass the .data pointer of each array to your C function. That should work. If you don't need the arrays to start out as zeros, you can use np.empty instead for a small speed boost.
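
Applied to the function in the question, a rough, untested sketch (the header name "daily.h" and the wrapper name are just guesses) might look like this; note the casts, since .data is a char*:

import numpy as np
cimport numpy as np

cdef extern from "daily.h":
    void calculate_daily( char *db_name, int grid_id, int year,
                          double *dtmp, double *dtmn, double *dtmx,
                          double *dprec, double *ddtr, double *dayl,
                          double *dpet, double *dpar )

def calculate_daily_py( char *db_name, int grid_id, int year, int N ):
    # allocate the 8 output arrays here, then hand their .data pointers to C
    cdef np.ndarray[np.double_t, ndim=1] dtmp  = np.zeros((N,), dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] dtmn  = np.zeros((N,), dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] dtmx  = np.zeros((N,), dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] dprec = np.zeros((N,), dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] ddtr  = np.zeros((N,), dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] dayl  = np.zeros((N,), dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] dpet  = np.zeros((N,), dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] dpar  = np.zeros((N,), dtype=np.double)
    calculate_daily( db_name, grid_id, year,
                     <double*> dtmp.data, <double*> dtmn.data, <double*> dtmx.data,
                     <double*> dprec.data, <double*> ddtr.data, <double*> dayl.data,
                     <double*> dpet.data, <double*> dpar.data )
    return dtmp, dtmn, dtmx, dprec, ddtr, dayl, dpet, dpar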

See the Cython for NumPy Users tutorial in the docs.

dwf
+3  A: 

Here's a tiny but complete example of passing numpy arrays to an external C function, logically

fc( int N, double* a, double* b, double* z )  # z = a + b

using cython. (This is surely well-known to those who know it well. Comments are welcome.)

First read or skim Cython build and Cython with NumPy.

4 steps:

  • cython f.pyx -> f.c
  • compile as usual, e.g. g++ -c -O2 -Wall fc.cpp -> fc.o
  • link: python f-setup.py build_ext --inplace -> f.so, a dynamic library
  • python test-f.py: import f loads f.so, f.fpy( ... ) is a wrapper for the C fc( ... )

For students, I'd suggest: make a diagram of these steps, look through the files below, then download and run them.

python f-setup.py uses distutils to compile f.c, then link f.o and fc.o into f.so, a dynamic library that python's import f will load. (distutils is a huge, convoluted package used to make Python packages for distribution and to install them. Here we're using just a small part of it to compile and link f.so.) This step has nothing to do with cython, but it can be confusing; simple mistakes in a .pyx can cause pages of obscure error messages from the g++ compile and link.

Like make, setup.py will rerun cython f.pyx and the g++ -c ... f.c compile if f.pyx is newer than f.c. To clean up, rm -r build/.

See also the distutils docs and/or SO questions on distutils.

(An alternative to setup.py would be to modify the cc-lib-mac wrapper below for your platform and installation: not pretty either, but small.)

Unfortunately, cython 0.12.1 maps a numpy array's .data to char*, not double*, so fc.cpp has to cast it to double* (or int*, or ...).
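
(Another option, not used in this demo, is to keep double* in the extern declaration and cast on the Cython side instead; an untested sketch, which would also need fc.h to declare double*:)

cdef extern from "fc.h":
    int fc( int N, double* a, double* b, double* z )

def fpy( N, np.ndarray[np.double_t] A, np.ndarray[np.double_t] B, np.ndarray[np.double_t] Z ):
    return fc( N, <double*> A.data, <double*> B.data, <double*> Z.data )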

For real examples of cython wrapping C, look at .pyx files in just about any SciKit .

Added 21 June: see also wrap-c-lib-with-cython and wrapping-a-c-library-in-python-c-cython-or-ctypes


To unpack the following files, cut and paste the lot into one big file, say cython-numpy-c-demo, then on Unix (in a clean new directory) run sh cython-numpy-c-demo.

#--------------------------------------------------------------------------------
cat >f.pyx <<\!
# f.pyx: numpy arrays -> extern from "fc.h"
# 4 steps:
# cython f.pyx  -> f.c
# g++ -c fc.cpp  -> fc.o
# link: python f-setup.py build_ext --inplace  -> f.so, a dynamic library
# py test-f.py: import f gets f.so, f.fpy below calls fc()

import numpy as np
cimport numpy as np

cdef extern from "fc.h": 
    int fc( int N, char* a, char* b, char* z )  # z = a + b
    # not double*:

def fpy( N, np.ndarray[np.double_t] A, np.ndarray[np.double_t] B, np.ndarray[np.double_t] Z ):
    """ wrap np arrays to fc( a.data ... ) """
    assert N <= len(A) == len(B) == len(Z)
    fcret = fc( N, A.data, B.data, Z.data )
    return fcret

!

#--------------------------------------------------------------------------------
cat >fc.h <<\!
// fc.h: numpy arrays from cython , char* not double*
int fc( int N, const char* a, const char* b, char* z );
!

#--------------------------------------------------------------------------------
cat >fc.cpp <<\!
// fc.cpp: z = a + b, numpy arrays from cython

#include "fc.h"  // char* for cython
#include <stdio.h>


int fcreal( int N, const double a[], const double b[], double z[] )
{
    printf( "fc: N=%d a[0]=%f b[0]=%f \n", N, a[0], b[0] );
    for( int j = 0;  j < N;  j ++ ){
        z[j] = a[j] + b[j];
    }
    return N;
}

int fc( int N, const char* a, const char* b, char* z )  // cython 0.12.1 char*
{
    return fcreal( N, (double*) a, (double*) b, (double*) z );
}
!

#--------------------------------------------------------------------------------
cat >f-setup.py <<\!
# python f-setup.py build_ext --inplace
#   f.pyx fc.o -> f.so

# http://docs.python.org/distutils/introduction.html
# http://docs.python.org/distutils/apiref.html  20 pages ...
# http://stackoverflow.com/questions/tagged/distutils+python

import numpy
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension(
    name="f",
    sources=["f.pyx"],
    extra_objects=["fc.o"],
        # libraries=
    include_dirs = [numpy.get_include()],  # .../site-packages/numpy/core/include
    language="c++",  # grr, mac g++ -dynamic != gcc -dynamic
    )]

setup(
    name = 'f',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules
    )

# test: import f
!

#--------------------------------------------------------------------------------
cat >test-f.py <<\!
#!/usr/bin/env python
# test-f.py

import numpy as np
import f  # loads f.so from cc-lib: f.pyx -> f.c + fc.o -> f.so

N = 3
a = np.arange( N, dtype=np.float64 )
b = np.arange( N, dtype=np.float64 )
z = np.ones( N, dtype=np.float64 ) * np.NaN

fret = f.fpy( N, a, b, z )
print "fpy -> fc z:", z

!

#--------------------------------------------------------------------------------
cat >cc-lib-mac <<\!
#!/bin/sh
me=${0##*/}
case $1 in
"" )
    set --  f.c fc.cpp ;;  # default: g++ these
-h* | --h* )
    echo "
$me [g++ flags] xx.c yy.cpp zz.o ...
    compiles .c .cpp .o files to a dynamic lib xx.so
"
    exit 1
esac

# Logically this is simple, compile and link,
# but platform-dependent, layers upon layers, gloom, doom

base=${1%.c*}
base=${base%.o}
set -x

g++ -dynamic -arch ppc \
    -bundle -undefined dynamic_lookup \
    -fno-strict-aliasing -fPIC -fno-common -DNDEBUG `# -g` -fwrapv \
    -isysroot /Developer/SDKs/MacOSX10.4u.sdk \
    -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 \
    -I${Pysite?}/numpy/core/include \
    -O2 -Wall \
    "$@" \
    -o $base.so

# undefs: nm -gpv $base.so | egrep '^ *U _+[^P]'
!

# 21 Jun 2010 18:40
Denis