Is there some way to make boost::python control the Python GIL for every interaction with Python?

I am writing a project with boost::python. I am trying to write a C++ wrapper for an external library and control it with Python scripts. I cannot change the external library, only my wrapper program. (I am writing a functional testing application for said external library.)

The external library is written in C and uses function pointers and callbacks to do a lot of heavy lifting. It's a messaging system, so when a message comes in, a callback function gets called, for example.

I implemented an observer pattern in my library so that multiple objects could listen to one callback. I have all the major players exported properly and I can control things very well up to a certain point.
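
Roughly, the observer side looks like this (a simplified sketch, not my exact code; receivedMsg and attachObserver are the real names, the rest is illustrative):

#include <cstddef>
#include <vector>

class Message;                                   // comes from the C library wrapper

namespace Observers
{
    struct IConnectionObserver
    {
        virtual ~IConnectionObserver() {}
        // Default implementation does nothing; observers override it.
        virtual void receivedMsg( const Message* /*msg*/ ) {}
    };
}

class Connection
{
public:
    void attachObserver( Observers::IConnectionObserver* obs )
    {
        m_observers.push_back( obs );
    }

    // Called from the C library's message callback, on the connection's thread.
    void notifyReceivedMsg( const Message* msg )
    {
        for( std::size_t i = 0; i < m_observers.size(); ++i )
            m_observers[i]->receivedMsg( msg );
    }

private:
    std::vector<Observers::IConnectionObserver*> m_observers;
};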

The external library creates threads to handle messages, send messages, do processing, etc. Some of these callbacks might be called from different threads, and I recently found out that Python is not thread safe.

These observers can be defined in Python, so I need to be able to call into Python, and Python needs to be able to call into my program, at any point.

I set up the connection and observer like so:

class TestObserver( MyLib.ConnectionObserver ):
    def receivedMsg( self, msg ):
        print("Received a message!")

ob = TestObserver()
cnx = MyLib.Connection()
cnx.attachObserver( ob )

Then I create a source to send to the connection and the receivedMsg function is called.

So a regular source.send('msg') call goes into my C++ app and down to the C library, which sends the message; the connection receives it and fires the callback, which comes back into my C++ library, where the connection notifies all of its observers; at this point that includes the Python class above, so its receivedMsg method gets called.

And of course the callback is called from the connection thread, not the main application thread.

Yesterday everything was crashing; I could not send a single message. Then, after digging around in the Cplusplus-sig archives, I learned about the GIL and a couple of nifty functions to lock things up.

So my C++ Python wrapper for my observer class now looks like this:

struct IConnectionObserver_wrapper : Observers::IConnectionObserver, wrapper<Observers::IConnectionObserver>
{
    void receivedMsg( const Message* msg )
    {
        // Make sure this thread holds the GIL before touching any Python objects
        PyGILState_STATE gstate = PyGILState_Ensure();
        if( override receivedMsg_func = this->get_override( "receivedMsg" ) )
            receivedMsg_func( msg );
        Observers::IConnectionObserver::receivedMsg( msg );
        PyGILState_Release( gstate );
    }
};
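
As an aside, the Ensure/Release pair can be wrapped in a small scope guard so it stays balanced on every exit path (including exceptions). This is just a sketch, not something boost::python provides:

#include <Python.h>

// RAII guard around PyGILState_Ensure/Release.
class ScopedGILLock
{
public:
    ScopedGILLock() : m_state( PyGILState_Ensure() ) {}
    ~ScopedGILLock() { PyGILState_Release( m_state ); }

private:
    PyGILState_STATE m_state;
};

// receivedMsg() above could then declare "ScopedGILLock lock;" at the top and
// drop the explicit Ensure/Release calls.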

And that WORKS. However, when I try to send over 250 messages, like so:

for i in range(250):
    source.send('msg')

it crashes again, with the same error message and symptoms as before:

PyThreadState_Get: no current thread

so I am thinking that this time I have a problem calling into my C++ app, rather than calling into Python.
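
Presumably the wrapper that exposes send to Python needs to drop the GIL while the C library does its work, something like this sketch (Source and Source_send_wrapper are made-up names, not my actual code):

#include <Python.h>
#include <string>

// "Source" stands in for the wrapped class; send() is the call that goes down
// into the C library.
struct Source
{
    void send( const std::string& msg );
};

// Exposed to Python instead of Source::send directly: it releases the GIL
// while the C library runs, so the connection thread's callback can take it
// with PyGILState_Ensure, and re-acquires it before returning to Python.
void Source_send_wrapper( Source& self, const std::string& msg )
{
    PyThreadState* state = PyEval_SaveThread();   // release the GIL
    try
    {
        self.send( msg );                         // potentially long call into the C library
    }
    catch( ... )
    {
        PyEval_RestoreThread( state );            // re-acquire before the exception propagates
        throw;
    }
    PyEval_RestoreThread( state );                // re-acquire the GIL
}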

My question is: is there some way to make boost::python handle the GIL itself for every interaction with Python? I cannot find anything in the code, and it's really hard trying to find where the source.send call enters boost::python :(

+1  A: 

I found a really obscure post on the mailing list that said to call PyEval_InitThreads() in BOOST_PYTHON_MODULE, and that actually seemed to stop the crashes.

It's still a crap shoot whether the program reports all the messages it got or not. If I send 2000, most of the time it says it got 2000, but sometimes it reports significantly fewer.

I suspect this might be due to the threads accessing my counter at the same time, but that is a different problem, so I am posting this as the answer.

To fix it, just do:

BOOST_PYTHON_MODULE(MyLib)
{
    // Initialize Python's threading/GIL machinery before any other thread calls in
    PyEval_InitThreads();

    class_ stuff ...
}
Charles
The documentation is here, fwiw: http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock
Jason Orendorff
+3  A: 

Don't know about your problem exactly, but take a look at CallPolicies:

http://www.boost.org/doc/libs/1_37_0/libs/python/doc/v2/CallPolicies.html#CallPolicies-concept

You can define new call policies (one call policy is "return_internal_reference" for instance) that will execute some code before and/or after the wrapped C++ function is executed. I have successfully implemented a call policy to automatically release the GIL before executing a C++ wrapped function and acquiring it again before returning to Python, so I can write code like this:

.def( "long_operation", &long_operation, release_gil<>() );

A call policy might help you in writing this code more easily.
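
For illustration, a rough sketch of the shape such a policy can take (release_gil here is my own name, not part of boost::python, and the sketch is simplified; see the comments below about conversions boost::python performs while the GIL is released):

#include <boost/python.hpp>

// Derives from default_call_policies and overrides precall/postcall to drop
// the GIL around the wrapped C++ call.
template <class BasePolicy = boost::python::default_call_policies>
struct release_gil : BasePolicy
{
    // Runs just before the wrapped C++ function: release the GIL.
    template <class ArgumentPackage>
    static bool precall( ArgumentPackage const& args )
    {
        saved_state() = PyEval_SaveThread();
        return BasePolicy::precall( args );
    }

    // Runs after the call: take the GIL back before returning to Python.
    template <class ArgumentPackage>
    static PyObject* postcall( ArgumentPackage const& args, PyObject* result )
    {
        PyEval_RestoreThread( saved_state() );
        saved_state() = 0;
        return BasePolicy::postcall( args, result );
    }

private:
    // One saved thread state per calling thread (C++11 thread_local; older
    // compilers could use boost::thread_specific_ptr instead).
    static PyThreadState*& saved_state()
    {
        static thread_local PyThreadState* state = 0;
        return state;
    }
};

It would be used exactly as in the .def line above.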

Bruno Oliveira
This is a good idea, certainly cleaner than calling ensure/release all the time, thanks!
Charles
@Bruno This doesn't work because after the precall it returns to the interpreter to do some type conversions. It then crashes, because the GIL is released. Update your answer with a link to your other question!
e.tadeu
@e.tadeu is right, I found a problem later (apparently) regarding the type conversions made by boost::python after precall was called, leading to interpreter crashes. Another solution, just as easy to use, can be found here (thanks e.tadeu for helping, btw): http://stackoverflow.com/questions/2135457/how-to-write-a-wrapper-over-functions-and-member-functions-that-executes-some-cod
Bruno Oliveira
+1  A: 

I think the best approach is to avoid the GIL and ensure your interaction with Python is single-threaded.

I'm designing a boost::python based test tool at the moment, and I think I'll probably use a producer/consumer queue to dispatch events from the multi-threaded libraries, which will be read sequentially by the Python thread.
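
A minimal sketch of the idea (illustrative only; names, the payload type, and the use of C++11 threading are my choices here): library threads push events onto a locked queue, and one dedicated thread (the only one that ever touches Python) pops them and calls the observers.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

class EventQueue
{
public:
    // Called from the C library's worker threads.
    void push( const std::string& msg )
    {
        {
            std::lock_guard<std::mutex> lock( m_mutex );
            m_events.push( msg );
        }
        m_cond.notify_one();
    }

    // Called only from the single Python-facing thread; blocks until an
    // event is available.
    std::string pop()
    {
        std::unique_lock<std::mutex> lock( m_mutex );
        while( m_events.empty() )
            m_cond.wait( lock );
        std::string msg = m_events.front();
        m_events.pop();
        return msg;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cond;
    std::queue<std::string> m_events;
};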

James Taylor
I was keeping this as an option in case I could not figure things out, but in my opinion at the time, writing a thread-safe event dispatcher would have been more complicated than just editing boost::python, which is what I ended up doing. There is a project called TxFox (or something) which supports multi-threaded Python; I took its patches, made a few changes of my own, and now my boost::python manages all the GIL stuff itself, leaving the library free to not worry about it.
Charles