views: 317

answers: 2
This is more of a style question. For CPU-bound processes that really benefit from having multiple cores, do you typically use the multiprocessing module, or use threads with an interpreter that doesn't have the GIL? I've only used the multiprocessing library lightly, and I have no experience with anything besides CPython. I'm curious what the preferred approach is, and if it's to use a different interpreter, which one.
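For reference, a minimal sketch of the multiprocessing approach on stock CPython; the worker function and the inputs are just placeholders:

    # Minimal multiprocessing sketch for a CPU-bound task; each worker
    # is a separate process, so the GIL does not serialize the work.
    from multiprocessing import Pool

    def cpu_bound(n):
        # stand-in for real CPU-heavy work
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        pool = Pool()                       # defaults to one process per core
        results = pool.map(cpu_bound, [1000000] * 8)
        pool.close()
        pool.join()
        print(results)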

+1  A: 

Take a look at Parallel Python (www.parallelpython.com) -- I've used it to nicely split up work among the processors on my quad-core box. It even supports clusters!
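The usual pattern with pp looks roughly like this (a sketch from memory of the pp API, so treat the exact Server()/submit() signatures as approximate):

    # Rough Parallel Python (pp) usage sketch; signatures approximate.
    import pp

    def cpu_bound(n):
        return sum(i * i for i in range(n))

    job_server = pp.Server()                # autodetects the number of CPUs
    jobs = [job_server.submit(cpu_bound, (1000000,)) for _ in range(8)]
    results = [job() for job in jobs]       # calling a job blocks until it finishes
    print(results)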

scrible
Ideally you would do some benchmarking to determine how many Python processes per CPU can be handled without degrading performance. I would start with three Python processes per core and work upwards.
Michael Dillon
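One rough way to run that kind of benchmark (the workload and the worker counts tried here are arbitrary placeholders):

    # Benchmarking sketch: time the same CPU-bound workload with an
    # increasing number of worker processes and watch where the
    # speedup stops improving.
    import time
    from multiprocessing import Pool, cpu_count

    def cpu_bound(n):
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        tasks = [500000] * 32
        for workers in (1, cpu_count(), 2 * cpu_count(), 3 * cpu_count()):
            pool = Pool(workers)
            start = time.time()
            pool.map(cpu_bound, tasks)
            pool.close()
            pool.join()
            print(workers, "workers:", time.time() - start, "seconds")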
+3  A: 

I don't really see a "style" argument to be made here either way -- both multiprocessing in CPython 2.6, and threading in (e.g.) the current versions of Jython and IronPython, let you code in extremely similar ways (and styles;-). So I'd choose on the basis of very "hard-nosed" considerations: what performance is like with each choice (if I'm so CPU-bound as to benefit from multiple cores, performance is obviously of paramount importance), whether I could seriously benefit from a library that's CPython-only (like numpy), or from something that's JVM- or .NET-only, and so forth.

Alex Martelli
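To make the "extremely similar ways (and styles)" point concrete, here is a small sketch; the worker function and worker counts are placeholders:

    # threading.Thread and multiprocessing.Process share the same
    # target/args/start/join API, so the choice between them can be
    # driven purely by which runtime performs better for the workload.
    import threading
    import multiprocessing

    def cpu_bound(n):
        return sum(i * i for i in range(n))

    def run(worker_cls):
        workers = [worker_cls(target=cpu_bound, args=(1000000,)) for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

    if __name__ == "__main__":
        run(threading.Thread)          # pays off on GIL-free runtimes (Jython, IronPython)
        run(multiprocessing.Process)   # pays off on CPython via separate processes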