So I knocked up some test code to see how the multiprocessing module would scale on CPU-bound work compared to threading. On Linux I get the performance increase that I'd expect:

Linux (dual quad-core Xeon):
serialrun took 1192.319 ms
parallelrun took 346.727 ms
threadedrun took 2108.172 ms

My dual-core MacBook Pro shows the same behavior:

OS X (dual-core MacBook Pro):
serialrun took 2026.995 ms
parallelrun took 1288.723 ms
threadedrun took 5314.822 ms

I then went and tried it on a Windows machine and got some very different results.

Windows (i7 920):
serialrun took 1043.000 ms
parallelrun took 3237.000 ms
threadedrun took 2343.000 ms

Why, oh why, is the multiprocessing approach so much slower on Windows?

Here's the test code:

#!/usr/bin/env python

import multiprocessing
import threading
import time

def print_timing(func):
    def wrapper(*arg):
        t1 = time.time()
        res = func(*arg)
        t2 = time.time()
        print '%s took %0.3f ms' % (func.func_name, (t2-t1)*1000.0)
        return res
    return wrapper


def counter():
    # pure CPU-bound busy work
    for i in xrange(1000000):
        pass

@print_timing
def serialrun(x):
    for i in xrange(x):
        counter()

@print_timing
def parallelrun(x):
    proclist = []
    for i in xrange(x):
        p = multiprocessing.Process(target=counter)
        proclist.append(p)
        p.start()

    for i in proclist:
        i.join()

@print_timing
def threadedrun(x):
    threadlist = []
    for i in xrange(x):
        t = threading.Thread(target=counter)
        threadlist.append(t)
        t.start()

    for i in threadlist:
        i.join()

def main():
    serialrun(50)
    parallelrun(50)
    threadedrun(50)

if __name__ == '__main__':
    main()
Answer (+4):

It's been said that creating processes on Windows is more expensive than on Linux. If you search around the site you will find some information. Here's one I found easily.

Duck
Answer (+7):

It might be that processes are much lighter-weight under UNIX variants. Windows processes are heavyweight and take much more time to start up. Threads are the recommended way of doing parallel work on Windows.
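If startup cost is the culprit, it should show up even when the child does no work at all. A minimal sketch (mine, not from this answer) that times one no-op Process against one no-op Thread:

import multiprocessing
import threading
import time

def noop():
    pass

def time_one(cls):
    # start and join a single worker with an empty target, so the
    # measured time is almost entirely creation/startup overhead
    t1 = time.time()
    worker = cls(target=noop)
    worker.start()
    worker.join()
    return (time.time() - t1) * 1000.0

if __name__ == '__main__':
    print 'one Process: %0.3f ms' % time_one(multiprocessing.Process)
    print 'one Thread:  %0.3f ms' % time_one(threading.Thread)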

Byron Whitlock
Oh, interesting. Would that mean that changing the balance of the test, say counting higher but fewer times, would let Windows reclaim some multiprocessing performance? I shall give it a go.
manghole
Tried recalibrating to counting to 10,000,000 over 8 iterations, and the results are more in Windows' favor:

serialrun took 1651.000 ms
parallelrun took 696.000 ms
threadedrun took 3665.000 ms
manghole
Answer (+7):

The Python documentation for multiprocessing blames the lack of os.fork() for the problems on Windows. It may be applicable here.
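Without fork(), Windows has to start each child by launching a fresh interpreter and re-importing the main module, which adds per-process overhead. A small illustration (my sketch, not from the docs) of why the __main__ guard matters there:

import multiprocessing

# Runs once in the parent on Linux (fork copies the parent), but once
# per child as well on Windows, where each Process re-imports __main__.
print 'module-level code running'

def worker():
    pass

if __name__ == '__main__':   # the guard Windows needs for exactly this reason
    procs = [multiprocessing.Process(target=worker) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()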

See what happens when you import psyco. First, easy_install it:

C:\Users\hughdbrown>\Python26\scripts\easy_install.exe psyco
Searching for psyco
Best match: psyco 1.6
Adding psyco 1.6 to easy-install.pth file

Using c:\python26\lib\site-packages
Processing dependencies for psyco
Finished processing dependencies for psyco

Add this to the top of your Python script:

import psyco
psyco.full()

I get these results without:

serialrun took 1191.000 ms
parallelrun took 3738.000 ms
threadedrun took 2728.000 ms

I get these results with:

serialrun took 43.000 ms
parallelrun took 3650.000 ms
threadedrun took 265.000 ms

Parallel is still slow, but the others burn rubber.

Edit: also, try it with the multiprocessing pool. (This is my first time trying this and it is so fast, I figure I must be missing something.)

@print_timing
def parallelpoolrun(reps):
    pool = multiprocessing.Pool(processes=4)
    # note: apply_async() returns immediately, and nothing here waits
    # for the work to finish (see the comments below)
    result = pool.apply_async(counter, (reps,))

Results:

C:\Users\hughdbrown\Documents\python\StackOverflow>python  1289813.py
serialrun took 57.000 ms
parallelrun took 3716.000 ms
parallelpoolrun took 128.000 ms
threadedrun took 58.000 ms
hughdbrown
+1 for the nice optimization.
Byron Whitlock
Very neat! Lowering the number of iterations (processes) while raising the count-to value shows, as Byron said, that the parallel slowness comes from the added setup time of Windows processes.
manghole
The Pool does not seem to wait for its work to complete; there is a join() method on Pool, but it doesn't seem to do what I think it should do :P.
manghole
Yeah, I was afraid I had this wrong.
hughdbrown
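A corrected sketch (untested, and assuming counter is reworked to take its loop count as an argument, which the original version did not) that makes the Pool actually wait before the timer stops, reusing the question's print_timing decorator:

def counter(n):
    # reworked to take the loop count (assumption; the original took none)
    for i in xrange(n):
        pass

@print_timing
def parallelpoolrun(x):
    pool = multiprocessing.Pool(processes=4)
    results = [pool.apply_async(counter, (1000000,)) for i in xrange(x)]
    pool.close()   # no more tasks will be submitted
    pool.join()    # join() requires close() first; now the Pool really waits
    for r in results:
        r.get()    # also re-raises any exception from a worker

The original version returned before any work ran, and because the original counter() takes no arguments, the apply_async() call would actually have failed inside the worker; that error only surfaces when get() is called, which the original never did.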
Answer:

Currently, your counter() function is not modifying much state. Try changing counter() so that it modifies many pages of memory while still doing CPU-bound work, and see if there is still a large disparity between Linux and Windows.
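For example, something along these lines (an untested sketch of one way to do it):

def counter():
    # build a large list and write across it, so the loop dirties many
    # memory pages instead of just spinning
    data = [0] * (4 * 1024 * 1024)       # pointer array spans many MB
    for i in xrange(0, len(data), 512):  # roughly one write per ~4 KB of it
        data[i] = i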

I'm not running Python 2.6 right now, so I can't try it myself.

Karl Voigtland