views: 769
answers: 4

I have a very large (read only) array of data that I want to be processed by multiple processes in parallel.

I like the Pool.map function and would like to use it to calculate functions on that data in parallel.

I saw that one can use the Value or Array class to share memory between processes. But when I try to use this with the Pool.map function I get a RuntimeError: 'SynchronizedString objects should only be shared between processes through inheritance'.

Here is a simplified example of what I am trying to do:

from sys import stdin
from multiprocessing import Pool, Array

def count_it( arr, key ):
  count = 0
  for c in arr:
    if c == key:
      count += 1
  return count

if __name__ == '__main__':
  testData = "abcabcs bsdfsdf gdfg dffdgdfg sdfsdfsd sdfdsfsdf"
  # want to share it using shared memory
  toShare = Array('c', testData)

  # this works
  print count_it( toShare, "a" )

  pool = Pool()

  # RuntimeError here
  print pool.map( count_it, [(toShare,key) for key in ["a", "b", "s", "d"]] )

Can anyone tell me what I am doing wrong here?

So what I would like to do is pass information about a newly allocated shared memory array to the processes after they have been created in the process pool.

A: 

If the data is read-only, just make it a variable in a module before the fork from Pool. Then all the child processes should be able to access it, and it won't be copied provided you don't write to it.

from multiprocessing import Pool
import myglobals # anything (empty .py file)
myglobals.data = []

def count_it( key ):
    count = 0
    for c in myglobals.data:
        if c == key:
            count += 1
    return count

if __name__ == '__main__':
    myglobals.data = "abcabcs bsdfsdf gdfg dffdgdfg sdfsdfsd sdfdsfsdf"

    pool = Pool()
    print pool.map( count_it, ["a", "b", "s", "d"] )

If you do want to try to use Array, though, you could try it with the lock=False keyword argument (it is True by default).
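
For illustration, a minimal sketch of what that allocation might look like (reusing the testData string from the question; with lock=False the call returns the raw shared ctypes array rather than a synchronized wrapper):

from multiprocessing import Array

testData = "abcabcs bsdfsdf gdfg dffdgdfg sdfsdfsd sdfdsfsdf"
# lock=False: no process-safe lock wrapper, fine for read-only access
toShare = Array('c', testData, lock=False)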

thrope
I do not believe the use of globals is safe, and it would certainly not work on Windows, where the processes are not forked.
James Dean
How is it not safe? If you only need read access to the data it is fine. If you write to it by mistake, then the modified page will be copied-on-write for the child process, so nothing bad will happen (it wouldn't interfere with other processes, for example). You're right that it won't work on Windows though...
thrope
You are right that it is safe on fork-based platforms. But I would like to know if there is a shared-memory-based way to share large amounts of data after the process pool is created.
James Dean
+2  A: 

The problem I see is that Pool doesn't support pickling shared data through its argument list. That's what the error message means by "objects should only be shared between processes through inheritance". The shared data needs to be inherited, i.e., made global, if you want to share it using the Pool class.

If you need to pass it explicitly, you may have to use multiprocessing.Process. Here is your reworked example:

from multiprocessing import Process, Array, Queue

def count_it( q, arr, key ):
  count = 0
  for c in arr:
    if c == key:
      count += 1
  q.put((key, count))

if __name__ == '__main__':
  testData = "abcabcs bsdfsdf gdfg dffdgdfg sdfsdfsd sdfdsfsdf"
  # want to share it using shared memory
  toShare = Array('c', testData)

  q = Queue()
  keys = ['a', 'b', 's', 'd']
  workers = [Process(target=count_it, args = (q, toShare, key))
    for key in keys]

  for p in workers:
    p.start()
  for p in workers:
    p.join()
  while not q.empty():
    print q.get(),

Output: ('s', 9) ('a', 2) ('b', 3) ('d', 12)

The ordering of elements of the queue may vary.

To make this more generic and similar to Pool, you could create a fixed number N of Processes, split the list of keys into N pieces, and then use a wrapper function as the Process target, which will call count_it for each key in the list it is passed, like:

def wrapper( q, arr, keys ):
  for k in keys:
    count_it(q, arr, k)
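
A rough sketch of how that could be driven (the worker count and chunking scheme here are illustrative; it reuses the imports, count_it and wrapper defined above):

if __name__ == '__main__':
  testData = "abcabcs bsdfsdf gdfg dffdgdfg sdfsdfsd sdfdsfsdf"
  toShare = Array('c', testData)
  q = Queue()

  keys = ['a', 'b', 's', 'd']
  N = 2                                    # number of worker processes (illustrative)
  chunks = [keys[i::N] for i in range(N)]  # split the keys into N pieces

  workers = [Process(target=wrapper, args=(q, toShare, chunk))
             for chunk in chunks]
  for p in workers:
    p.start()
  for p in workers:
    p.join()
  while not q.empty():
    print q.get(),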
jwilson
+2  A: 

Trying again as I just saw the bounty ;)

Basically I think the error message means what it says - multiprocessing shared memory Arrays can't be passed as arguments (by pickling). It doesn't make sense to serialise the data - the point is that the data is shared memory. So you have to make the shared array global. I think it's neater to make it an attribute of a module, as in my first answer, but just leaving it as a global variable in your example also works well. Taking on board your point about not wanting to set the data before the fork, here is a modified example. If you wanted to have more than one possible shared array (and that's why you wanted to pass toShare as an argument), you could similarly make a global list of shared arrays and just pass the index to count_it (the loop would become for c in toShare[i]:); a sketch of that variant follows the example below.

from sys import stdin
from multiprocessing import Pool, Array, Process

def count_it( key ):
  count = 0
  for c in toShare:
    if c == key:
      count += 1
  return count

if __name__ == '__main__':
  # allocate shared array - want lock=False in this case since we 
  # aren't writing to it and want to allow multiple processes to access
  # at the same time - I think with lock=True there would be little or 
  # no speedup
  maxLength = 50
  toShare = Array('c', maxLength, lock=False)

  # fork
  pool = Pool()

  # can set data after fork
  testData = "abcabcs bsdfsdf gdfg dffdgdfg sdfsdfsd sdfdsfsdf"
  if len(testData) > maxLength:
      raise ValueError, "Shared array too small to hold data"
  toShare[:len(testData)] = testData

  print pool.map( count_it, ["a", "b", "s", "d"] )
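
For the multiple-arrays variant mentioned above, a minimal sketch (the number and size of the arrays are illustrative; like the example above it relies on fork, so it won't work on Windows):

from multiprocessing import Pool, Array

def count_it( (i, key) ):
  # i indexes into the global list of shared arrays
  count = 0
  for c in toShare[i]:
    if c == key:
      count += 1
  return count

if __name__ == '__main__':
  # allocate a list of shared arrays before the fork (sizes are illustrative)
  toShare = [Array('c', 50, lock=False), Array('c', 50, lock=False)]

  pool = Pool()

  # fill one of the arrays after the fork
  testData = "abcabcs bsdfsdf gdfg dffdgdfg sdfsdfsd sdfdsfsdf"
  toShare[0][:len(testData)] = testData

  # each task says which shared array to read and which character to count
  print pool.map( count_it, [(0, key) for key in ["a", "b", "s", "d"]] )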

EDIT: The above doesn't work on Windows because it doesn't use fork. However, the version below does work on Windows, still using Pool, so I think this is the closest to what you want:

from sys import stdin
from multiprocessing import Pool, Array, Process
import mymodule

def count_it( key ):
  count = 0
  for c in mymodule.toShare:
    if c == key:
      count += 1
  return count

def initProcess(share):
  mymodule.toShare = share

if __name__ == '__main__':
  # allocate shared array - want lock=False in this case since we 
  # aren't writing to it and want to allow multiple processes to access
  # at the same time - I think with lock=True there would be little or 
  # no speedup
  maxLength = 50
  toShare = Array('c', maxLength, lock=False)

  # fork
  pool = Pool(initializer=initProcess, initargs=(toShare,))

  # can set data after fork
  testData = "abcabcs bsdfsdf gdfg dffdgdfg sdfsdfsd sdfdsfsdf"
  if len(testData) > maxLength:
      raise ValueError, "Shared array too small to hold data"
  toShare[:len(testData)] = testData

  print pool.map( count_it, ["a", "b", "s", "d"] )

Not sure why map won't pickle the array but Process and Pool will - I think perhaps it has to be transferred at the point of subprocess initialization on Windows. Note that the data is still set after the fork though.

thrope
Even on platforms with fork you can not insert new shared data into toShare after the fork since each process will have its own independent copy at that point.
James Dean
So the real problem seems to be how we can pickle the information about an Array so it can be sent to and accessed from the other process.
James Dean
@James - no that's not right. The array has to be set up before the fork, but then it is shared memory that can be changed, with changes visible across all children. Look at the example - I put the data into the array *after* the fork (which occurs when Pool() is instantiated). That data could be obtained at run time, after the fork, and as long as it fits into the preallocated shared memory segment it can be copied there and seen by all children.
thrope
You can pickle the Array, but not using Pool.
jwilson
Edited to add a working Windows version, using only Pool (by passing the shared array as an initialization parameter).
thrope
You are getting closer, but there is still the issue that the toShare array length has to be fixed before the pool is created. So you are still creating the shared memory segment before the processes are created. What I really want to see as a general solution is a way to create a new variable-length shared array after the pool is created, pass info about it to the worker process, and have it read from it.
James Dean
I'm afraid that isn't possible with Pool. You have to create the shared memory beforehand.
thrope
In any case it seems an artificial requirement. If the new set of data is the wrong size for the current shared buffer, you can just close the pool (`pool.close()`), create a new shared array of the required size and open a new pool. For any computational task where using multiprocessing is worth it, the overhead of closing and opening the pool will be tiny. And the Pool operations are relatively atomic - so it is not like you could inject fresh data in the middle of a map command.
thrope
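
A rough sketch of that close-and-recreate approach, reusing the initializer pattern from the Windows example above (the sizes and the makePool helper are illustrative, not from the original discussion):

import mymodule
from multiprocessing import Pool, Array

def initProcess(share):
  mymodule.toShare = share

def makePool(size):
  # allocate a fresh shared array of the required size and a pool built around it
  share = Array('c', size, lock=False)
  return Pool(initializer=initProcess, initargs=(share,)), share

if __name__ == '__main__':
  pool, share = makePool(50)
  # ... fill share and run pool.map(count_it, ...) as in the example above ...
  pool.close()
  pool.join()

  # new data too big for the old buffer? allocate a bigger array and a new pool
  pool, share = makePool(200)
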
The assertion on pickling the shared data array seems to be an artificial constraint on using the shared resource with multiprocessing, but given that constraint you have provided some reasonable workarounds, so I will give you the points for the accepted answer.
James Dean
A: 

The multiprocessing.sharedctypes module provides functions for allocating ctypes objects from shared memory which can be inherited by child processes.

So your usage of sharedctypes is wrong. Do you wish to inherit this array from the parent process, or do you prefer to pass it explicitly? In the former case you have to create a global variable, as the other answers suggest. But you don't need to use sharedctypes to pass it explicitly; just pass the original testData.

BTW, your usage of Pool.map() is wrong. It has the same interface as the builtin map() function (did you mix it up with starmap()?). Below is a working example, passing the array explicitly:

from multiprocessing import Pool

def count_it( (arr, key) ):
    count = 0
    for c in arr:
        if c == key:
            count += 1
    return count

if __name__ == '__main__':
    testData = "abcabcs bsdfsdf gdfg dffdgdfg sdfsdfsd sdfdsfsdf"
    pool = Pool()
    print pool.map(count_it, [(testData, key) for key in ["a", "b", "s", "d"]])
Denis Otkidach
That's not what he wants, because in theory testData will be very big - and this method results in it being pickled (requiring extra memory) and copied to each process (requiring at least n x the original storage).
thrope
@thrope: you are right, that's why I mentioned both possible ways. An example using a global variable should be obvious, so there is no need to list it.
Denis Otkidach
@Denis - yep, but unfortunately the global method doesn't work on Windows - it relies on fork and Unix copy-on-write. If he uses the global method on Windows, multiprocessing will pickle the data and send it to each child subprocess - again requiring much more memory.
thrope