I have a Python script that does something along the lines of:

def MyScript(input_filename1, input_filename2):
    # ... some very CPU-intensive computation producing val ...
    return val

i.e. for every pair of inputs, I calculate some float value. Note that val is a simple double/float.

Since this computation is very intensive, I will be running these computations across different processes (they might be on the same computer, or spread over multiple computers).

What I did before was output each value to a text file named input1_input2.txt. That leaves me with 1,000,000 files that I then need to reduce into one file. This process is not very fast, since the OS doesn't handle folders with that many files well.

How do I efficiently get all this data onto one single computer? Perhaps by having MongoDB running on one machine and having all the processes send their data to it?

I want something easy. I know that I could do this with MPI, but I think it is overkill for such a simple task.

A: 

You could run one program that collects the outputs, for example over XML-RPC.
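
A minimal sketch of that idea (my own code, not leoluk's; the host name and port are placeholders), using the standard library's xmlrpc modules: a collector process exposes a single submit() function, and every worker pushes its result to it.

from xmlrpc.server import SimpleXMLRPCServer

results = {}

def submit(input_filename1, input_filename2, val):
    # Store one (input pair -> value) result sent by a worker.
    results[(input_filename1, input_filename2)] = val
    return True

server = SimpleXMLRPCServer(('0.0.0.0', 8000), allow_none=True)
server.register_function(submit)
server.serve_forever()

Each worker would then do something like:

import xmlrpc.client

proxy = xmlrpc.client.ServerProxy('http://collector-host:8000', allow_none=True)
proxy.submit(input_filename1, input_filename2, MyScript(input_filename1, input_filename2))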

leoluk
+1  A: 

You can use Python's parallel processing support.

Specifically, I would mention NetWorkSpaces.
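
NetWorkSpaces is a third-party package, so as a standard-library alternative in the same spirit (my sketch, not pyfunc's suggestion, and it only covers the single-machine case) a multiprocessing pool can gather the return values directly in the parent process, with no intermediate files at all:

from multiprocessing import Pool

def work(pair):
    # Unpack one (input_filename1, input_filename2) pair and score it.
    f1, f2 = pair
    return MyScript(f1, f2)

if __name__ == '__main__':
    pairs = []  # fill with (input_filename1, input_filename2) tuples
    with Pool() as pool:
        results = pool.map(work, pairs)  # results[i] belongs to pairs[i]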

pyfunc
+1  A: 

You can generate a folder structure of generated subfolders, which in turn contain generated subfolders.

For example, you have a main folder that contains 256 subfolders, and each subfolder contains 256 subfolders; three levels deep will be enough. You can use substrings of GUIDs to generate unique folder names.

So the GUID AB67E4534678E4E53436E becomes folder AB, which contains subfolder 67, and that folder contains E4534678E4E53436E.

Using two substrings of two hex characters each makes it possible to generate 256 * 256 folders. More than enough to store 1 million files.
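
A small sketch of that layout (the function name and the ".txt" suffix are mine, not TTT's): take the first two pairs of hex characters as nested directory names, create them on demand, and drop the result file inside.

import os

def nested_path(root, guid):
    # e.g. 'AB67E4534678E4E53436E' -> root/AB/67/E4534678E4E53436E.txt
    return os.path.join(root, guid[:2], guid[2:4], guid[4:] + '.txt')

path = nested_path('/tmp/results', 'AB67E4534678E4E53436E')
os.makedirs(os.path.dirname(path), exist_ok=True)
with open(path, 'w') as f:
    f.write('0.12345\n')  # the value computed for this input pair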

TTT
+1  A: 

If the inputs have a natural order to them, and each worker can find out "which" input it's working on, you can get away with one file per machine. Since Python floats are 8 bytes long, each worker would write the result to its own 8-byte slot in the file.

import struct

RESULT_FORMAT = 'd' # Double-precision float.
RESULT_SIZE = struct.calcsize(RESULT_FORMAT)
RESULT_FILE = '/tmp/results'

def worker(position, input_filename1, input_filename2):
    val = MyScript(input_filename1, input_filename2)
    # RESULT_FILE must already exist (create and size it once up front),
    # since 'rb+' refuses to open a missing file.
    with open(RESULT_FILE, 'rb+') as f:
        f.seek(RESULT_SIZE * position)  # jump to this worker's 8-byte slot
        f.write(struct.pack(RESULT_FORMAT, val))

Compared to writing a bunch of small files, this approach should also be a lot less I/O intensive, since many workers will be writing to the same pages in the OS cache.

(Note that on Windows, you may need some additional setup to allow sharing the file between processes.)
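
As a companion sketch (my addition, reusing the constants above; NUM_RESULTS is a placeholder for however many input pairs you have), you would pre-size the results file once before starting the workers and unpack every slot when they finish:

NUM_RESULTS = 1000000

def create_results_file():
    # Reserve one fixed-size slot per expected result.
    with open(RESULT_FILE, 'wb') as f:
        f.truncate(RESULT_SIZE * NUM_RESULTS)

def read_results():
    # Returns a list of floats, indexed by each worker's position.
    with open(RESULT_FILE, 'rb') as f:
        data = f.read()
    return list(struct.unpack(str(NUM_RESULTS) + RESULT_FORMAT, data))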

dhaffey