I have a Python script that does something along the lines of:
def MyScript(input_filename1, input_filename2):
    val = 0.0  # placeholder: some very expensive computation on the two input files
    return val
i.e. for every pair of inputs, I calculate some float value. Note that val is just a plain double/float.
Since this computation is very intensive, I will be running it across many processes (possibly on the same computer, possibly spread over multiple computers).
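On a single machine, I dispatch the pairs with something like this (the worker function and the pairs list are illustrative, not my real code):

from multiprocessing import Pool

def worker(pair):
    # unpack one pair of input filenames and compute its value
    f1, f2 = pair
    return (f1, f2, MyScript(f1, f2))

if __name__ == "__main__":
    pairs = [("a.txt", "b.txt"), ("c.txt", "d.txt")]  # placeholder pairs
    with Pool() as pool:
        results = pool.map(worker, pairs)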
What I did before was write each value to its own text file, input1_input2.txt, but then I end up with 1,000,000 files that I need to reduce into one. That step is slow, since most filesystems handle directories with that many files poorly.
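To be concrete, the current approach is roughly this (directory and file names are just for illustration):

import glob
import os

RESULTS_DIR = "results"  # one tiny file per pair ends up here

def write_result(input1, input2, val):
    # each process writes its single float to its own file
    path = os.path.join(RESULTS_DIR, "%s_%s.txt" % (input1, input2))
    with open(path, "w") as f:
        f.write(repr(val))

def reduce_results(out_path="combined.txt"):
    # the slow step: open ~1,000,000 small files and concatenate them
    with open(out_path, "w") as out:
        for path in glob.glob(os.path.join(RESULTS_DIR, "*.txt")):
            name = os.path.splitext(os.path.basename(path))[0]
            with open(path) as f:
                out.write("%s\t%s\n" % (name, f.read().strip()))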
How do I efficiently get all this data onto one computer? Perhaps by running MongoDB on one machine and having all the processes send their results to it?
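Something like this is what I have in mind, assuming pymongo and a database/collection I would create for the purpose (the host and names are hypothetical):

from pymongo import MongoClient

client = MongoClient("collector-host", 27017)  # hypothetical central machine
results = client.mydb.results                  # hypothetical db and collection

def report(input1, input2, val):
    # each worker process pushes one tiny document to the central server
    results.insert_one({"input1": input1, "input2": input2, "val": val})

def dump(out_path="combined.txt"):
    # later, on the collector, write everything out as one file
    with open(out_path, "w") as out:
        for doc in results.find():
            out.write("%s\t%s\t%s\n" % (doc["input1"], doc["input2"], doc["val"]))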
I want something easy. I know I could do this with MPI, but that seems like overkill for such a simple task.