If I wanted to distribute Python work across multiple processors on multiple computers, what would my best approach be? If I have three eight-core servers, that would mean running 24 Python processes. I would be using the multiprocessing library, and to share objects it looks like the best idea would be to use a manager. I want all the nodes to work together as one big process, so one manager would be ideal, but that would make the manager a single point of failure. Is there a better solution? Would replicating the manager's object store be a good idea?
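For concreteness, here is a minimal sketch of the setup I have in mind, using `multiprocessing.managers.BaseManager` with one manager node serving a shared queue to workers on the other machines (the port `50000`, the authkey, and the filename `nodes.py` are placeholders I made up):

```python
import sys
import queue
from multiprocessing.managers import BaseManager

# The queue lives in the manager's server process; workers get proxies to it.
task_queue = queue.Queue()

class QueueManager(BaseManager):
    pass

def serve():
    # Register a callable that returns the shared queue, then serve forever.
    QueueManager.register('get_task_queue', callable=lambda: task_queue)
    manager = QueueManager(address=('', 50000), authkey=b'changeme')
    manager.get_server().serve_forever()

def work(host):
    # Workers register the name only; the real object stays on the server.
    QueueManager.register('get_task_queue')
    manager = QueueManager(address=(host, 50000), authkey=b'changeme')
    manager.connect()
    tasks = manager.get_task_queue()
    while True:
        job = tasks.get()            # blocks until the manager hands out a job
        print('processing', job)     # placeholder for the real work

if __name__ == '__main__':
    if sys.argv[1] == 'serve':
        serve()
    else:
        work(sys.argv[1])            # argument is the manager's hostname
```

I would run `python nodes.py serve` on the manager host and `python nodes.py <manager-host>` in each of the 24 worker processes, which is exactly where the single point of failure comes from.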
Also, if the manager will be doing all of the database querying, would it make sense to run it on the same machine as the database?