tags:
views: 297
answers: 1

+4  A:

Sounds like you'll want to look at PyProcessing, included in Python 2.6 and later as the multiprocessing module. It takes care of a lot of the machinery of dealing with multiple processes.
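For instance, a pool of worker processes can spread work across cores with very little machinery. This is just a minimal sketch; `handle_job` is a hypothetical stand-in for whatever work you're distributing:

```python
from multiprocessing import Pool

def handle_job(n):
    """Stand-in for a CPU-bound task."""
    return n * n

def run_jobs(jobs, workers=4):
    # Pool distributes the jobs across worker processes and
    # collects the results in order.
    with Pool(processes=workers) as pool:
        return pool.map(handle_job, jobs)

if __name__ == "__main__":
    print(run_jobs(range(10)))
```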

An alternative architectural model is to set up a work queue using something like beanstalkd and have each of the "servers" pull jobs from the queue. That way you can add servers as you wish, swap them out, etc., without having to worry about registering them with the manager (this assumes the work you're spreading over the servers can be quantified as "jobs").
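The pull model looks roughly like this. Since beanstalkd itself requires a running daemon, the sketch below uses multiprocessing's `Queue` as a local stand-in; with beanstalkd the put/reserve calls would go over the network instead, and the doubling "work" is purely illustrative:

```python
from multiprocessing import Process, Queue

def worker(jobs, results):
    # Each "server" pulls jobs until it sees the shutdown sentinel,
    # mirroring a beanstalkd consumer's reserve/delete loop.
    while True:
        job = jobs.get()
        if job is None:
            break
        results.put((job, job * 2))  # hypothetical work: double the value

def run(num_workers=3, num_jobs=9):
    jobs, results = Queue(), Queue()
    procs = [Process(target=worker, args=(jobs, results))
             for _ in range(num_workers)]
    for p in procs:
        p.start()
    for j in range(num_jobs):
        jobs.put(j)
    for _ in procs:      # one sentinel per worker so all of them exit
        jobs.put(None)
    out = dict(results.get() for _ in range(num_jobs))
    for p in procs:
        p.join()
    return out

if __name__ == "__main__":
    print(run())
```

Adding capacity is then just starting another worker process against the same queue; nothing else needs to know about it.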

Finally, it may be worthwhile to build the whole thing on HTTP and take advantage of existing, well-known, and highly scalable load-distribution mechanisms such as nginx. If you can make the communication HTTP-based, you'll be able to use lots of off-the-shelf tools to handle most of what you describe.
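As a sketch of what that buys you, a hypothetical nginx config that round-robins requests across three HTTP "job servers" (the addresses and ports are placeholders):

```nginx
upstream job_servers {
    server 10.0.0.1:8000;
    server 10.0.0.2:8000;
    server 10.0.0.3:8000 backup;  # spare, used only when the others are down
}

server {
    listen 80;
    location / {
        # nginx handles distribution, failover, and connection pooling
        proxy_pass http://job_servers;
    }
}
```

Adding or removing a server is then a one-line config change rather than custom registration logic.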

Parand