I have written a Python script that watches a directory for new subdirectories and then acts on each one in a loop. An external process creates these subdirectories. Inside each subdirectory is a text file and a number of images, with one record (line) in the text file per image. For each subdirectory, my script scans the text file and then calls a few external programs: one detects blank images (a custom exe), then "mogrify" (part of ImageMagick) resizes and converts the images, and finally 7-Zip packages all of the converted images and the text file into a single archive.
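To make the question concrete, here is a rough sketch of how the per-subdirectory step could be factored into one function. The record-file name, image extensions, tool flags, and the name detect_blanks.exe are placeholders for illustration, not my actual script:

```python
# Sketch: build the external command lines for one subdirectory.
# File names, extensions, and flags below are assumptions.
import os

def build_commands(subdir):
    """Return the external command lines to run for one subdirectory."""
    record_file = os.path.join(subdir, 'records.txt')   # assumed file name
    images = sorted(os.path.join(subdir, f)
                    for f in os.listdir(subdir)
                    if f.lower().endswith(('.jpg', '.png')))
    return [
        ['detect_blanks.exe'] + images,                            # custom blank detector
        ['mogrify', '-resize', '50%', '-format', 'jpg'] + images,  # resize and convert
        ['7z', 'a', subdir + '.7z', record_file] + images,         # package into one archive
    ]
```

Each command in the returned list would then be run with subprocess.call(cmd), one after the other.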
The script runs fine, but it is currently sequential, looping over the subdirectories one at a time. This seems like a good opportunity for multiprocessing, since it runs on a dual-CPU machine (8 cores total).
The processing of a given subdirectory is independent of all the others; each one is self-contained.
Currently I just build a list of subdirectories with a call to os.listdir() and loop over that list. I figure I could move all of the per-subdirectory code (conversions, etc.) into a separate function and then somehow create a separate process to handle each subdirectory. Since I am somewhat new to Python, suggestions on how to approach this kind of multiprocessing would be appreciated. I am on Vista x64 running Python 2.6.