Update: For anyone wondering what I went with in the end - I divided the result-set into 4 chunks and ran 4 instances of the same program, each with one argument indicating which chunk to process. That did the trick for me. I also considered the PP (Parallel Python) module; though it worked, I preferred running multiple instances of the same program. Please pitch in if this is a horrible implementation! Thanks..
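
For reference, a minimal sketch of that splitting approach (the `fetch_results` helper and the chunk-index argument are hypothetical stand-ins for however you load the result-set and launch the instances; `startProcessing` is the function from the program below):

    import sys

    NUM_CHUNKS = 4  # one chunk per running instance of the program

    def fetch_results():
        '''Hypothetical helper: load and return the full result-set.'''

    if __name__ == '__main__':
        chunk_index = int(sys.argv[1])  # 0..3, passed when launching each instance
        result = fetch_results()
        # Each instance walks the same result-set but only handles its own slice.
        for i, record in enumerate(result):
            if i % NUM_CHUNKS == chunk_index:
                startProcessing(str(record[0]), str(record[1]))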

Following is what my program does. Nothing memory-intensive; it's just serial processing, and boring. Could you help me convert this into something more efficient and exciting? Say I process 1000 records this way - with 4 threads I could get it to run in 25% of the time!

I have read articles on how Python threading can be inefficient if done wrong - even Python's creator says the same. So I am wary, and while I read more about it, I want to see if the bright folks on here can steer me in the right direction. Muchas gracias!

def startProcessing(sernum, name):
    '''
    Runs a bunch of statements and, depending on the result,
    writes to the database (one UPDATE statement).

    Contains try/except blocks which, upon failure,
    call this function again until the request succeeds.
    '''

for record in result:
    startProcessing(str(record[0]), str(record[1]))
+1  A: 

Python threads can't run Python code simultaneously due to the Global Interpreter Lock; you want separate processes instead. Look at the multiprocessing module.
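
For example, a minimal sketch using a multiprocessing.Pool to fan your loop out over worker processes (this assumes the `startProcessing` function and `result` list from your question; `processes=4` is an assumption matching the 4-way split in your update):

    from multiprocessing import Pool

    def worker(record):
        # Unpack one record and delegate to the existing function.
        startProcessing(str(record[0]), str(record[1]))

    if __name__ == '__main__':
        pool = Pool(processes=4)  # number of worker processes
        pool.map(worker, result)  # distributes records across the workers
        pool.close()
        pool.join()

Note that since startProcessing writes to a database, each worker process will need its own database connection; connections generally can't be shared across processes.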

(I was instructed to post this as an answer =p.)

katrielalex