views: 66
answers: 2

I had to do a heavy I/O-bound operation, i.e. parsing large files and converting them from one format to another. Initially I did this serially, i.e. parsing one file after another. Performance was very poor (it used to take 90+ seconds). So I decided to use threading to improve performance. I created one thread for each file (4 threads).

import threading

ts = []
for file in file_list:
    # args must be a tuple, otherwise the filename string gets unpacked
    t = threading.Thread(target=self.convertfile, args=(file,))
    t.start()
    ts.append(t)
for t in ts:
    t.join()

But to my astonishment, there is no performance improvement whatsoever. It still takes around 90+ seconds to complete the task. Since this is an I/O-bound operation, I expected threading to improve performance. What am I doing wrong?

+2  A: 

Threading allows the OS to allocate more CPU cores to your program. If it's I/O bound, that means the speed is limited by the I/O subsystem rather than by CPU speed. In that case, allocating more CPU cores doesn't necessarily help - you're still waiting on the I/O subsystem.

MSalters
But I believe a thread switch happens when a thread is waiting on the I/O subsystem, doesn't it? So I am doing things in parallel now, which means I can expect some performance improvement?
kumar
Threading in Python does not allocate more CPU cores to the program.
detly
@kumar: As the response says, if you're I/O bound then your I/O is already going as hard as it can; more CPU time or parallel processing isn't going to make the I/O finish any earlier.
Josh
+3  A: 

Under the usual Python interpreter, threading will not allocate more CPU cores to your program because of the global interpreter lock (aka. the GIL).

The multiprocessing module could help you out here. (Note that it was introduced in Python 2.6, but backports exist for Python 2.5.)

As MSalters says, if your program is I/O bound it's debatable whether this is useful. But it might be worth a shot :)

To achieve what you want using this module:

import multiprocessing

MAX_PARALLEL_TASKS = 8 # I have an Intel Core i7 :)

pool = multiprocessing.Pool(MAX_PARALLEL_TASKS)

# Queue one conversion task per file; they run in separate worker processes.
pool.map_async(convertfile, filelist)

# No more tasks will be submitted; wait for the workers to finish.
pool.close()
pool.join()

Important! The function that you pass to map_async must be pickleable. In general, instance methods are NOT pickleable unless you engineer them to be so! Note that convertfile above is a plain function.
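
For illustration, a minimal sketch of that point (the Converter class, its convertfile method and the file names are hypothetical stand-ins, not from the question): a plain module-level function pickles cleanly, so it can be handed to the pool, and it can build the instance inside the worker process.

import multiprocessing

class Converter(object):
    """Hypothetical stand-in for the asker's converter class."""
    def convertfile(self, path):
        # ... parse `path` and write the converted output here ...
        return path

# Module-level functions pickle cleanly; bound methods such as
# Converter().convertfile generally do not (on Python 2), so wrap the
# call and create the instance inside the worker process instead.
def convert_one(path):
    return Converter().convertfile(path)

if __name__ == '__main__':
    file_list = ['a.xml', 'b.xml', 'c.xml', 'd.xml']  # made-up names
    pool = multiprocessing.Pool(4)
    results = pool.map(convert_one, file_list)
    pool.close()
    pool.join()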

If you actually need to get results back from convertfile, there are ways to do that as well. The examples on the multiprocessing documentation page should clarify.
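
As a rough sketch of that (building on the snippet above, so pool, convertfile and filelist are assumed to already exist): map_async returns an AsyncResult, and its get() hands back the return values in input order.

async_result = pool.map_async(convertfile, filelist)
pool.close()
pool.join()
results = async_result.get()  # list of convertfile return values, in input order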

detly
Thanks detly. But the multiprocessing module has its own problems. 1) I have to refactor my code, as I can't use instance methods. 2) I have an instance method which holds many file handles. Those file handles are closed in the child processes, which is not acceptable, so I need to open them again. Unfortunately I have no way to know which ones they are, as they are passed during instantiation.
kumar
It doesn't have to be the conversion function itself that is performed in a separate process. Is there any way you can do the instantiation part in separate processes? Eg. write a function or even a separate script that does a single instantiation and conversion; then write a "master script" that uses the multiprocessing module to run these functions. Separate scripts can be run using the [subprocess](http://docs.python.org/library/subprocess.html) module. If there is a lot of shared data, then yes, that's where multiprocessing gets complicated. But there are a lot more tools in that module :)
detly
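
A rough sketch of that "master script" idea using the subprocess module (the script name convert_one.py and the file names are hypothetical; that script would be expected to do a single instantiation and a single conversion for the path given on its command line):

# master script (sketch)
import subprocess

file_list = ['a.xml', 'b.xml', 'c.xml', 'd.xml']

# Launch one worker process per input file, then wait for them all.
procs = [subprocess.Popen(['python', 'convert_one.py', path])
         for path in file_list]
for p in procs:
    p.wait()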