Hello,

Keeping the GUI responsive while the application does some CPU-heavy processing is one of the challenges of effective GUI programming.

Here's a good discussion of how to do this in wxPython. To summarize, there are 3 ways:

  1. Use threads
  2. Use wxYield
  3. Chunk the work and do it in the IDLE event handler

Which method have you found to be the most effective? Techniques from other frameworks (like Qt, GTK or the Windows API) are also welcome.
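
For reference, option 2 boils down to calling wx.Yield() every so often from inside the long computation so that pending paint and input events get processed. A minimal sketch of that approach (the per-iteration work function is just a placeholder):

    import wx

    class MainFrame(wx.Frame):
        def __init__(self):
            super(MainFrame, self).__init__(None, title="Yield demo")
            panel = wx.Panel(self)
            self.gauge = wx.Gauge(panel, range=100, pos=(10, 10), size=(200, 25))
            self.button = wx.Button(panel, label="Start", pos=(10, 45))
            self.button.Bind(wx.EVT_BUTTON, self.on_start)

        def on_start(self, event):
            self.button.Disable()            # Yield can re-enter handlers, so block re-clicks
            for i in range(100):
                do_chunk_of_work(i)          # placeholder for one slice of the real job
                self.gauge.SetValue(i + 1)
                wx.Yield()                   # let pending GUI events run
            self.button.Enable()

    def do_chunk_of_work(i):
        # Stand-in for a small piece of the CPU-heavy task.
        sum(x * x for x in range(50000))

    if __name__ == "__main__":
        app = wx.App(False)
        MainFrame().Show()
        app.MainLoop()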

+12  A: 

Threads. They're what I always go for, because they're available in just about every framework you'll need.

And once you're used to multi-threading and parallel processing in one language/framework, the same skills carry over to all the others.
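
In wxPython specifically, the usual shape of this is a worker thread plus wx.CallAfter (or wx.PostEvent) to hand results back to the GUI thread, since widgets must only be touched from the main thread. A rough sketch, with compute() standing in for the real work:

    import threading
    import wx

    class MainFrame(wx.Frame):
        def __init__(self):
            super(MainFrame, self).__init__(None, title="Worker thread demo")
            panel = wx.Panel(self)
            self.label = wx.StaticText(panel, label="Idle", pos=(10, 10))
            button = wx.Button(panel, label="Start", pos=(10, 40))
            button.Bind(wx.EVT_BUTTON, self.on_start)

        def on_start(self, event):
            self.label.SetLabel("Working...")
            # Daemon thread so it won't keep the process alive on exit.
            threading.Thread(target=self.worker, daemon=True).start()

        def worker(self):
            result = compute()                             # CPU-heavy placeholder
            # Never touch widgets here; marshal the update back to the GUI thread.
            wx.CallAfter(self.label.SetLabel, "Done: %s" % result)

    def compute():
        return sum(x * x for x in range(10 ** 6))

    if __name__ == "__main__":
        app = wx.App(False)
        MainFrame().Show()
        app.MainLoop()

Bear in mind that in CPython the GIL means a background thread keeps the GUI responsive, but pure-Python number crunching still won't spread across cores.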

Oli
Except on frameworks that don't have real threading (BREW).
Airsource Ltd
A: 

Working with Qt/C++ for Win32.

We divide the major work units into different processes. The GUI runs as a separate process and can command/receive data from the "worker" processes as needed. Works nicely in today's multi-core world.
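
This answer is about Qt/C++, but the same split carries over to Python with the multiprocessing module: the GUI process sends commands to a worker process over queues and polls for results. A rough, framework-agnostic sketch of that arrangement (the command names are invented for illustration):

    import multiprocessing as mp

    def worker_loop(cmd_queue, result_queue):
        """Runs in a separate process: receives commands, sends back results."""
        while True:
            cmd, payload = cmd_queue.get()
            if cmd == "quit":
                break
            elif cmd == "square":
                result_queue.put(("square", payload * payload))

    if __name__ == "__main__":
        cmd_q, res_q = mp.Queue(), mp.Queue()
        worker = mp.Process(target=worker_loop, args=(cmd_q, res_q), daemon=True)
        worker.start()

        # In a real GUI this would be driven by button handlers, with a timer or
        # idle handler polling res_q via get_nowait() so the event loop never blocks.
        cmd_q.put(("square", 21))
        print(res_q.get())            # ('square', 441)

        cmd_q.put(("quit", None))
        worker.join()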

Jesse
+1  A: 

Threads or processes, depending on the application. Sometimes it's actually best to have the GUI be its own program and just send asynchronous calls to other programs when it has work to do. You'll still end up having multiple threads in the GUI to monitor for results, but it can simplify things if the work being done is complex and not directly connected to the GUI.
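
One way to do that from a Python GUI is to launch the external program with subprocess and let a small monitor thread wait on it, reporting back to the GUI thread when it finishes. A sketch under those assumptions (the command line and the on_job_done handler are hypothetical):

    import subprocess
    import threading
    import wx

    def run_external(frame, argv):
        """Launch an external program and notify the GUI when it finishes."""
        def monitor():
            proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, err = proc.communicate()        # blocks in this thread only
            wx.CallAfter(frame.on_job_done, proc.returncode, out)
        threading.Thread(target=monitor, daemon=True).start()

    # Typical use from a button handler, assuming the frame defines on_job_done():
    #     run_external(self, ["python", "crunch_numbers.py", "input.dat"])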

tloach
A: 

This answer doesn't apply to the OP's question regarding Python, but is more of a meta-response.

The easy way is threads. However, not every platform has pre-emptive threading (e.g. BREW, some other embedded systems). If possible, simply chunk the work and do it in the IDLE event handler.

Another problem with using threads in BREW is that BREW doesn't clean up C++ stack objects, so it's way too easy to leak memory if you simply kill the thread.
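
For completeness, the IDLE-handler approach mentioned above looks roughly like this in wxPython: keep the work as a sequence of small steps, do one step per EVT_IDLE, and call event.RequestMore() while work remains (the step function is a placeholder):

    import wx

    class MainFrame(wx.Frame):
        def __init__(self):
            super(MainFrame, self).__init__(None, title="Idle chunking demo")
            self.work = iter(range(1000))        # pending work, one chunk per item
            self.Bind(wx.EVT_IDLE, self.on_idle)

        def on_idle(self, event):
            try:
                chunk = next(self.work)
            except StopIteration:
                return                           # all done; stop requesting idle events
            do_one_chunk(chunk)                  # placeholder for a small slice of work
            event.RequestMore()                  # ask for another idle event right away

    def do_one_chunk(i):
        sum(x * x for x in range(20000))

    if __name__ == "__main__":
        app = wx.App(False)
        MainFrame().Show()
        app.MainLoop()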

Airsource Ltd
A: 

I use threads so the GUI's main event loop never blocks.

Corey Goldberg
A: 

For some types of operations, using separate processes makes a lot of sense. Back in the day, spawning a process incurred a lot of overhead. With modern hardware this overhead is hardly even a blip on the screen. This is especially true if you're spawning a long-running process.

One (arguable) advantage is that it's a simpler conceptual model than threads that might lead to more maintainable code. It can also make your code easier to test, as you can write test scripts that exercise these external processes without having to involve the GUI. Some might even argue that is the primary advantage.

In the case of some code I once worked on, switching from threads to separate processes led to a net reduction of over 5000 lines of code while at the same time making the GUI more responsive, the code easier to maintain and test, all while improving the total overall performance.

Bryan Oakley
In what way are processes conceptually simpler than threads? Exchanging data between threads is much simpler than between processes, for example.
Eli Bendersky
And that's exactly why processes are simpler; no shared data. Doing threaded programming correctly (locking, avoiding race conditions, etc) is a Hard Problem, and if you think it's simple You're Doing It Wrong.
Carl Meyer
Well, if you have no shared data, there are no problems with threads either. But sometimes you must have shared data, and sharing it efficiently is difficult to achieve with processes.
Eli Bendersky
+6  A: 

Definitely threads. Why? The future is multi-core. Almost any new CPU has more than one core, and even if it has just one, it may support hyperthreading and thus pretend it has more than one. To make effective use of multi-core CPUs (and Intel is planning to go up to 32 cores in the not-so-distant future), you need multiple threads. If you run everything in one main thread (usually the UI thread is the main thread), users will have CPUs with 8, 16 and one day 32 cores, and your application will never use more than one of them; in other words, it will run much, much slower than it could.

Actually, if I were planning an application nowadays, I would move away from the classical design and think of a master/slave relationship. Your UI is the master; its only task is to interact with the user, that is, displaying data to the user and gathering user input. Whenever your app needs to "process any data" (even small amounts, and more importantly big ones), create a "task" of some kind, forward it to a background thread and let the thread perform the task, providing feedback to the UI (e.g. what percentage it has completed, or just whether the task is still running, so the UI can show a work-in-progress indicator). If possible, split the task into many small, independent sub-tasks and run more than one background worker, feeding one sub-task to each of them. That way your application can really benefit from multi-core and get faster the more cores CPUs have.
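
A sketch of that task/sub-task pattern in Python, using concurrent.futures (in CPython a process pool is the natural way to actually occupy several cores, since the GIL keeps pure-Python threads on one core; the crunch() function and the count of eight sub-tasks are just placeholders):

    import wx
    from concurrent.futures import ProcessPoolExecutor

    def crunch(sub_task):
        # Placeholder for one independent sub-task of the big job.
        return sum(x * x for x in range(sub_task * 100000))

    class MainFrame(wx.Frame):
        def __init__(self):
            super(MainFrame, self).__init__(None, title="Master/slave demo")
            panel = wx.Panel(self)
            self.gauge = wx.Gauge(panel, range=8, pos=(10, 10), size=(220, 25))
            button = wx.Button(panel, label="Start", pos=(10, 45))
            button.Bind(wx.EVT_BUTTON, self.on_start)
            self.pool = ProcessPoolExecutor()    # one worker process per core by default
            self.done = 0

        def on_start(self, event):
            self.done = 0
            self.gauge.SetValue(0)
            for sub_task in range(8):            # split the job into independent pieces
                future = self.pool.submit(crunch, sub_task)
                future.add_done_callback(self.on_sub_task_done)

        def on_sub_task_done(self, future):
            # Runs on a pool-management thread; marshal the progress update to the UI.
            wx.CallAfter(self.advance)

        def advance(self):
            self.done += 1
            self.gauge.SetValue(self.done)

    if __name__ == "__main__":
        app = wx.App(False)
        MainFrame().Show()
        app.MainLoop()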

Actually, companies like Apple and Microsoft are already planning how to make their still mostly single-threaded UIs multithreaded. Even with the approach above, you may one day hit the situation where the UI itself is the bottleneck: the background processes can process data much faster than the UI can present it to the user or ask the user for input. Today many UI frameworks are barely thread-safe, and many are not thread-safe at all, but that will change. Serial processing (doing one task after another) is a dying design; parallel processing (doing many tasks at once) is where the future is heading. Just look at graphics adapters. Even the most modern NVIDIA card has pitiful performance if you only look at the clock speed in MHz/GHz of the GPU. How come it can beat the crap out of CPUs when it comes to 3D calculations? Simple: instead of calculating one polygon point or one texture pixel after another, it calculates many of them in parallel (actually a whole bunch at the same time), and that way it reaches a throughput that still makes CPUs cry. E.g. the ATI X1900 (to name the competitor as well) has 48 shader units!

Mecki
My cell phone is not multi-core, nor will it probably be for some time.
tloach
+1  A: 

Threads - Let's use a simple 2-layer view (GUI, application logic).

The application-logic work should be done in a separate Python thread. For asynchronous events that need to propagate up to the GUI layer, use wx's event system to post custom events. Posting wx events is thread-safe, so you could conceivably do it from multiple contexts.

Working in the other direction (GUI input events triggering application logic), I have found it best to hand-roll a custom event system. Use the Queue module to get a thread-safe way of pushing and popping event objects. Then, for every synchronous member function, pair it with an async version that pushes the sync function object and its parameters onto the event queue.

This works particularly well if only a single application-logic-level operation can be performed at a time. The benefit of this model is that synchronization is simple: each synchronous function works within its own context, sequentially from start to end, without worrying about pre-emption or hand-coded yielding. You will not need locks to protect your critical sections. At the end of the function, post an event to the GUI layer indicating that the operation is complete.
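
A compressed sketch of that queue-based arrangement, with the class and method names invented for illustration (results are posted back with wx.CallAfter here as the simplest stand-in for a custom wx event):

    import queue
    import threading
    import wx

    class AppLogic:
        """Application-logic layer: runs in its own thread, consuming queued commands."""

        def __init__(self, gui_callback):
            self.commands = queue.Queue()
            self.gui_callback = gui_callback      # how results get back to the GUI layer
            threading.Thread(target=self._run, daemon=True).start()

        def _run(self):
            while True:
                func, args = self.commands.get()  # blocks until work arrives
                result = func(*args)              # sync function runs start to end, no locks
                wx.CallAfter(self.gui_callback, result)

        # --- synchronous implementation ---
        def load_file(self, path):
            with open(path) as f:
                return f.read()

        # --- async wrapper the GUI actually calls ---
        def load_file_async(self, path):
            self.commands.put((self.load_file, (path,)))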

You could scale this to allow multiple application-level threads to exist, but the usual concerns with synchronization will re-appear.

edit - Forgot to mention: the beauty of this is that it is possible to completely decouple the application logic from the GUI code. The modularity helps if you ever decide to use a different framework or provide a command-line version of the app. To do this, you will need an intermediate event dispatcher (application level -> GUI) that is implemented by the GUI layer.

Jeremy Brown
+2  A: 

I think delayedresult is what you are looking for:

http://www.wxpython.org/docs/api/wx.lib.delayedresult-module.html

See the wxPython demo for an example.
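
Roughly, startWorker() runs a function in a background thread and then calls your consumer on the GUI thread with a DelayedResult whose get() returns the value (or re-raises the worker's exception). A small sketch (the work function is a placeholder):

    import wx
    from wx.lib.delayedresult import startWorker

    class MainFrame(wx.Frame):
        def __init__(self):
            super(MainFrame, self).__init__(None, title="delayedresult demo")
            panel = wx.Panel(self)
            self.label = wx.StaticText(panel, label="Idle", pos=(10, 10))
            button = wx.Button(panel, label="Start", pos=(10, 40))
            button.Bind(wx.EVT_BUTTON, self.on_start)

        def on_start(self, event):
            self.label.SetLabel("Working...")
            startWorker(self.on_result, self.work, wargs=(10 ** 6,))

        def work(self, n):
            return sum(x * x for x in range(n))   # placeholder for the real job

        def on_result(self, delayed):
            result = delayed.get()                # re-raises any worker exception
            self.label.SetLabel("Done: %s" % result)

    if __name__ == "__main__":
        app = wx.App(False)
        MainFrame().Show()
        app.MainLoop()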

uhzzre