Problem: I have written a class that plays a large 300 x 300 px image-sequence animation of 100 frames. The class has a -start method that kicks off the animation, and an -animate: method that walks through the frames. At every frame it fetches the big chunk of bitmap data from a PNG, wraps it in a UIImage, and assigns that to a UIImageView's image property. It then calls a delayed selector on itself to advance to the next frame:

[self performSelector:@selector(animate:) withObject:nil afterDelay:delay];
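For context, the -animate: method described above presumably looks roughly like this (the frame-naming scheme, the ivar names, and the frame count are my guesses, not the original code):

```objc
- (void)animate:(id)sender
{
    // Fetch this frame's bitmap data from its PNG and wrap it in a UIImage.
    NSString *name = [NSString stringWithFormat:@"frame_%03d", currentFrame];
    NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
    imageView.image = [UIImage imageWithContentsOfFile:path];

    // Schedule the next frame via the current (main) run loop.
    if (++currentFrame < 100) {
        [self performSelector:@selector(animate:) withObject:nil afterDelay:delay];
    }
}
```

Note that everything here, including the PNG decode, runs on the main thread, which is consistent with the UI freeze described below.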

The animation itself is smooth, but the whole UI freezes up until it has finished. On the iPhone Simulator the UI does not freeze, so I assume the CPU on the device is running at 100% while performing this animation.

Now the question is: What kind of multithreading-technique or strategy could help out in a situation like this?

Should I start looking at POSIX? Or any other API / Library / Framework? Or should I rely on the NSObject methods for creating threads or use NSThread for that?

I tried putting the whole animation into a new NSThread, but that didn't help at all. As far as I know, all UI code must run on the main thread, so updating the image property of the UIImageView during the animation has to happen there. That means the only place where I could really "save time" for UI responsiveness is when fetching the bitmap data out of the PNG? Or is there something else that would help?

A: 

I would start by checking for leaks using the Leaks instrument. Also look at ObjectAlloc to verify that nothing too crazy is happening.

iPhoneBoy
+2  A: 

Ola,

You mentioned putting the whole thing in a thread. That's part of the problem. The links below go into different aspects of updating the user interface from a thread.

In short: your main thread starts a child thread; the child thread performs the extraction and other calculations; the child thread then calls a method 'on the main thread' using performSelectorOnMainThread:withObject:waitUntilDone: (look for something like that in the pages I link to below); the method called that way updates the UI; the child thread sleeps for some amount of time; and the child thread keeps on tickin'.
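A sketch of that pattern, assuming the PNG decoding is what eats the time (all names other than the Foundation/UIKit calls are my own, illustrative choices):

```objc
- (void)start
{
    [NSThread detachNewThreadSelector:@selector(animationThread)
                             toTarget:self
                           withObject:nil];
}

- (void)animationThread
{
    for (NSUInteger i = 0; i < frameCount; i++) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        // Heavy lifting (reading and decoding the PNG) stays on the child thread.
        UIImage *frame = [self loadFrameAtIndex:i]; // hypothetical helper
        // Only the cheap property assignment crosses to the main thread;
        // the perform mechanism retains 'frame' until the selector has run.
        [self performSelectorOnMainThread:@selector(displayFrame:)
                               withObject:frame
                            waitUntilDone:NO];
        [NSThread sleepForTimeInterval:delay];
        [pool release];
    }
}

- (void)displayFrame:(UIImage *)frame
{
    imageView.image = frame; // runs on the main thread
}
```

The per-iteration autorelease pool keeps decoded frames from piling up in memory over the 100-frame run.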

Some useful links:

- 9media.com/blog/?p=195

- forums.macrumors.com/showthread.php?t=683250

- www.xprogress.com/post-36-threading-tutorial-using-nsthread-in-iphone-sdk-objective-c

Most importantly, read the documentation for NSThread.

-isdi-

ISDi
+1  A: 

Before getting too far into multithreading this, I'd recommend running Shark's Time Profile against your application while it's executing on the device (not the simulator). With the appropriate data mining (mostly, charging lower level libraries to their callers), you should be able to quickly see where the hotspot is in your running application. If it's the actual display to the screen, then multithreading may not gain you much. In that case, you might want to investigate more efficient display methods. As a data point, Mo Dejong reported here that he was able to animate 30 480x320 PNGs at 15 FPS on an original iPhone using non-multithreaded code.

When it comes to multithreading, there are two main approaches on the iPhone: manually managed NSThreads and queue-based NSOperations. NSThreads can be simpler to set up (using detachNewThreadSelector:toTarget:withObject: or NSObject's performSelectorInBackground:withObject:), but you have to manually manage their execution and worry a lot about access to shared resources. NSOperations and NSOperationQueues may require a little more code to set up, but they can make things a lot easier for you by coordinating execution order. Additionally, in many cases you can create a single-wide NSOperationQueue for access to some shared resource and avoid expensive locks around that resource.
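For example, a "single-wide" queue of the kind mentioned above can be set up like this (a sketch; processFrame: and frameData are hypothetical):

```objc
// A serial queue: setting the maximum concurrency to 1 makes operations
// execute one at a time, in order, so access to a shared resource funneled
// through this queue needs no explicit lock.
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue setMaxConcurrentOperationCount:1];

NSInvocationOperation *op =
    [[[NSInvocationOperation alloc] initWithTarget:self
                                          selector:@selector(processFrame:)
                                            object:frameData] autorelease];
[queue addOperation:op];
```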

I've moved almost all of my multithreaded code to NSOperation over the last year, and seen significant performance benefits (albeit, most of this code is on the Mac). For example, I had a similar task where I needed to grab a frame from a CCD camera, process the frame, and display it to the screen. I split this into three NSOperationQueues, each a single operation wide. One queue contained operations that pulled the frame from the camera and inserted a processing operation into the second queue. Once the processing operation was finished, an operation was created for updating the display and inserted into the third queue. I found that the overhead of creating a new NSOperation for each frame handling task was far outweighed by the performance benefits of not having to lock and unlock certain resources.
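A rough outline of that three-queue pipeline, under the assumption that each stage hands its result to the next by enqueuing a new operation (all names are hypothetical):

```objc
// Each queue is one operation wide, so every stage serializes internally,
// but the stages overlap with one another: a new frame can be captured
// while the previous one is still being processed or displayed.
- (void)captureFrame
{
    id rawFrame = [self grabFrameFromCamera]; // stage 1 (hypothetical camera API)
    [processQueue addOperation:
        [[[NSInvocationOperation alloc] initWithTarget:self
                                              selector:@selector(processFrame:)
                                                object:rawFrame] autorelease]];
}

- (void)processFrame:(id)rawFrame
{
    id processed = [self process:rawFrame];   // stage 2: the expensive work
    [displayQueue addOperation:
        [[[NSInvocationOperation alloc] initWithTarget:self
                                              selector:@selector(displayFrame:)
                                                object:processed] autorelease]];
    // Stage 3 (displayFrame:) would still need to hop to the main thread,
    // e.g. via performSelectorOnMainThread:, for any actual UI update.
}
```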

Brad Larson