Preface: This has become quite a long post. While I'm not new to programming, I have close to zero experience with threading and could use some help here...

This is probably a common problem that could be described in fewer words, but I'm a bit overwhelmed by it.

Some background first...

I'm writing code for the iPhone, where I run into performance problems due to the slowness of the device. I'm trying optimizations to keep the UI snappy, and currently I'm looking into adding some threading.

Imagine this: I have a large database that the user can search. To search, the user switches to a specific view, where he gets an edit box to enter his search text. Every time the user types a character in the search box, a search is run synchronously and the results are presented right away.

Originally, the data was in a sqlite db. There, a search could start instantly (no preparation needed), but each search took several seconds, making the UI feel sluggish even when I ran the search in a thread and updated the results list only once the search was finished, seconds later.

So I changed the search code to be much, much faster; a search now takes less than a tenth of a second, which means there is no longer any delay during search input.

Problem is that for the search to be that fast, I need to do some lengthy preparation before I can start searching. This preparation takes 1-2 seconds, and it creates a large number of objects in memory that I do not want to keep around when they're not needed.

So I run the preparation in a thread while the search view is appearing. Most of that is animated, so the preparation can do its work in the meantime without the user even noticing.

And when the search view is unloaded, I need to release the cache again. This, too, takes a while (about half a second on the latest models), so I'd like to perform it in a thread as well, as otherwise switching to another view would have a noticeable delay.

None of this looks difficult at first sight. I have two functions to prepare and release the cache, which look like this:

- (void) internalPrepareCache
{
    NSAutoreleasePool *pool = nil;
    if (![NSThread isMainThread]) pool = [[NSAutoreleasePool alloc] init];
    [cacheLock lock];
    if (!cacheReady) {
        // ... Load Data Cache ... (can take a while)
        cacheReady = YES;
    }
    [cacheLock unlock];
    [pool release];
}

- (void) internalReleaseCache
{
    NSAutoreleasePool *pool = nil;
    if (![NSThread isMainThread]) pool = [[NSAutoreleasePool alloc] init];
    [cacheLock lock];
    if (cacheReady) {
        // ... Release Data Cache ... (can take a while)
        cacheReady = NO;
    }
    [cacheLock unlock];
    [pool release];
}

Then there are the functions that get invoked by the view controller, from the main thread:

// this gets called by the view controller when loaded:
- (void) threadedPrepareCache
{
    [NSThread detachNewThreadSelector:@selector(internalPrepareCache) toTarget:self withObject:nil];
}

// this gets called by the view controller upon unload:
- (void) threadedReleaseCache
{
    [NSThread detachNewThreadSelector:@selector(internalReleaseCache) toTarget:self withObject:nil];
}

// this gets called by the view controller to perform a search
- (void) searchUsingCache:...
{
    [self internalPrepareCache];
    // ... Perform the search ...
}

As the code shows, I am using a global NSLock object, wrapping lock and unlock calls around both the cache-preparation and the cache-release code. I also have a global state variable that tells whether the cache is ready.

It gets complicated because of a few special situations:

1.) The user could very quickly switch the search view in and out repeatedly. This could queue up several cache-preparation and cache-release operations, to the point where a cache starts being prepared even though the user's last action was to dismiss the search. I'd like to avoid that.

2.) If the user is fast (or the iPhone very slow) and enters a search before the cache-preparation thread has finished, the non-threaded search needs to wait for the cache to become ready. I am worried that this, too, could conflict with the queued-up actions from (1).

3.) I did the following test:

[self threadedPrepareCache];
[self threadedReleaseCache];
[self threadedPrepareCache];
[self threadedReleaseCache];

This test shows that the order I intended is not followed: the release happens first (when there is nothing to release yet). This is an extreme example, but it tells me that the situations above might not be handled correctly yet, and I might well end up with a final preparation when the last call was actually meant to release the cache.

How do I solve this?

I am thinking of having another global variable that holds the currently desired cache state: it gets set by the main thread's functions that request the cache to be prepared or discarded. The lock-protected code in both threaded functions then checks the currently desired state and acts accordingly. This would prevent the needless runaround of situation (1), right? But how do I make sure there is no race condition around this? There's no need to put a lock around setting this desired-state variable, is there?
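Roughly, I imagine the prepare function turning into something like the sketch below (cacheWantedByUI is just a placeholder name for that desired-state variable):

// Sketch of the idea only: cacheWantedByUI is a hypothetical flag that the
// main thread sets to YES when the search view loads and to NO when it unloads.
- (void) internalPrepareCache
{
    NSAutoreleasePool *pool = nil;
    if (![NSThread isMainThread]) pool = [[NSAutoreleasePool alloc] init];
    [cacheLock lock];
    // Check the *currently* desired state, not the state at the time
    // this thread was detached.
    if (cacheWantedByUI && !cacheReady) {
        // ... Load Data Cache ... (can take a while)
        cacheReady = YES;
    }
    [cacheLock unlock];
    [pool release];
}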

And do I have to worry about (2)? Currently, the search function simply always calls internalPrepareCache synchronously (from the main thread) and thus waits for the cache to be ready. Is that safe?

+1  A: 

Apologies if this is off base, even though I read through your post several times.

I think what you'd want to strive for is the following:

  • Keep the cache prep/teardown work OFF the main thread, because the UI work pretty much has to happen on the main thread and you don't want to bog that down.

  • Be able to signal the cache-prep thread that it needs to abort before completion because a fickle user decided to go back, whatever... and have that prep thread take responsibility for cleaning up after itself when it is aborted, either by invoking your existing cleanup code on the same thread or by doing the cleanup work directly.

So try using [myObj performSelectorInBackground:...] to start the cache preparation, and have that threaded method monitor a property which you can set from your UI code to tell it to stop/abort when the view is unloading. When unwinding (releasing the cache), also perform that in the background, but take into account one way or another whether the setup was aborted and will clean up after itself.
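Very roughly, something along these lines (the abortRequested property and the method names are just placeholders, not a finished design):

// Rough sketch only: abortRequested is assumed to be an atomic BOOL property.
- (void) startCachePreparation
{
    self.abortRequested = NO;
    [self performSelectorInBackground:@selector(prepareCacheInBackground)
                           withObject:nil];
}

- (void) prepareCacheInBackground
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    BOOL finished = NO;
    while (!finished && !self.abortRequested) {
        // ... build the next chunk of the cache,
        //     set finished = YES once everything is loaded ...
        finished = YES; // placeholder so the sketch terminates
    }
    if (self.abortRequested) {
        // Aborted: this thread cleans up whatever it built so far
        // before exiting, so the UI never has to wait for it.
        // ... release the partially built cache ...
    }
    [pool release];
}

// Called from the UI (main thread) when the view is about to unload.
- (void) cancelCachePreparation
{
    self.abortRequested = YES;
}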

Ultimately, the expensive stuff should be off the main thread, with suitable signaling in place between the UI and the support thread(s).

wkw
First, thank you for trying hard to understand my post :) After reading Apple's docs on "Operation Queues", I, too, realized that I'll need a way to cancel the lengthy preparation code.
Thomas Tempelmann
Apple docs: fun bedtime reading...
wkw
A: 

Apple's "Concurrency Programming Guide" helped me in the end, especially the concept of Operation Queues, with "serialized dispatching":

What I needed was a single serial queue that holds the operations to perform (prepare, release) and executes them in sequence.

I also needed a way to check whether the current operation is a prepare operation. Lastly, I added a few checks to the prepare function so it notices when it has been asked to cancel its work prematurely.

I combined this with a way to wait for the completion of an operation and to clear the current queue, and I now have a well-working solution.
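In case it helps anyone else, the core of it looks roughly like this (simplified; the isCancelled checks inside the prepare code itself are omitted):

// Simplified sketch of the setup: a serial NSOperationQueue, so that the
// queued prepare/release operations run one at a time, in the order requested.
NSOperationQueue *cacheQueue = [[NSOperationQueue alloc] init];
[cacheQueue setMaxConcurrentOperationCount:1];   // "serialized dispatching"

// Queue a cache preparation; keeping a reference lets me later check whether
// the pending operation is a prepare operation, or cancel it.
NSInvocationOperation *prepareOp =
    [[[NSInvocationOperation alloc] initWithTarget:self
                                          selector:@selector(internalPrepareCache)
                                            object:nil] autorelease];
[cacheQueue addOperation:prepareOp];

// Queue a cache release when the search view unloads.
[cacheQueue addOperation:
    [[[NSInvocationOperation alloc] initWithTarget:self
                                          selector:@selector(internalReleaseCache)
                                            object:nil] autorelease]];

// Before a synchronous search on the main thread, make sure any pending
// prepare/release operations have finished:
[cacheQueue waitUntilAllOperationsAreFinished];

// And when the user dismisses the view quickly, pending work can be dropped:
[cacheQueue cancelAllOperations];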

I guess just writing my problem down here was half of the path to a solution already. So, thanks for your patience ;)

Thomas Tempelmann