I came across this in Mike Ash's "Care and feeding of singletons" and was a little puzzled by his comment:

This code is kind of slow, though. Taking a lock is somewhat expensive. Making it more painful is the fact that the vast majority of the time, the lock is pointless. The lock is only needed when foo is nil, which basically only happens once. After the singleton is initialized, the need for the lock is gone, but the lock itself remains.

+(id)sharedFoo {
    static Foo *foo = nil;
    @synchronized([Foo class]) {
        if(!foo) foo = [[self alloc] init];
    }
    return foo;
}

My question is (and no doubt there is a good reason for this): why can't you write the following to limit the lock to the case where foo is nil?

+(id)sharedFoo {
    static Foo *foo = nil;
    if(!foo) {
        @synchronized([Foo class]) {
            foo = [[self alloc] init];
        }
    }
    return foo;
}

cheers gary

+7  A: 

Because then the test is subject to a race condition. Two different threads might independently test that foo is nil, and then (sequentially) create separate instances. This can happen in your modified version when one thread performs the test while the other is still inside +[Foo alloc] or -[Foo init], but has not yet set foo.

By the way, I wouldn't do it that way at all. Check out the dispatch_once() function, which lets you guarantee that a block is only ever executed once during your app's lifetime (assuming you have GCD on the platform you're targeting).
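
For reference, a minimal dispatch_once-based accessor (a sketch, assuming GCD is available on your target) looks roughly like this:

+ (id)sharedFoo {
    static Foo *foo = nil;
    static dispatch_once_t onceToken;
    // The block runs at most once for the lifetime of the app; dispatch_once
    // also takes care of the memory-ordering issues discussed further down.
    dispatch_once(&onceToken, ^{
        foo = [[self alloc] init];
    });
    return foo;
}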

Graham Lee
That's of course true. But wouldn't the best solution be to test twice (inside **and** outside the `@synchronized` block)? Then there would be no race condition and no performance penalty.
Nikolai Ruhe
@Nikolai: tell me there's a performance penalty _after_ you've run Shark. :-)
Graham Lee
@Graham: There’s no doubt that performance is bad in the original version that always takes the lock. I had it in my code *and I did run Shark* ;). Also, Mike Ash pointed it out in his original blog post.
Nikolai Ruhe
Hi Graham, at the moment I am developing for the iPhone, so I really just wanted to understand what was happening.
fuzzygoat
@Nikolai: Mike doesn't always run Shark. But if performance is bad for you, how frequently are you accessing this singleton? Must be thousands of times a second or more to get any noticeable cost.
Graham Lee
@Graham: In my case I searched a database of 800,000 records. In the hot loop, I accessed the database through the singleton accessor. Searching was twice as fast after moving the `[MyDB sharedDB]` call out of the loop. I came to the conclusion that synchronization is really slow (at least on the ARM platform I was using) and looked for alternatives. Mike Ash (and @mfazekas below) explained why double-checked locking is not a good idea.
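Roughly, the change looked like this (the loop and `searchRecord:` are just placeholders for illustration; only `[MyDB sharedDB]` comes from my actual code):

// Before: the accessor (and its lock) is taken on every iteration.
for (Record *record in records) {
    [[MyDB sharedDB] searchRecord:record];
}

// After: hoist the singleton lookup out of the hot loop.
MyDB *db = [MyDB sharedDB];
for (Record *record in records) {
    [db searchRecord:record];
}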
Nikolai Ruhe
+1  A: 

In your version the check for !foo could occur on multiple threads at the same time, allowing two threads to enter the alloc block: one waits for the other to finish and then allocates another instance.

jessecurry
+1  A: 

You can optimize by only taking the lock if foo==nil, but after that you need to test again (within the @synchronized) to guard against race conditions.

+ (id)sharedFoo {
    static Foo *foo = nil;
    if(!foo) {
        @synchronized([Foo class]) {
            if (!foo)  // test again, in case 2 threads doing this at once
                foo = [[self alloc] init];
        }
    }
    return foo;
}
David Gelhar
See @mfazekas answer for why this is wrong.
Nikolai Ruhe
+1  A: 

Graham makes an excellent point; however, I think if you rewrite the code slightly, you can avoid the race:

+(id)sharedFoo {
    static Foo *foo;
    if (foo)
        return foo;

    @synchronized([Foo class]) {
        if (!foo)
            foo = [[self alloc] init];
    }
    return foo;
}
jlehr
This code has exactly the same problems as the OP's version.
Nikolai Ruhe
Right, there'd need to be another if statement inside the @synchronized block.
jlehr
This (edited) version above is effectively the same as the double-checked locking pattern in Java, which has always been broken because of Java's memory model. Does Objective-C have the same issues?
Mark Smith
Objective-C guarantees that the entire synchronized block is atomic. "The @synchronized() directive locks a section of code for use by a single thread. Other threads are blocked until the thread exits the protected code; that is, when execution continues past the last statement in the @synchronized() block." [The Objective-C Programming Language reference.] http://developer.apple.com/mac/library/documentation/Cocoa/Conceptual/ObjectiveC/Articles/ocThreading.html
jlehr
No, you're interpreting it wrong. This "atomicity" applies only to other threads using @synchronized with the same argument.
mfazekas
+2  A: 

This is called the double-checked locking "optimization". As documented everywhere, it is not safe. Even if it's not defeated by a compiler optimization, it will be defeated by the way memory works on modern machines, unless you use some kind of fence/barrier.

Mike Ash also shows the correct solution, using volatile and OSMemoryBarrier().

The issue is that when one thread executes foo = [[self alloc] init]; there is no guarantee that, when another thread sees foo != nil, all of the memory writes performed by init are visible too.
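
A sketch of that barrier-based approach (an illustration of the idea, not Mike Ash's exact code) looks something like this:

#import <libkern/OSAtomic.h>

+ (id)sharedFoo {
    static Foo * volatile foo = nil;
    if (!foo) {
        @synchronized([Foo class]) {
            if (!foo) {
                Foo *temp = [[self alloc] init];
                // Write barrier: publish init's writes before foo becomes non-nil.
                OSMemoryBarrier();
                foo = temp;
            }
        }
    }
    // Read barrier: order the read of foo before later reads of the object it points to.
    OSMemoryBarrier();
    return foo;
}

Getting the barrier placement right is subtle, which is another reason to prefer dispatch_once() (or simply taking the lock every time) over a hand-rolled double-checked lock.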

Also see DCL and C++ and DCL and Java for more details.

mfazekas
+1 Thanks for making this clear. Instruction reordering and out-of-order memory access are both concepts that most programmers are not aware of.
Nikolai Ruhe