I have a custom Cache implementation, which caches TCacheable<TKey> descendants using the LRU (Least Recently Used) cache replacement algorithm.

Every time an element is accessed, it is bubbled up to the top of the LRU queue using the following synchronized function:

// a single instance is created to handle all TCacheable<T> elements
public class Cache<T>
{
    private TCacheable<T> oldest, newest;

    private object syncQueue = new object();
    private void topQueue(TCacheable<T> el)
    {
        lock (syncQueue)
        if (newest != el)
        {
            if (el.elder != null) el.elder.newer = el.newer;
            if (el.newer != null) el.newer.elder = el.elder;

            if (oldest == el) oldest = el.newer;
            if (oldest == null) oldest = el;

            if (newest != null) newest.newer = el;
            el.newer = null;
            el.elder = newest;
            newest = el;
        }
    }
}

The bottleneck in this function is the lock statement, which limits cache access to just one thread at a time.

Question: Is it possible to get rid of lock(syncQueue) in this function while still preserving the queue integrity?

A: 

The easiest first step would be to check if (newest != el) before taking the lock, since you don't need to wait on the lock at all if newest == el. (You'd still need to re-check the condition inside the lock, because the unsynchronized read can be stale.)
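As a rough sketch of that fast path (Java used here for illustration; Node and MruList are hypothetical stand-ins for TCacheable<T> and Cache, and the volatile/counter details are assumptions, not part of the original code):

```java
// Double-checked fast path: skip the lock when the element is already newest.
import java.util.concurrent.atomic.AtomicInteger;

class Node {
    Node elder, newer;
    final String key;
    Node(String key) { this.key = key; }
}

class MruList {
    private volatile Node newest;   // volatile so the unlocked read sees recent writes
    private Node oldest;
    private final Object syncQueue = new Object();
    final AtomicInteger lockAcquisitions = new AtomicInteger();

    void topQueue(Node el) {
        // Fast path: no lock needed when el is already at the top.
        // This read can be stale, so the same condition is re-checked
        // under the lock before the list is mutated.
        if (newest == el) return;
        synchronized (syncQueue) {
            lockAcquisitions.incrementAndGet();
            if (newest == el) return;   // another thread may have moved it meanwhile
            if (el.elder != null) el.elder.newer = el.newer;  // unlink el
            if (el.newer != null) el.newer.elder = el.elder;
            if (oldest == el) oldest = el.newer;
            if (oldest == null) oldest = el;
            if (newest != null) newest.newer = el;            // relink at the top
            el.newer = null;
            el.elder = newest;
            newest = el;
        }
    }

    Node newest() { return newest; }
    Node oldest() { return oldest; }
}
```

With this change, repeated accesses to the hottest element (a common pattern in LRU workloads) never touch the lock at all.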

Aside from that, you're making some conditional assignments. Do you really have profiling information that leads you to believe this section is problematic in terms of performance?

If you're really really sure that optimizing this class is the best use of your time, here's a PDF of the paper "Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms". Here's an excellent blog post that might be a little bit more straightforward.

anthony
+1  A: 

The trick to a concurrent LRU cache is not to remove the lock - that's a rabbit hole - but to amortize the need to acquire it. The queue can be eventually consistent with the table during reads, but it should be consistent after a write so that a victim can be properly chosen for eviction. Thus, you can buffer reorderings and avoid lock contention. I proved the validity of this approach in my Java implementation: http://code.google.com/p/concurrentlinkedhashmap/
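To make the amortization idea concrete, here is a minimal sketch (this is not Ben's actual implementation; the class name, drain threshold, and drain policy are illustrative assumptions). Reads append to a lock-free buffer instead of taking the lock; the buffered reorderings are only replayed onto the LRU order, under the lock, when the buffer fills or before a write must pick an eviction victim:

```java
// Sketch: amortize lock acquisition by buffering read reorderings.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

class AmortizedLruCache<K, V> {
    private static final int DRAIN_THRESHOLD = 16; // arbitrary choice

    private final int capacity;
    private final Map<K, V> data = new ConcurrentHashMap<>();
    // accessOrder=true: LinkedHashMap reorders entries on access (LRU order)
    private final LinkedHashMap<K, Boolean> lruOrder =
        new LinkedHashMap<>(16, 0.75f, true);
    private final Queue<K> readBuffer = new ConcurrentLinkedQueue<>();
    private final Object lock = new Object();

    AmortizedLruCache(int capacity) { this.capacity = capacity; }

    V get(K key) {
        V value = data.get(key);
        if (value != null) {
            readBuffer.add(key);                     // lock-free: record the access
            if (readBuffer.size() >= DRAIN_THRESHOLD) {
                synchronized (lock) { drain(); }     // pay for many reads at once
            }
        }
        return value;
    }

    void put(K key, V value) {
        synchronized (lock) {
            drain();              // restore consistency before choosing a victim
            data.put(key, value);
            lruOrder.put(key, Boolean.TRUE);
            while (lruOrder.size() > capacity) {
                K victim = lruOrder.keySet().iterator().next(); // least recently used
                lruOrder.remove(victim);
                data.remove(victim);
            }
        }
    }

    private void drain() {        // caller must hold the lock
        for (K key; (key = readBuffer.poll()) != null; ) {
            if (lruOrder.containsKey(key)) lruOrder.get(key); // touch to reorder
        }
    }
}
```

Note that between drains the LRU order lags behind the true access order, which is exactly the "eventually consistent during reads" trade-off described above.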

So while I can offer solutions to answer "Yes" to your question, a better answer is that you don't need to remove the lock, but rather understand when it's actually needed.

Ben Manes
this is a great answer.
anthony