I have a custom Cache implementation that caches TCacheable<TKey> descendants using an LRU (Least Recently Used) cache-replacement policy.
Every time an element is accessed, it is bubbled up to the top of the LRU queue with the following synchronized method:
// a single instance is created to handle all TCacheable<T> elements
public class Cache<T>
{
    // ends of the intrusive doubly-linked LRU queue
    private TCacheable<T> oldest, newest;
    private readonly object syncQueue = new object();

    // moves el to the "newest" end of the queue
    private void topQueue(TCacheable<T> el)
    {
        lock (syncQueue)
            if (newest != el)
            {
                // unlink el from its current position
                if (el.elder != null) el.elder.newer = el.newer;
                if (el.newer != null) el.newer.elder = el.elder;
                if (oldest == el) oldest = el.newer;
                if (oldest == null) oldest = el;

                // relink el as the newest element
                if (newest != null) newest.newer = el;
                el.newer = null;
                el.elder = newest;
                newest = el;
            }
    }
}
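For reference, the part of TCacheable<T> that matters here is just an intrusive doubly-linked list node. A simplified sketch (the real class also carries the cached payload and its key):

// simplified sketch of the relevant part of TCacheable<T>
public class TCacheable<T>
{
    // intrusive queue links used by Cache<T>.topQueue:
    // elder points towards the oldest element, newer towards the newest
    internal TCacheable<T> elder, newer;
}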
The bottleneck in this method is the lock(syncQueue) statement, which limits cache access to one thread at a time.

Question: Is it possible to get rid of lock(syncQueue) in this method while still preserving the integrity of the queue?
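To make the question more concrete: the only lock-free direction I have come up with so far is to stop maintaining the exact queue on the access path and make the LRU order approximate, i.e. stamp each element with an atomic counter on access and only resolve the ordering at eviction time. Roughly like this (Touch and lastAccess are placeholder names, not part of my current code):

using System.Threading;

public class ApproxLruClock
{
    private long clock;   // global access counter

    // called on every cache hit instead of topQueue(el);
    // lastAccess would be a field on the cached element
    public void Touch(ref long lastAccess)
    {
        // two atomic operations, no lock; eviction later picks the element
        // with the smallest lastAccess value
        Interlocked.Exchange(ref lastAccess, Interlocked.Increment(ref clock));
    }
}

But that replaces the data structure rather than removing the lock from it, so I would still like to know whether the linked queue itself can be updated safely without lock(syncQueue).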