views:

68

answers:

6

I'm trying to design a simple cache that follows these rules:

  1. Entries have a unique key
  2. When the number of entries in the cache exceeds a certain limit, the older items are evicted (to keep the cache from getting too large).
  3. Each entry's data is immutable until the entry is removed from the cache.
  4. A 'reader' can access an entry in the cache and the entry must be valid for the lifetime of the reader.
  5. Each reader can be on its own thread, and all readers access the same cache instance.

Thread safety with this cache is important, since we don't want readers holding a reference to an entry, only to have it evicted by another thread somewhere else.

Hence, my current implementation just copies out the whole entry when reading from the cache. This is fine for smaller objects, but once objects get large there's too much copying going on. It also doesn't scale well when large numbers of readers are accessing the same cached entry.

Since the data is immutable, it would be great if every reader of the same entry could just hold a reference instead of a copy, but in some thread-safe manner (so the entry wouldn't get evicted out from under a reader).

A previous implementation used reference counting to achieve this, but it was very tricky to get right with threads, so I went with this simpler copying approach.

Are there any other patterns/ideas I could use to improve this design?

A: 

Sounds like a monitor with a std::map as the buffer would be useful in this situation.
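
For illustration, a minimal sketch of that idea in modern C++: a monitor class wrapping a std::map behind a single mutex, handing readers a copy of the value while the lock is held. The key and value types here are placeholders.

```cpp
#include <map>
#include <mutex>
#include <optional>

// Monitor: every operation on the shared map happens under one mutex,
// and readers receive a copy of the value (matching the questioner's
// current copy-out approach).
template <typename Key, typename Value>
class MonitorCache {
public:
    void put(const Key& k, Value v) {
        std::lock_guard<std::mutex> lk(mu_);
        map_[k] = std::move(v);
    }

    std::optional<Value> get(const Key& k) const {
        std::lock_guard<std::mutex> lk(mu_);
        auto it = map_.find(k);
        if (it == map_.end()) return std::nullopt;
        return it->second;  // copy made while the lock is held
    }

private:
    mutable std::mutex mu_;
    std::map<Key, Value> map_;
};
```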

Mark
A: 

I'm thinking that if you want to share a reference, you'll need to keep a count. So long as you use the interlocked inc/dec functions, this should be simple enough even for multiple threads.
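
A minimal sketch of that idea, using std::atomic as a portable stand-in for the Win32 interlocked functions; the entry layout and the convention that the cache itself holds one reference are assumptions for illustration.

```cpp
#include <atomic>

struct CountedEntry {
    std::atomic<int> refs{1};   // assume the cache itself holds one reference
    // ... immutable payload ...

    void add_ref() { refs.fetch_add(1, std::memory_order_relaxed); }

    void release() {
        // acquire/release pairing so the deleting thread sees all prior writes
        if (refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete this;
    }
};
```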

Steven Sudit
A: 

It seems to me that the reference counting solution is only tricky in that the updating and eviction-testing of those reference counters must happen inside a critical section protected by a mutex. As long as only one thread at a time accesses the reference counters, it should be thread safe.
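
A rough sketch of what that critical section could look like; the key type, the global map, and the evict-only-when-unreferenced policy are assumptions for illustration.

```cpp
#include <map>
#include <mutex>

struct Entry { int refs = 0; /* immutable payload */ };

std::mutex g_mu;                  // the single critical section described above
std::map<int, Entry*> g_cache;

// Reader pins an entry: the count is bumped inside the critical section,
// so eviction (which also runs under g_mu) cannot free it concurrently.
Entry* acquire(int key) {
    std::lock_guard<std::mutex> lk(g_mu);
    auto it = g_cache.find(key);
    if (it == g_cache.end()) return nullptr;
    ++it->second->refs;
    return it->second;
}

void release(int key) {
    std::lock_guard<std::mutex> lk(g_mu);
    auto it = g_cache.find(key);
    if (it != g_cache.end()) --it->second->refs;
}

// Eviction tests the counter inside the same critical section and only
// frees entries with no outstanding readers.
void evict_if_unused(int key) {
    std::lock_guard<std::mutex> lk(g_mu);
    auto it = g_cache.find(key);
    if (it != g_cache.end() && it->second->refs == 0) {
        delete it->second;
        g_cache.erase(it);
    }
}
```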

Elemental
+1  A: 

I think you effectively want a reader/writer lock per entry. Readers take the read lock while they are using an entry and release it when done. The eviction thread has to obtain the write lock, which it cannot acquire until all readers have finished. There also needs to be some way for a reader to tell (before acquiring a read lock) whether the entry in question has been evicted concurrently.

On the downside, one lock per entry is expensive for a big cache in terms of memory. You can address that by using one lock across a set of entries - this trades memory off against concurrency. You need to be careful about deadlock scenarios in that case.
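
In modern C++ terms this might look roughly like the following, using std::shared_mutex; the Slot layout, the evicted flag, and the callback-style reader are illustrative assumptions rather than a complete design.

```cpp
#include <shared_mutex>

struct Entry { /* immutable payload */ };

// One reader/writer lock per slot (or per group of slots to save memory).
struct Slot {
    mutable std::shared_mutex rw;
    bool evicted = false;     // lets a reader detect a concurrent eviction
    Entry* data = nullptr;
};

// Reader: hold the shared lock for as long as the entry is in use.
template <typename Fn>
bool with_entry(const Slot& s, Fn&& fn) {
    std::shared_lock<std::shared_mutex> lk(s.rw);
    if (s.evicted) return false;  // evicted concurrently; caller must re-fetch
    fn(*s.data);                  // safe: eviction is blocked while the lock is held
    return true;
}

// Evictor: the exclusive lock cannot be acquired until all readers are done.
void evict(Slot& s) {
    std::unique_lock<std::shared_mutex> lk(s.rw);
    s.evicted = true;
    delete s.data;
    s.data = nullptr;
}
```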

Alex Miller
+1  A: 

In a native system without a higher power (such as a VM) capable of performing garbage collection, you aren't going to do much better, performance- or complexity-wise, than reference counting.

You are correct that reference counting can be tricky - not only do the increment and decrement have to be atomic, but you need to ensure that the object can't be deleted out from under you before you are able to increment its count. Thus, if you store the reference counter inside the object, you'll have to somehow avoid the race that occurs between the time you read the pointer to the object out of the cache and the time you manage to increment the count.

If your structure is a standard container, which is not already thread-safe, you will also have to protect the container from unsupported concurrent access. This protection can dovetail nicely with avoiding the reference counting race described above - if you use a reader-writer lock to protect the structure, and atomically increment the in-object reference count while still holding the reader lock, you'll be protected from anyone deleting the object out from under you before you get the reference count, since such mutators must be "writers".

Here, objects can be evicted from the cache while still having a positive reference count - they will be destroyed when the last outstanding reference is dropped (by your smart pointer class). This is typically considered a feature, since it means that some object can always be removed from the cache, but it also has the downside that there is no strict upper bound on the number of objects "alive" in memory, since the reference counting allows objects to stay alive even after they've left the cache. Whether this is acceptable to you depends on your requirements and details such as how long other threads may hold references to objects.
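
In modern C++ this combination falls out almost directly from std::shared_ptr plus a std::shared_mutex over the container. A rough sketch, with the key and entry types as placeholders:

```cpp
#include <map>
#include <memory>
#include <shared_mutex>

using Key = int;
struct Entry { /* immutable payload */ };

class Cache {
public:
    // Readers take the lock shared; copying the shared_ptr bumps an atomic
    // reference count while the map is still protected, so the entry cannot
    // be destroyed between the lookup and the increment.
    std::shared_ptr<const Entry> get(const Key& k) const {
        std::shared_lock<std::shared_mutex> lk(mu_);
        auto it = map_.find(k);
        return it != map_.end() ? it->second : nullptr;
    }

    void put(const Key& k, std::shared_ptr<const Entry> e) {
        std::unique_lock<std::shared_mutex> lk(mu_);
        map_[k] = std::move(e);
    }

    // Eviction only erases the map entry; the object itself is destroyed when
    // the last reader drops its shared_ptr, so it may outlive the cache slot
    // (the "no strict upper bound" caveat above).
    void evict(const Key& k) {
        std::unique_lock<std::shared_mutex> lk(mu_);
        map_.erase(k);
    }

private:
    mutable std::shared_mutex mu_;
    std::map<Key, std::shared_ptr<const Entry>> map_;
};
```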

If you don't have access to (non-standard) atomic increment routines, you can use a mutex to do the atomic increment/decrement, although this may increase the cost significantly in both time and per-object space.

If you want to get more exotic (and faster) you'll need to design a container which is itself thread-safe, and come up with a more complex reference counting mechanism. For example, you may be able to create a hash table where the primary bucket array is never re-allocated, so it can be accessed without locking. Furthermore, you can use non-portable double-wide CAS (compare and swap) operations on that array to both read a pointer and increment a reference count adjacent to it (128 bits of stuff on a 64-bit arch), allowing you to avoid the race mentioned above.
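
A very rough sketch of the acquire path under those assumptions - a fixed bucket array, a 16-byte slot, and hardware with a lock-free 128-bit compare-and-swap. Everything here is illustrative and non-portable; check is_lock_free() before relying on it.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

struct Entry;  // hypothetical cached object

// One slot of the fixed, never-reallocated bucket array. 16 bytes so a
// lock-free 128-bit CAS can cover both fields at once (cmpxchg16b or similar).
struct alignas(16) Slot {
    Entry*   ptr;   // current entry, or nullptr
    uint64_t refs;  // reference count guarded by the same CAS
};

std::atomic<Slot> bucket[1024];  // primary array, never resized

// Acquire a reference: retry until we atomically bump the count on the same
// pointer we observed, so the entry cannot be freed between reading the
// pointer and incrementing the count.
Entry* acquire(std::size_t i) {
    Slot cur = bucket[i].load();
    while (cur.ptr != nullptr &&
           !bucket[i].compare_exchange_weak(cur, Slot{cur.ptr, cur.refs + 1})) {
        // cur was refreshed on failure; loop and retry
    }
    return cur.ptr;  // nullptr if the slot was empty
}
```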

A completely different track would be to implement some kind of "delayed safe delete" strategy. Here you avoid reference counting entirely. You remove references from your cache, but do not delete objects immediately, since other threads may still hold pointers to them. Then, later, at some "safe" time, you delete the object. Of course, the trick is discovering when such a safe time exists. Basic strategies involve each thread signaling when it "enters" and "leaves" a danger zone during which it may access the cache and hold references to contained objects. Once all threads which were in the danger zone when an object was removed from the cache have left the danger zone, you can free the object, being sure that no more references are held.

How practical this is depends on whether you have logical "enter" and "leave" points in your application (many request-oriented applications will), and whether the "enter" and "leave" costs can be amortized across many cache accesses. The upside is no reference counting! Of course, you still need a thread-safe container.
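
A much-simplified sketch of such an enter/leave ("epoch") scheme. The fixed thread count, the global retire list, and the reclamation policy are all assumptions, and a production version needs much more careful memory ordering than shown here.

```cpp
#include <atomic>
#include <cstdint>
#include <limits>
#include <mutex>
#include <vector>

struct Entry { /* immutable payload */ };

constexpr int kMaxThreads = 64;               // assumed fixed thread count

std::atomic<uint64_t> g_epoch{1};
std::atomic<uint64_t> g_active[kMaxThreads];  // 0 = thread is outside the danger zone

// Readers bracket every cache access with enter()/leave().
void enter(int tid) { g_active[tid].store(g_epoch.load()); }
void leave(int tid) { g_active[tid].store(0); }

// Evicted entries are parked here instead of being deleted immediately.
struct Retired { Entry* e; uint64_t epoch; };
std::vector<Retired> g_retired;
std::mutex g_retired_mu;

// Call only after the entry has already been unlinked from the cache.
void retire(Entry* e) {
    std::lock_guard<std::mutex> lk(g_retired_mu);
    g_retired.push_back({e, g_epoch.fetch_add(1)});
}

// Free entries retired before the oldest epoch any reader is still inside:
// readers that entered later cannot have found the entry in the cache.
void reclaim() {
    uint64_t oldest = std::numeric_limits<uint64_t>::max();
    for (int t = 0; t < kMaxThreads; ++t) {
        uint64_t a = g_active[t].load();
        if (a != 0 && a < oldest) oldest = a;
    }
    std::lock_guard<std::mutex> lk(g_retired_mu);
    for (auto it = g_retired.begin(); it != g_retired.end();) {
        if (it->epoch < oldest) { delete it->e; it = g_retired.erase(it); }
        else ++it;
    }
}
```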

You can find references to many academic papers on the topic and some practical performance considerations by examining the papers linked here.

BeeOnRope
A: 

Have a circular queue, and don't allow multiple threads to write to it or the cache will be useless. Each thread should have its own cache, maybe with read access to the other caches but not write access.
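
A small sketch of a per-thread, single-writer version of this; the linear lookup and fixed capacity are simplifications for illustration. Each thread would own one instance (e.g. as a thread_local variable), so no synchronization is needed.

```cpp
#include <array>
#include <cstddef>
#include <optional>

// Single-owner ring-buffer cache: when the buffer is full, the oldest
// slot is simply overwritten, which doubles as the eviction policy.
template <typename Key, typename Value, std::size_t N>
class RingCache {
public:
    void put(Key k, Value v) {
        slots_[next_] = {std::move(k), std::move(v), true};
        next_ = (next_ + 1) % N;   // the oldest slot is overwritten next time
    }

    std::optional<Value> get(const Key& k) const {
        for (const auto& s : slots_)
            if (s.used && s.key == k) return s.value;
        return std::nullopt;
    }

private:
    struct Slot { Key key{}; Value value{}; bool used = false; };
    std::array<Slot, N> slots_{};
    std::size_t next_ = 0;
};
```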

Charles Eli Cheese