If I am using ReaderWriterLockSlim to acquire read/write locks, do I need to make my variables volatile or use Interlocked.Increment?

For example, would the code in the Add method below work fine, or does it need enhancement?

public class AppendableList<T> { // semi-immutable; supports appending only
    private T[] data = new T[16];
    private ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    public int Count { get; private set; }
    public T this[int index] {
        get {
            rwLock.EnterReadLock();
            try { return data[index]; } finally { rwLock.ExitReadLock(); }
        }
    }
    public void Add(T item) {
        rwLock.EnterUpgradeableReadLock();
        try {
            if (Count == data.Length)
                reAllocateArray(); // upgrades to write lock
            data[Count++] = item; // do I need to use Interlocked here?
        } finally { rwLock.ExitUpgradeableReadLock(); }
    }
}

EDIT: I'm trying to write a light-weight, fast and simple list that allows multiple threads to access its data concurrently (sort of producer-consumer buffer). I have edited the code above removing the simplifications I used before, so the issue should be clearer now. It seems to me that this code is thread-safe, but I am not sure whether all threads will see the updated value of Count right after exiting the upgradeable lock.

EDIT 2: The "Write" lock here is used to indicate writing to the array reference, not the array elements. I'm assuming this is sufficient (since the data itself is immutable). I guess that I need to use Interlocked when incrementing Count. Is that true?

+3  A: 

I would fully expect the write-lock to function as a memory barrier (within the write-lock, in particular), but I can't prove it off-hand.

Whether you need the complexity of ReaderWriterLockSlim depends on the context; Interlocked, volatile, lock or [MethodImpl] might each do the job far more simply. You mainly need ReaderWriterLock[Slim] if you have lots of readers and few writers.
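For instance, a minimal sketch of the plain-lock (Monitor) alternative; the class and member names here are illustrative, not your exact code:

```csharp
using System;
using System.Collections.Generic;

// A sketch of the simpler option: one lock (Monitor) guarding both
// reads and writes. No volatile or Interlocked needed, because every
// access to the shared state happens inside the same lock.
public class LockedList<T> {
    private readonly List<T> data = new List<T>();
    private readonly object gate = new object();

    public int Count {
        get { lock (gate) { return data.Count; } }
    }

    public T this[int index] {
        get { lock (gate) { return data[index]; } }
    }

    public void Add(T item) {
        lock (gate) { data.Add(item); }
    }
}
```

This trades some read concurrency for simplicity; with few writers and many readers, ReaderWriterLockSlim may still be worth the extra complexity.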

However, the get is not currently protected by the lock; you'll need to use an explicit property implementation and take out a read lock yourself if you ever need multiple operations to be spanned by the write-lock (without readers seeing intermediate values).

As an aside, the Count++ usage would probably be more familiar to people.

You should also use try/finally to ensure you release the lock on exception.

To avoid the issue of taking write then read locks, perhaps:

    private ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();

    private int count;
    public int Count {
        get {
            rwLock.EnterReadLock();
            try { return count; } finally { rwLock.ExitReadLock(); }
        }
    }
    public void Add(object x) {
        rwLock.EnterWriteLock();
        try {
            // do some processing
            count++;
        } finally {
            rwLock.ExitWriteLock();
        }
    }


Updated re your edit;

That looks pretty solid. A List<T> would be my recommendation (over a T[] array), since it will do all the doubling etc internally, saving you lots of code. Since only one thread can update Count at a time, there is no need for Interlocked, and this property saves the need for a lock when reading Count, as long as you're fine for callers to get the old Count while another thread is adding rows (rather than being blocked).

Marc Gravell
You might also consider marking count volatile (as opposed to a full read lock), depending on what you're actually using it for. If you need to be 100% sure the write is 100% finished before seeing the new value, use a read lock. If it doesn't matter, volatile should provide adequate protection.
Eric Rosenberger
On the other hand, if your property is something more complex than, for instance, an int, you will need a read lock; volatile won't prevent you from seeing partially-updated complex structures.
Eric Rosenberger
Note: for a collection like you've implied here, a read lock is probably the way to go; but you should evaluate case-by-case how much protection you really need.
Eric Rosenberger
Thanks Marc and Eric. I edited my question to clarify it. Previously I had simplified the example to be concise, thus removing try/finally and irrelevant code. I've rewritten the example to clarify my question.
Hosam Aly
I am assuming that the code is thread-safe, because elements are immutable, and the count is only incremented when more data is available.
Hosam Aly
But now I'm not sure! If the JIT changes the order of instructions, then a thread might get wrong data (e.g. null) if it sees the updated Count before the assignment to the array element takes place. In this case even volatile wouldn't save me. So should I use Interlocked instead?
Hosam Aly
@Hosam - I'll update re your edit
Marc Gravell
Thanks. So one final question: Is there a way to make sure that other threads will always see the updated Count after exiting the lock?
Hosam Aly
I think that should already happen. If not, well, `volatile` or including the `get` in the read-lock should do it.
Marc Gravell
Actually, no, there still may be a race in your edited code; if two threads are adding, they can read Count over each other. I would use an exclusive lock for Add (to keep it simple but provably right), probably just via Monitor (lock).
Marc Gravell
But how? Since only one thread is allowed to EnterUpgradeableReadLock at any given time, this case should never happen, right?
Hosam Aly
@Hosam: I believe you are right, you shouldn't have any issues with Count updates stepping on each other. However, you could still end up with the updated Count being read via get before the new element is put in the array.
Eric Rosenberger
...If that's an issue, I'd use data[Count] = item; Count++; and make Count volatile, which should prevent the first line from getting reordered to after the second line, and also ensure that the new value of Count is immediately visible to other threads.
Eric Rosenberger
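Eric's suggested ordering, sketched as code; the field names are illustrative, the array is fixed-size for brevity, and this assumes a single writer at a time (e.g. inside the upgradeable lock):

```csharp
// Sketch of the write-then-publish ordering: store the element first,
// then increment the volatile count, so a reader that observes the new
// count is guaranteed to see the element already written to its slot.
public class PublishByCount<T> {
    private readonly T[] data = new T[1024]; // no growth, to keep the sketch short
    private volatile int count;              // volatile publish/observe of Count

    public int Count { get { return count; } }

    public T this[int index] { get { return data[index]; } }

    // Assumed to be called by one writer at a time.
    public void Add(T item) {
        int i = count;
        data[i] = item; // write the slot first...
        count = i + 1;  // ...then publish it (the volatile write acts as a release)
    }
}
```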
@Eric: Even if a thread reads the updated Count before the new element is assigned, the read lock in the indexer will not be granted until the upgradeable lock exits, so I believe this shouldn't be a problem.
Hosam Aly
... Using volatile also makes much sense, but if ExitUpgradeableReadLock() causes a memory barrier then I guess I don't need it, and that's my last question: does it cause a memory barrier?
Hosam Aly
Valid point on the indexer. As for the memory barrier... I don't know for sure. However, marking Count volatile would also ensure that other threads always reread the value and not potentially miss an update -- if that is a concern -- which I don't think a memory barrier on the write would address.
Eric Rosenberger
+1  A: 

Jeffrey Richter's article provides some detailed advice on ReaderWriterLock.

Mitch Wheat
Thanks. I'll check it out.
Hosam Aly
This one was more informative regarding ReaderWriterLockSlim: http://www.bluebytesoftware.com/blog/2007/02/07/IntroducingTheNewReaderWriterLockSlimInOrcas.aspx
Hosam Aly
A: 

Yes, it does. For a very in-depth overview of the various memory-barrier cases, check that document out; you can also find the non-fenced locks there if you want.

And please, DO NOT USE VOLATILE, IT IS LESS AND LESS EFFECTIVE THESE DAYS!!

All of the standard Windows locking mechanisms (spin locks, mutexes, kernel events, and resources managed by the executive resource package) protect against processor reordering by inserting memory barriers where required in executable code.

A memory barrier is a processor instruction that preserves the ordering of read and/or write operations from the perspective of any other processor. Memory barriers include processor instructions with acquire, release, and fence semantics. These semantics describe the order in which results of an operation become visible.

  • Acquire semantics mean that the results of the operation are visible before the results of any operation that appears after it in code.
  • Release semantics mean that the results of the operation are visible after the results of any operation that appears before it in code.
  • Fence semantics combine acquire and release semantics. The results of an operation with fence semantics are visible before those of any operation that appears after it in code and after those of any operation that appears before it.
RandomNickName42
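As a small illustration of these semantics in C#, using APIs that have long existed in the framework: Thread.MemoryBarrier() emits a full fence, and Interlocked operations also act as full fences. The class and member names below are illustrative only:

```csharp
using System.Threading;

// Sketch of publishing a value across threads with explicit fences.
// The producer writes the payload, fences, then sets a flag; the
// consumer reads the flag (Interlocked reads are full fences), fences,
// then reads the payload, so it cannot observe the flag without also
// observing the payload.
public class Publisher {
    private int payload;
    private int ready; // 0 = not ready, 1 = ready

    public void Produce(int value) {
        payload = value;                    // write the data...
        Thread.MemoryBarrier();             // ...fence: data visible before the flag
        Interlocked.Exchange(ref ready, 1); // publish (itself a full fence)
    }

    public bool TryConsume(out int value) {
        // CompareExchange(x, 0, 0) is just an atomic, fenced read of x.
        if (Interlocked.CompareExchange(ref ready, 0, 0) == 1) {
            Thread.MemoryBarrier();
            value = payload; // guaranteed to see the produced value
            return true;
        }
        value = 0;
        return false;
    }
}
```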