Hello,

we are currently designing a multithreaded server application.

To optimize performance, we decided to implement a ReadWriteLock, i.e. multiple threads may hold the lock simultaneously if they only want to read, but just one thread may hold the write lock.

This lock is used with a list and iterating over the list is the "read" operation.

Now this change from a simple mutex did in fact increase performance, but only up to a certain amount of concurrency. With more threads, the ones waiting for the write lock starve: before one iterator releases the read lock, another iterator has often already acquired it.

Any ideas / standard approaches to give more fairness to the threads wanting to change the list, while still keeping the performance benefit?
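(The question doesn't name a language, but since the accepted answer below refers to Java's ConcurrentHashMap, here is a Java sketch of one standard fix: java.util.concurrent's ReentrantReadWriteLock can be constructed in "fair" mode, which queues lock requests in roughly arrival order so a waiting writer is not overtaken forever by new readers. The class and method names below are illustrative, not from the question.)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairListAccess {
    // Passing true requests the "fair" policy: lock requests are granted
    // roughly in arrival order, so a waiting writer is not starved by a
    // continuous stream of newly arriving readers.
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);
    private final List<String> items = new ArrayList<>();

    public int countMatching(String prefix) {
        lock.readLock().lock();      // many readers may hold this at once
        try {
            int n = 0;
            for (String s : items) { // iterating is the "read" operation
                if (s.startsWith(prefix)) n++;
            }
            return n;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void add(String item) {
        lock.writeLock().lock();     // exclusive access for mutation
        try {
            items.add(item);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

Fair mode trades some raw throughput for bounded writer wait times, which is exactly the trade-off the question is asking about.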

+1  A: 

The solution to this is generally MVCC if your problem allows for it. MVCC would allow the readers to continue to read an old version, while the writer could update by creating a new version.
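(To make the MVCC idea concrete in Java: CopyOnWriteArrayList from java.util.concurrent behaves this way for lists. Every mutation copies the backing array, and an iterator keeps reading the version that existed when it was created, so readers and the writer never block each other. This is an illustration of the principle, not code from the question.)

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotReads {
    public static void main(String[] args) {
        // Each add() creates a new version of the backing array;
        // the for-each iterator stays on its original snapshot.
        List<Integer> list = new CopyOnWriteArrayList<>(new Integer[] {1, 2, 3});

        int sum = 0;
        for (int x : list) {   // iterates over the snapshot {1, 2, 3}
            sum += x;
            list.add(99);      // the writer never disturbs this loop
        }
        System.out.println(sum);         // 6: the iterator never saw the 99s
        System.out.println(list.size()); // 6: three originals + three appends
    }
}
```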

The problem you describe of having to wait for all the readers to exit before the writer can lock is common. If you can't version your data, then consider reducing the granularity of your locking. An example of this is the ConcurrentHashMap in Java. Internally, it defaults to creating 16 locks, allowing parts of the map to be locked without locking the entire object.
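(A minimal sketch of that lock-striping idea, assuming a hash-partitionable structure; the class name and the 16-stripe count mirror the ConcurrentHashMap default mentioned above, everything else is hypothetical:)

```java
import java.util.ArrayList;
import java.util.List;

public class StripedStringSet {
    private static final int STRIPES = 16;  // same default as ConcurrentHashMap
    private final Object[] locks = new Object[STRIPES];
    private final List<List<String>> buckets = new ArrayList<>();

    public StripedStringSet() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
            buckets.add(new ArrayList<>());
        }
    }

    // Hash the key to one stripe; operations on different stripes
    // never contend with each other.
    private int stripeFor(String key) {
        return (key.hashCode() & 0x7fffffff) % STRIPES;
    }

    public void add(String key) {
        int s = stripeFor(key);
        synchronized (locks[s]) {  // only 1/16th of the structure is locked
            buckets.get(s).add(key);
        }
    }

    public boolean contains(String key) {
        int s = stripeFor(key);
        synchronized (locks[s]) {
            return buckets.get(s).contains(key);
        }
    }
}
```

As the comment below notes, this works best when the data has a natural hash space; an ordered list that must be iterated as a whole doesn't partition this way.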

One last idea is to try to remove the ReadWriteLock altogether, and use a lock-free algorithm. This can be the most challenging, and may reduce multi-core concurrency.

brianegge
Hi, we thought about something like MVCC but couldn't name it. Thank you. The hash-map strategy seems easier if you have a hash space, which doesn't fit the ordered list we use now. Optimistic lock-free algorithms are an option, I agree, but as you said, we wanted to avoid the challenging implementation.
Tarnschaf
+1  A: 

Reading the OP, I get the impression there's no queuing of read and write lock requests.

The normal approach is to queue lock requests and service them in-order.

So imagine a write lock is held and a bunch of lock requests (both read and write) come in.

You'll then have a queue of requests something like this;

RRRRRWRRRWRWRWWRRRRR

You can grant read requests up to the first write. You wait until those reads finish, then let the one writer in. When it's done, you can again grant reads up to the next write. When a write finishes and the next request is also a write, you can of course only grant that single write.
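(The batching policy described above can be sketched as a small monitor in Java. This is a hypothetical illustration, not the poster's code: once a writer is waiting, newly arriving readers block, so the lock drains the current readers, services the writer, and then admits the next batch of readers.)

```java
public class BatchingRWLock {
    private int activeReaders = 0;
    private int waitingWriters = 0;
    private boolean writerActive = false;

    public synchronized void lockRead() throws InterruptedException {
        // New readers wait while a writer is active OR waiting, so a
        // queued writer is not starved by a stream of later readers.
        while (writerActive || waitingWriters > 0) wait();
        activeReaders++;
    }

    public synchronized void unlockRead() {
        if (--activeReaders == 0) notifyAll(); // last reader lets a writer in
    }

    public synchronized void lockWrite() throws InterruptedException {
        waitingWriters++;
        try {
            while (writerActive || activeReaders > 0) wait();
        } finally {
            waitingWriters--;
        }
        writerActive = true;
    }

    public synchronized void unlockWrite() {
        writerActive = false;
        notifyAll(); // wake both the queued readers and any queued writer
    }
}
```

With notifyAll(), which of the woken threads wins is up to the scheduler, so this gives writer preference rather than the strict FIFO ordering of the RRRRRW example, but it eliminates the starvation the OP describes.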

I have to say, this is fundamental stuff - I learned it in the second year of my degree. I'm concerned that you've implemented your current solution the way you have. You need to read a decent basic text on locking.

Blank Xavier
Thank you, we actually were using a queue, but without the additional logic you describe that holds back new readers until the write can start. I also proposed a solution like yours, but we are happy with our current solution: each write is done in a sort of "backbuffer", each read gets the "stable" version of the data, and as soon as a write finishes, a (new) reader gets the new version.
Tarnschaf
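(The "backbuffer" scheme described in the comment above can be sketched in Java with an AtomicReference: the writer prepares the next version in a private copy and publishes it with one atomic swap, while readers take the current stable version without any locking. The class and method names are illustrative, not from the discussion.)

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class BackBufferList {
    // Readers always see a complete, immutable "stable" version.
    private final AtomicReference<List<String>> stable =
            new AtomicReference<>(Collections.emptyList());

    public List<String> snapshot() {               // read: no locking at all
        return stable.get();
    }

    public synchronized void append(String item) { // one writer at a time
        List<String> back = new ArrayList<>(stable.get()); // the "backbuffer"
        back.add(item);
        stable.set(Collections.unmodifiableList(back));    // publish new version
    }
}
```

A reader holding an old snapshot simply keeps iterating its version; only readers that call snapshot() after the swap see the update, which matches the behaviour the comment describes.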
Why were you using a non-ordered queue? If you have a single lock, like a mutex, the OS will automatically queue all the outstanding requests.
Blank Xavier