In a web service that I am working on, a user's data needs to be updated in the background - for example pulling down and storing their tweets. As there may be multiple servers performing these updates, I want to ensure that only one can update any single user's data at one time. Therefore, (I believe) I need a method of doing an atomic read (is the user already being updated) and write (no? Then I am going to start updating). What I need to avoid is this:

  1. Server 1 sends request to see if user is being updated.
  2. Server 2 sends request to see if user is being updated.
  3. Server 1 receives response back saying the user is not being updated.
  4. Server 2 receives response back saying the user is not being updated.
  5. Server 1 starts downloading tweets.
  6. Server 2 starts downloading the same set of tweets.
  7. Madness!!!

Steps 1 and 3 need to be combined into an atomic read+write operation so that Step 2 would have to wait until Step 3 had completed before a response was given. Is there a simple mechanism for effectively providing a "lock" around access to something across multiple servers, similar to the synchronized keyword in Java (but obviously distributed across all servers)?
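One common way to get that atomic read+write is to lean on a shared database's uniqueness guarantees: an `INSERT` into a table with a primary key on the user ID either succeeds (you hold the lock) or fails (someone else does), in a single atomic step. The sketch below illustrates the idea with Python's built-in `sqlite3` as a stand-in for a real shared database; the table and function names are made up for illustration, and in production you would point all servers at one shared database (or an equivalent like Redis `SET ... NX`) rather than a local SQLite file.

```python
import sqlite3

def acquire_lock(conn, user_id):
    """Atomically claim a user for updating.

    The INSERT either succeeds (we now hold the lock) or raises
    IntegrityError because the row already exists (another server
    holds it). There is no separate read-then-write window.
    """
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO update_locks (user_id) VALUES (?)", (user_id,)
            )
        return True
    except sqlite3.IntegrityError:
        return False

def release_lock(conn, user_id):
    """Release the claim once the update is finished."""
    with conn:
        conn.execute("DELETE FROM update_locks WHERE user_id = ?", (user_id,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE update_locks (user_id TEXT PRIMARY KEY)")

print(acquire_lock(conn, "alice"))  # True  - first claim wins
print(acquire_lock(conn, "alice"))  # False - user is already being updated
release_lock(conn, "alice")
print(acquire_lock(conn, "alice"))  # True  - free to claim again
```

In a real deployment you would also want some expiry or heartbeat on the lock row, so a crashed server does not leave a user locked forever.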

A: 

Take a look at Dekker's algorithm; it might give you an idea.

http://en.wikipedia.org/wiki/Dekker%27s_algorithm
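Worth noting: Dekker's algorithm achieves mutual exclusion between exactly two processes that share memory, so it illustrates the idea rather than solving the multi-server case directly. A minimal two-thread sketch in Python (thread-based, as a stand-in for the two "servers"):

```python
import threading

# Dekker's algorithm: mutual exclusion for exactly two processes
# sharing memory. Here the two "processes" are threads.
wants_to_enter = [False, False]
turn = 0
counter = 0  # the shared resource we must not update concurrently

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(1000):
        wants_to_enter[me] = True
        while wants_to_enter[other]:
            if turn == other:
                wants_to_enter[me] = False  # back off and wait our turn
                while turn == other:
                    pass
                wants_to_enter[me] = True
        counter += 1      # critical section: only one thread at a time
        turn = other      # hand priority to the other thread
        wants_to_enter[me] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000 - no increments were lost
```

Across separate servers there is no shared memory for the flags to live in, which is why in practice people reach for a shared store (a database, Redis, ZooKeeper) to play that role instead.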

Nick