views:

77

answers:

6

Given a table that is acting as a queue, how can I best configure the table/queries so that multiple clients process from the queue concurrently?

For example, the table below indicates a command that a worker must process. When the worker is done, it will set the processed value to true.

| ID | COMMAND | PROCESSED |
|----|---------|-----------|
|  1 | ...     | true      |
|  2 | ...     | false     |
|  3 | ...     | false     |

The clients might obtain one command to work on like so:

select top 1 COMMAND
from EXAMPLE_TABLE with (UPDLOCK, ROWLOCK)
where PROCESSED = 0;

However, if there are multiple workers, each tries to get the row with ID=2. Only the first will get the pessimistic lock, the rest will wait. Then one of them will get row 3, etc.

What query/configuration would allow each worker client to get a different row each and work on them concurrently?

EDIT:

Several answers suggest variations on using the table itself to record an in-process state. I thought that this would not be possible within a single transaction. (i.e., what's the point of updating the state if no other worker will see it until the txn is committed?) Perhaps the suggestion is:

# start transaction
update to 'processing'
# end transaction
# start transaction
process the command
update to 'processed'
# end transaction

Is this the way people usually approach this problem? It seems to me that the problem would be better handled by the DB, if possible.
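For concreteness, here is a minimal T-SQL sketch of the two-transaction pattern described in the edit. It assumes PROCESSED is widened from a bit to an int status (0 = unprocessed, 1 = processing, 2 = processed) and that COMMAND is an NVARCHAR column; the table name comes from the question:

```sql
-- Assumed statuses: 0 = unprocessed, 1 = processing, 2 = processed
DECLARE @claimed TABLE (ID INT, COMMAND NVARCHAR(100));

-- Txn 1: claim a command and commit immediately,
-- so other workers see the 'processing' state
BEGIN TRANSACTION;
UPDATE TOP (1) EXAMPLE_TABLE
SET    PROCESSED = 1
OUTPUT INSERTED.ID, INSERTED.COMMAND INTO @claimed
WHERE  PROCESSED = 0;
COMMIT;

-- ...process the command outside any transaction...

-- Txn 2: mark it processed
BEGIN TRANSACTION;
UPDATE e
SET    e.PROCESSED = 2
FROM   EXAMPLE_TABLE e
JOIN   @claimed c ON c.ID = e.ID;
COMMIT;
```

The claim itself is a single UPDATE, so the first transaction holds its lock only for an instant rather than for the duration of the work.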

A: 

Rather than using a boolean value for Processed you could use an int to define the state of the command:

1 = not processed
2 = in progress
3 = complete

Each worker would then get the next row with Processed = 1, update Processed to 2, and begin work. When the work is complete, Processed is updated to 3. This approach also allows for other outcomes to be added later; for example, rather than just recording that a command is complete, you could add statuses for 'Completed Successfully' and 'Completed with Errors'.
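A sketch of the claim step under this scheme. Doing the test and the state change in one UPDATE keeps it atomic; the table and column names are assumed from the question, and variable assignment in the SET clause is used to capture the claimed row:

```sql
DECLARE @id INT, @command NVARCHAR(100);

-- Claim one unprocessed row and capture it in the same statement
UPDATE TOP (1) EXAMPLE_TABLE
SET    PROCESSED = 2,          -- 2 = in progress
       @id = ID,
       @command = COMMAND
WHERE  PROCESSED = 1;          -- 1 = not processed

-- ...work on @command here, then:
UPDATE EXAMPLE_TABLE
SET    PROCESSED = 3           -- 3 = complete
WHERE  ID = @id;
```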

Macros
Thank you. Please see my edit. Am I mistaken about your suggestion?
Synesso
You are correct, you will need separate transactions so that the other workers can see the update, which should be the default behaviour anyway - why would you keep a transaction open while the worker processes the command? I can see the parallel in that a worker's job is essentially a transaction, but this is almost certainly better coded yourself than with SQL Server transactions.
Macros
A: 

Probably the better option is to use a tri-state Processed column along with a version/timestamp column. The three values in the Processed column then indicate whether the row is unprocessed, under processing, or processed.

For example

    CREATE TABLE Queue (
        ID INT NOT NULL PRIMARY KEY,
        Command NVARCHAR(100),
        Processed INT NOT NULL CHECK (Processed IN (0, 1, 2)),
        Version TIMESTAMP)

You grab the top 1 unprocessed row, set its status to under-processing, and set the status to processed when the work is done. Base your status update on the Version and primary key columns. If the update affects no rows, someone has already been there.

You might want to add a client identifier as well, so that if a client dies while processing, it can restart, look at its last row, and resume from where it was.
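An optimistic-concurrency sketch of that update, using the Queue table defined above. It assumes @id and @version are read in a preceding SELECT; the timestamp comparison makes the claim fail if another worker touched the row in between:

```sql
DECLARE @id INT, @version BINARY(8);

-- Read a candidate row and remember its version
SELECT TOP (1) @id = ID, @version = Version
FROM   Queue
WHERE  Processed = 0;

-- Claim it only if nobody else has modified it since we read it
UPDATE Queue
SET    Processed = 1               -- 1 = under processing
WHERE  ID = @id
  AND  Version = @version;

IF @@ROWCOUNT = 0
    PRINT 'Another worker claimed this row first; try again.';
```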

no_one
Thanks. Please see my edit. Also, for continuous availability, I would want any available client to resume a failed job - not just the one that failed.
Synesso
A: 

I would stay away from messing with locks in a table. Just create two extra columns like IsProcessing (bit/boolean) and ProcessingStarted (datetime). When a worker crashes or doesn't update his row after a timeout you can have another worker try to process the data.
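A sketch of claiming a row under this scheme, including reclaiming rows whose worker appears to have died. The table name is assumed from the question, and the 10-minute timeout is an arbitrary choice:

```sql
-- Claim one unprocessed row, or re-claim one whose worker timed out
UPDATE TOP (1) EXAMPLE_TABLE
SET    IsProcessing = 1,
       ProcessingStarted = GETUTCDATE()
OUTPUT INSERTED.ID, INSERTED.COMMAND
WHERE  Processed = 0
  AND (IsProcessing = 0
       OR ProcessingStarted < DATEADD(MINUTE, -10, GETUTCDATE()));
```

Because stale rows become eligible again once the timeout passes, any available worker can pick up a failed job, not just the one that started it.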

ZippyV
Thanks. Please see my edit. Does this solution require the initial update to be outside of the main transaction?
Synesso
Why do you want to use a transaction?
ZippyV
A: 

One way is to mark the row with a single update statement. If you read the status in the where clause and change it in the set clause, no other process can come in between, because the row will be locked. For example:

declare @pickup_id int
set @pickup_id = -1

set rowcount 1

update  YourTable
set     status = 'picked up'
,       @pickup_id = id
where   status = 'new'

set rowcount 0

return @pickup_id

This uses rowcount to update one row at most. If no row was found, @pickup_id will still be -1.

Andomar
+4  A: 

My answer here shows you how to use tables as queues... http://stackoverflow.com/questions/939831/sql-server-process-queue-race-condition/940001#940001

You basically need "ROWLOCK, READPAST, UPDLOCK" hints
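A sketch of those hints combined in the classic SELECT-then-UPDATE dequeue (table and column names assumed from the question): UPDLOCK reserves the row for this transaction, ROWLOCK keeps the lock granularity small, and READPAST makes other workers skip locked rows instead of blocking on them.

```sql
BEGIN TRANSACTION;

DECLARE @id INT;

-- Skip rows other workers have locked, lock the one we pick
SELECT TOP (1) @id = ID
FROM   EXAMPLE_TABLE WITH (ROWLOCK, READPAST, UPDLOCK)
WHERE  PROCESSED = 0;

IF @id IS NOT NULL
    UPDATE EXAMPLE_TABLE
    SET    PROCESSED = 1
    WHERE  ID = @id;

COMMIT;
```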

gbn
The locking hints are useful for protecting changes between a select and an update. They're not required or useful if you pop the queue with a single update statement
Andomar
@Andomar: this is what is needed: 100% safe concurrency for readers and writers...
gbn
+4  A: 

I recommend you read Using tables as Queues. Properly implemented queues can handle thousands of concurrent users and service as many as half a million enqueue/dequeue operations per minute. Before SQL Server 2005 the solution was cumbersome and involved mixing a SELECT and an UPDATE in a single transaction with just the right mix of lock hints, as in the article linked by gbn. Luckily, since SQL Server 2005 and the advent of the OUTPUT clause, a much more elegant solution is available, and MSDN now recommends using the OUTPUT clause:

You can use OUTPUT in applications that use tables as queues, or to hold intermediate result sets. That is, the application is constantly adding or removing rows from the table.

Basically there are 3 parts of the puzzle you need to get right in order for this to work in a highly concurrent manner:

1) You need to dequeue atomically. You have to find the row, skip any locked rows, and mark it as 'dequeued' in a single atomic operation, and this is where the OUTPUT clause comes into play:

with CTE as (
  SELECT TOP(1) COMMAND, PROCESSED
  FROM TABLE WITH (READPAST)
  WHERE PROCESSED = 0)
UPDATE CTE
  SET PROCESSED = 1
  OUTPUT INSERTED.*;

2) You must structure your table with the leftmost clustered index key on the PROCESSED column. If ID was used as the primary key, move it to the second column in the clustered key. The debate over whether to keep a non-clustered index on the ID column is open, but I strongly favor not having any secondary non-clustered indexes on queues:

CREATE CLUSTERED INDEX cdxTable on TABLE(PROCESSED, ID);

3) You must not query this table by any means other than Dequeue. Trying to do Peek operations, or trying to use the table both as a queue and as a store, will very likely lead to deadlocks and will slow down throughput dramatically.

The combination of atomic dequeue, the READPAST hint when searching for elements to dequeue, and the leftmost clustered-index key on the processing bit ensures very high throughput under a highly concurrent load.
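One way to package the pattern above is a stored procedure that every worker calls in a loop; the table name `CommandQueue` is assumed here (avoiding the reserved word `TABLE` used in the snippet above), with the clustered index on (PROCESSED, ID) as described:

```sql
CREATE PROCEDURE dbo.usp_DequeueCommand
AS
BEGIN
    SET NOCOUNT ON;
    -- Atomic dequeue: find, skip locked rows, mark, and return in one statement
    WITH CTE AS (
        SELECT TOP (1) COMMAND, PROCESSED
        FROM   CommandQueue WITH (ROWLOCK, READPAST)
        WHERE  PROCESSED = 0
        ORDER BY ID)
    UPDATE CTE
    SET    PROCESSED = 1
    OUTPUT INSERTED.*;   -- returns zero rows when the queue is empty
END;
```

A worker that gets back an empty result set knows the queue is drained and can back off before polling again.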

Remus Rusanu