views: 381

answers: 7

I want to use a database table as a queue. I want to insert into it and take elements from it in insertion order (FIFO). My main consideration is performance, because I have thousands of these transactions each second. So I want to use a SQL query that gives me the first element without searching the whole table. I do not remove a row when I read it. Does SELECT TOP 1 ..... help here? Should I use any special indexes?

Thank you.

+3  A: 

Everything depends on your database engine/implementation.

For me, simple queues on tables with the following columns:

id / task / priority / date_added

usually work.

I used priority and task to group tasks, and in case of a duplicated task I chose the one with the higher priority.

And don't worry - for modern databases "thousands" is nothing special.
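
For example, a minimal sketch of such a table in SQL Server (all names and types here are just placeholders):

CREATE TABLE TaskQueue (
    id         INT IDENTITY(1,1) PRIMARY KEY,      -- insertion order
    task       NVARCHAR(100) NOT NULL,             -- what to do
    priority   INT NOT NULL DEFAULT 0,             -- higher value wins for duplicated tasks
    date_added DATETIME NOT NULL DEFAULT GETDATE()
);

-- indexes used for lookup / sorting
CREATE INDEX IX_TaskQueue_Task_Priority ON TaskQueue (task, priority);
CREATE INDEX IX_TaskQueue_DateAdded ON TaskQueue (date_added);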

bluszcz
What are these? I use SQL Server 2008.
Shayan
I think you meant "indexes" in one of the places you said "tables" above (I'd fix it, but I'm not *100%* sure which one's the typo).
T.J. Crowder
Sorry, that should be "columns".
bluszcz
Indexes on all columns in my case - I used all of them to sort / look up.
bluszcz
+5  A: 

If you do not remove your processed rows, then you are going to need some sort of flag that indicates that a row has already been processed.

Put an index on that flag, and on the column you are going to order by.

Partition your table over that flag, so the dequeued transactions are not clogging up your queries.

If you really do get 1,000 messages every second, that would result in 86,400,000 rows a day. You might want to think of some way to clean up old rows.
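
As a rough sketch (table and column names are hypothetical; adjust to your schema):

-- flag marking rows that have already been processed
ALTER TABLE MyQueueTable ADD Processed BIT NOT NULL DEFAULT 0;

-- index on the flag plus the column you order by
CREATE INDEX IX_MyQueueTable_Processed_ID ON MyQueueTable (Processed, ID);

-- periodic cleanup of old, already processed rows (a DateAdded column is assumed here)
DELETE FROM MyQueueTable
WHERE Processed = 1 AND DateAdded < DATEADD(day, -7, GETDATE());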

Peter Lang
What is a flag?
Shayan
By `flag` I mean some column to remember whether a row has already been processed by your client.
Peter Lang
I believe he meant that you can add a column to your tables - maybe Dequeued - that will hold the status of each transaction. Since you are not deleting the rows once you dequeue them, you should have a way to know which transactions to ignore. You could make this a bit field, with 0 for queued and 1 for dequeued.
Waleed Al-Balooshi
... and then partition the table over that field, so the dequeued transactions are not clogging up your queries.
David Schmitt
@David Schmitt: I put your words into my answer as I found no better ones. Hope you don't mind...
Peter Lang
@peter: well done :-)
David Schmitt
+1  A: 

Perhaps adding LIMIT 1 to your SELECT statement would help ... forcing the return after a single match...

Reed Debaets
What's the difference with TOP 1?
Shayan
I know that in SQL Server, TOP 1 is the same thing as LIMIT 1 in Postgres. I imagine all the other vendors would accept one or the other.
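
For reference, the two equivalent forms (table name is just a placeholder):

-- SQL Server
SELECT TOP 1 * FROM MyQueueTable ORDER BY ID;

-- PostgreSQL / MySQL
SELECT * FROM MyQueueTable ORDER BY ID LIMIT 1;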
Matt
I'll be honest, I didn't realize they were equivalent ... I've never used the TOP syntax, only the LIMIT ... this is why I love StackOverflow: even in providing an answer, I learn something new.
Reed Debaets
+2  A: 

Create a clustered index over a date (or autoincrement) column. This will keep the rows in the table roughly in index order and allow fast index-based access when you ORDER BY the indexed column. Using TOP X (or LIMIT X, depending on your RDBMS) will then only retrieve the first X items from the index.

Performance warning: you should always review the execution plans of your queries (on real data) to verify that the optimizer doesn't do unexpected things. Also try to benchmark your queries (again on real data) to be able to make informed decisions.
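
A sketch of what that could look like in SQL Server (table and column names are assumptions):

-- clustered index on the autoincrement column keeps rows in queue order
-- (skip this if the ID column is already the clustered primary key)
CREATE CLUSTERED INDEX IX_MyQueueTable_ID ON MyQueueTable (ID);

-- reads only the first entry of the index instead of scanning the table
SELECT TOP 1 *
FROM MyQueueTable
ORDER BY ID ASC;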

David Schmitt
+2  A: 

This will not be any trouble at all as long as you use something to keep track of the datetime of the insert. See here for the mysql options. The question is whether you only ever need the absolute most recently submitted item or whether you need to iterate. If you need to iterate, then what you need to do is grab a chunk with an ORDER BY statement, loop through, and remember the last datetime so that you can use that when you grab your next chunk.
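
A rough sketch of that iteration pattern, in SQL Server syntax (names and the chunk size are arbitrary; the idea is the same elsewhere):

-- @LastSeen holds the insert datetime of the last row handled in the previous chunk
DECLARE @LastSeen DATETIME = '2010-01-01';

SELECT TOP 100 id, task, date_added
FROM TaskQueue
WHERE date_added > @LastSeen
ORDER BY date_added ASC;

-- remember the greatest date_added of this chunk and use it as @LastSeen next time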

David Berger
+2  A: 

Since you don't delete the records from the table, you need a composite index on (processed, id), where processed is the column that indicates whether the record has been processed.

The best thing would be to create a partitioned table for your records and make the PROCESSED field the partitioning key. This way, you can keep three or more local indexes.

However, if you always process the records in id order, and have only two states, updating the record would mean just taking the record from the first leaf of the index and appending it to the last leaf.

The currently processed record would always have the least id of all unprocessed records and the greatest id of all processed records.
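
A rough sketch of what that could look like in SQL Server (names are hypothetical; partitioning support depends on your edition):

-- one partition for unprocessed (0) rows, one for processed rows
CREATE PARTITION FUNCTION pfProcessed (TINYINT) AS RANGE LEFT FOR VALUES (0);
CREATE PARTITION SCHEME psProcessed AS PARTITION pfProcessed ALL TO ([PRIMARY]);

CREATE TABLE QueueItems (
    id        BIGINT IDENTITY(1,1) NOT NULL,
    payload   NVARCHAR(100) NULL,
    processed TINYINT NOT NULL DEFAULT 0
) ON psProcessed (processed);

-- composite index: unprocessed rows sit together at one end of the index
CREATE INDEX IX_QueueItems_Processed_ID ON QueueItems (processed, id);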

Quassnoi
I'd like to keep the processed field in a different table with a foreign key to this table, to minimize the locking effect on different parts of the program.
Shayan
`@Shayan`: this will severely impact your select performance. And you need to lock the field while processing anyway.
Quassnoi
+2  A: 

I'd use an IDENTITY field as the primary key to provide the uniquely incrementing ID for each queued item, and stick a clustered index on it. This would represent the order in which the items were queued.

To keep the items in the queue table while you process them, you'd need a "status" field to indicate the current status of a particular item (e.g. 0=waiting, 1=being processed, 2=processed). This is needed to prevent an item being processed twice.

When processing items in the queue, you'd need to find the next item in the table NOT currently being processed. This needs to be done in such a way as to prevent multiple processes picking up the same item at the same time, as demonstrated below. Note the table hints UPDLOCK and READPAST, which you should be aware of when implementing queues.

e.g. within a sproc, something like this:

DECLARE @NextID INTEGER

BEGIN TRANSACTION

-- Find the next queued item that is waiting to be processed
SELECT TOP 1 @NextID = ID
FROM MyQueueTable WITH (UPDLOCK, READPAST)
WHERE Status = 0
ORDER BY ID ASC

-- If we've found one, mark it as being processed
IF @NextID IS NOT NULL
    UPDATE MyQueueTable SET Status = 1 WHERE ID = @NextID

COMMIT TRANSACTION

-- If we've got an item from the queue, return it to whatever is going to process it
IF @NextID IS NOT NULL
    SELECT * FROM MyQueueTable WHERE ID = @NextID

If processing an item fails, do you want to be able to try it again later? If so, you'll need to reset the status back to 0, or something similar. That will require more thought.
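
For example, a failed item could simply be handed back to the queue (using the 0/1/2 status values above; @FailedID is hypothetical):

-- put the item back so another worker can pick it up again
UPDATE MyQueueTable SET Status = 0 WHERE ID = @FailedID;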

Alternatively, don't use a database table as a queue, but something like MSMQ - just thought I'd throw that in the mix!

AdaTheDev
Why should I separate SELECT id from SELECT *?
Shayan
You don't have to, you could load all the values that you need into variables at the same time as the first SELECT, and then return them at the end. Also, I've done "SELECT *" for simplicity - just return the fields you actually need.
AdaTheDev
I'd like to keep the processed field in a different table with a foreign key to this table, to minimize the locking effect on different parts of the program. Does this method help? What kind of index should I use for it?
Shayan
You could use the queue table as just a mechanism for queueing, and store more detail on the specifics of what to process in a related table away from the central queue table. That approach can work nicely, especially if the fields you split out are to be updated during processing. It can also be nice if you have different types (schemas) of messages in the queue.
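
A rough sketch of that split (names are placeholders):

-- lean central queue table: only what is needed to dequeue
CREATE TABLE QueueItem (
    ID     INT IDENTITY(1,1) PRIMARY KEY,
    Status TINYINT NOT NULL DEFAULT 0
);

-- message specifics live in a related table, updated during processing
CREATE TABLE QueueItemDetail (
    QueueItemID INT NOT NULL REFERENCES QueueItem (ID),
    Payload     NVARCHAR(MAX) NULL
);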
AdaTheDev