I have a queue in which I can enqueue different threads, so that I can guarantee two things:
- Requests are processed one by one.
- Requests are processed in the order they arrive.
The second point is the important one; otherwise a simple critical section would be enough. I have different groups of requests, and these points only have to hold within a single group. Requests from different groups can run concurrently.
It looks like this:
FTaskQueue.Enqueue('MyGroup');
try
  DoSomething; // running in the context of some thread
finally
  FTaskQueue.Dequeue('MyGroup');
end;
EDIT: I have removed the actual implementation because it hides the problem I want to solve
I need this because I have an Indy based web server that accepts HTTP requests. First I find the corresponding session for the request, then the request (code) is executed for that session. I can get multiple requests for the same session (that is, new requests can arrive while the first one is still being processed), and they must execute one by one, in the correct order of arrival. So I am looking for a generic synchronization queue that can be used in such situations to queue the requests. I have no control over the threads, and each request may be executed in a different thread.
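To illustrate where the queue would be used, the request handler looks roughly like this. This is only a sketch against Indy 10's TIdHTTPServer (units IdContext and IdCustomHTTPServer); TMyWebServer, FindSessionId and ExecuteRequest are placeholders standing in for my real session lookup and request execution:
procedure TMyWebServer.HTTPServerCommandGet(AContext: TIdContext;
  ARequestInfo: TIdHTTPRequestInfo; AResponseInfo: TIdHTTPResponseInfo);
var
  SessionId: string;
begin
  // Indy invokes this handler from one of its own worker threads;
  // which thread handles which request is out of my control.
  SessionId := FindSessionId(ARequestInfo);   // placeholder: map request to session
  FTaskQueue.Enqueue(SessionId);              // serialize all requests of this session
  try
    ExecuteRequest(SessionId, ARequestInfo, AResponseInfo); // placeholder
  finally
    FTaskQueue.Dequeue(SessionId);
  end;
end;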
What is the best (usual) approach to this sort of problem? The difficulty is that Enqueue and Dequeue must be atomic operations so that the correct order is preserved. My current implementation works, but it has a substantial bottleneck.
EDIT: Below is the problem with atomic Enqueue / Dequeue operations.
You would normally do something like this:
procedure Enqueue;
begin
  EnterCriticalSection(FCritSec);
  try
    DoEnqueue;
  finally
    LeaveCriticalSection(FCritSec);
  end;
  BlockTheCurrentThread; // here the thread blocks itself
end;

procedure Dequeue;
begin
  EnterCriticalSection(FCritSec);
  try
    DoDequeue;
    UnblockTheNextThread; // here the thread unblocks another thread
  finally
    LeaveCriticalSection(FCritSec);
  end;
end;
Now the problem here is that this is not atomic. Suppose one thread is already in the queue and a second thread arrives and calls Enqueue. The second thread may leave the critical section and be just about to block itself when the scheduler resumes the first thread, which calls Dequeue and tries to unblock the next (second) thread. But the second thread is not blocked yet, so the unblock does nothing. The second thread then continues and blocks itself, which is wrong, because it will never be unblocked. If the blocking is moved inside the critical section instead, the critical section is never left and we have a deadlock.
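One usual way around this race is to give every enqueued thread its own event object and to register that event in the FIFO while still holding the critical section; only the wait itself happens outside the lock. An event that is signalled before anyone waits on it simply stays signalled, so the "unblock before block" scenario above becomes harmless. Below is a minimal sketch for a single group, assuming a Delphi version with System.SyncObjs and Generics.Collections; TSerialQueue and all member names are made up for the example, and per-group behaviour would keep one such queue per group name in a dictionary protected by its own lock:
uses
  Winapi.Windows, System.SyncObjs, System.Generics.Collections;

type
  TSerialQueue = class
  private
    FLock: TCriticalSection;
    FWaiters: TQueue<TEvent>; // one event per enqueued thread, in arrival order
  public
    constructor Create;
    destructor Destroy; override;
    procedure Enqueue;
    procedure Dequeue;
  end;

constructor TSerialQueue.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FWaiters := TQueue<TEvent>.Create;
end;

destructor TSerialQueue.Destroy;
begin
  FWaiters.Free;
  FLock.Free;
  inherited;
end;

procedure TSerialQueue.Enqueue;
var
  MyTurn: TEvent;
  FirstInLine: Boolean;
begin
  // Manual-reset event, initially unsignalled; once set it stays set,
  // so a SetEvent that happens before our WaitFor is not lost.
  MyTurn := TEvent.Create(nil, True, False, '');
  FLock.Enter;
  try
    FirstInLine := FWaiters.Count = 0;  // nobody ahead of us
    FWaiters.Enqueue(MyTurn);           // checking and registering happen under the lock
  finally
    FLock.Leave;
  end;
  if not FirstInLine then
    MyTurn.WaitFor(INFINITE);           // block outside the lock until it is our turn
end;

procedure TSerialQueue.Dequeue;
var
  Finished: TEvent;
begin
  FLock.Enter;
  try
    Finished := FWaiters.Dequeue;       // remove our own entry from the head
    if FWaiters.Count > 0 then
      FWaiters.Peek.SetEvent;           // wake the thread that arrived next
  finally
    FLock.Leave;
  end;
  Finished.Free;
end;
With one event per waiter, the order of SetEvent and WaitFor no longer matters, which is exactly the property the naive version above is missing, and the critical section is only ever held for the short queue manipulation, never while a thread is blocked.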