Sure, you can use it, but it will only proceed when all of the semaphores are available at the same time. Depending on how the rest of your application is structured, there may indeed be starvation issues. For example, if you have two resources, A and B, and three threads:
- Continually take resource A, work with it for one second, then release it and loop
- Continually take resource B, work with it for one second, then release it and loop
- Wait for both A and B to be available
The third thread can easily end up waiting forever for A and B to be available simultaneously.
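To make that concrete, here's a rough Python sketch of the scenario (the thread bodies are made up, and the polling loop is only a stand-in for whatever atomic multi-wait primitive you're actually using):

```python
import threading
import time

sem_a = threading.Semaphore(1)
sem_b = threading.Semaphore(1)

def hog(sem):
    # Threads 1 and 2: continually take one resource, hold it for a
    # second, then release it and loop.
    while True:
        with sem:
            time.sleep(1)

def wait_for_both():
    # Thread 3: only proceeds when A and B are free at the same moment.
    # (A polling loop stands in for an atomic multi-wait here.)
    while True:
        if sem_a.acquire(blocking=False):
            if sem_b.acquire(blocking=False):
                print("got both A and B")
                sem_b.release()
                sem_a.release()
                return
            sem_a.release()
        time.sleep(0.01)  # back off and retry; this may spin forever

threading.Thread(target=hog, args=(sem_a,), daemon=True).start()
threading.Thread(target=hog, args=(sem_b,), daemon=True).start()
wait_for_both()  # with A and B almost always held, this rarely (if ever) returns
```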
Depending on your application, it might be better to simply take each semaphore one at a time, which avoids this starvation issue but introduces the traditional deadlocking issues. However, if you're sure these locks will be available most of the time, it may be safe (but it might also be a ticking time bomb just waiting for your application to come under real load...)
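For example, two threads that each take the same pair of semaphores one at a time, but in opposite orders, can deadlock (a minimal sketch; the names are made up):

```python
import threading

sem_a = threading.Semaphore(1)
sem_b = threading.Semaphore(1)

def worker_1():
    with sem_a:        # takes A first...
        with sem_b:    # ...then B
            pass       # work with both

def worker_2():
    with sem_b:        # takes B first...
        with sem_a:    # ...then A; if each thread already holds its first
            pass       # semaphore, both now wait on the other forever
```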
Given your sample code, another option would be to create a global ordering over the semaphores - say, an ordering by name - and always make sure to acquire them in that order. If you do that, you can perform a multi-lock simply by locking each semaphore one-by-one in ascending order.
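A minimal sketch of that idea, assuming the semaphores live in a name-keyed dictionary (the registry and helper names here are made up):

```python
import threading

# Hypothetical registry of named semaphores; the names define the
# global acquisition order.
semaphores = {name: threading.Semaphore(1) for name in "ABCDE"}

def acquire_all(names):
    # Lock each semaphore one by one, in ascending name order.
    for name in sorted(names):
        semaphores[name].acquire()

def release_all(names):
    # Release in reverse (descending) order of acquisition.
    for name in sorted(names, reverse=True):
        semaphores[name].release()

# Usage: a multi-lock on B and A cannot deadlock against another thread
# multi-locking A and C, because everyone respects the same order.
acquire_all(["B", "A"])
try:
    pass  # work with A and B
finally:
    release_all(["B", "A"])
```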
In this case, the order of release doesn't strictly matter - but if you release out of order, you should release all locks 'after' the lock you just released before acquiring any more (this rule of thumb should give you deadlock safety; it may be possible to relax it further with detailed analysis). The recommended approach is to release in reverse order of acquisition where possible, in which case you can parlay it into further acquisition at any point. For example:
- Acquire lock A
- Acquire lock B
- Acquire lock C
- Release lock C
- Acquire lock D
- Release lock B (now don't acquire anything until you release lock D!)
- Release lock D
- Acquire lock E
- Release lock E
- Release lock A
As long as everything follows these rules, deadlock should not be possible, as cycles of waiters cannot form.
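Expressed as code, that sequence looks like this (again a sketch; the semaphores are simply named for their position in the global order):

```python
import threading

# Five semaphores whose global order is A < B < C < D < E.
A, B, C, D, E = (threading.Semaphore(1) for _ in range(5))

A.acquire()
B.acquire()
C.acquire()
C.release()   # reverse-order release, so we're free to acquire again immediately
D.acquire()   # still ascending relative to what we hold (A and B)
B.release()   # out-of-order release: D was acquired after B, so we must
D.release()   # release D before acquiring anything else
E.acquire()   # we now hold only A, and E comes after A, so this is fine
E.release()
A.release()
```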
The downside of this approach is that it may delay other threads by holding a lock while waiting for another. This won't last forever, though; with the three threads above we might see this scenario, for example:
- At the start, Thread 2 holds B and Thread 1 holds A.
- Thread 3 blocks on A.
- (time passes)
- Thread 1 releases A.
- Thread 3 locks A, blocks on B.
- Thread 1 blocks on A.
- (time passes)
- Thread 2 releases B.
- Thread 3 locks B, does its work, then releases both B and A.
- Thread 1 locks A, makes progress.
As you can see, there was some downtime in which Thread 1 was blocked on A even though no real work was being done with it. However, by doing this we have greatly improved Thread 3's chances of making progress at all.
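For completeness, here is roughly what Thread 3 looks like under the ordered, one-at-a-time scheme in that scenario (a sketch with made-up names; it may sit on A while blocked on B, which is what delays Thread 1):

```python
import threading
import time

sem_a = threading.Semaphore(1)
sem_b = threading.Semaphore(1)

def hog(sem):
    # Threads 1 and 2: continually hold one resource for a second at a time.
    while True:
        with sem:
            time.sleep(1)

def thread_3():
    # Acquire in the global (alphabetical) order: A, then B.
    # Unlike the wait-for-both version, it only ever needs one
    # semaphore to become free at a time, so in practice it gets both.
    sem_a.acquire()
    sem_b.acquire()
    print("thread 3 got both A and B")
    sem_b.release()
    sem_a.release()

threading.Thread(target=hog, args=(sem_a,), daemon=True).start()
threading.Thread(target=hog, args=(sem_b,), daemon=True).start()
thread_3()
```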
Whether this is a good trade-off will depend on your application - if you can definitively say that multiple threads never contend for the same locks at the same time, it might not even matter. But there's no one right way :)