views: 86
answers: 3
What is the most efficient way of implementing queues to be read by another thread/process?

I'm thinking of using a basic MySQL table and polling it in a sleep loop. This seems the most scalable approach (the queue doesn't even have to be on the same server), but it might generate too many queries against the DB.
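The polling approach described above can be sketched roughly as follows. This is not from the thread; it is a minimal illustration in Python using an in-memory SQLite table as a stand-in for the MySQL table (the table name `jobs` and column names are made up). The key point is that claiming a job must be atomic, so two workers never grab the same row:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE jobs (
    id      INTEGER PRIMARY KEY,
    payload TEXT    NOT NULL,
    claimed INTEGER NOT NULL DEFAULT 0)""")
conn.execute("INSERT INTO jobs (payload) VALUES ('send-email'), ('resize-image')")
conn.commit()

def claim_job(conn):
    """Atomically claim the oldest unclaimed job, or return None."""
    while True:
        row = conn.execute(
            "SELECT id, payload FROM jobs "
            "WHERE claimed = 0 ORDER BY id LIMIT 1").fetchone()
        if row is None:
            return None
        # The claim only succeeds if no other worker beat us to this row;
        # otherwise rowcount is 0 and we retry with the next candidate.
        cur = conn.execute(
            "UPDATE jobs SET claimed = 1 WHERE id = ? AND claimed = 0",
            (row[0],))
        conn.commit()
        if cur.rowcount == 1:
            return row

def poll(conn, interval=1.0):
    """The 'polling on sleep' loop from the question."""
    while True:
        job = claim_job(conn)
        if job is not None:
            return job
        time.sleep(interval)
```

With real MySQL you would typically use `SELECT ... FOR UPDATE` inside a transaction instead of the retry loop, but the claim-then-verify pattern above is the same idea.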

+2  A: 

This is one of those things that is simple to write yourself to your exact specifications. I wrote a toy one here:

http://github.com/jrockway/app-queue

I am not sure it compiles anymore, as AnyEvent::Subprocess has changed significantly since I wrote it. But you can steal the ideas.

Basically, I think an RPC-style infrastructure is the best. You have a server that handles keeping the data. Then clients connect and add data or remove data via RPC calls. This gives you ultimate flexibility with the semantics. You can be "transactional" so that if a client takes data and then never says "hey, I am done with it", you can assume the client died and give the job to another client. You can also ensure that each job is only run once.
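The "transactional" hand-out semantics described above can be sketched with a lease-based queue. This is not jrockway's code; it is a hypothetical in-memory model (class and method names invented) of the server side: a taken job is leased rather than deleted, and if the client never acknowledges it before the lease expires, the job becomes available to another client:

```python
import time

class LeaseQueue:
    """Sketch of a queue server with transactional hand-out: jobs are
    leased on take(), requeued on lease expiry, deleted only on ack()."""

    def __init__(self, lease_seconds=30):
        self.lease_seconds = lease_seconds
        self.jobs = {}     # job_id -> payload
        self.leases = {}   # job_id -> lease expiry time
        self.next_id = 0

    def put(self, payload):
        self.next_id += 1
        self.jobs[self.next_id] = payload
        return self.next_id

    def take(self, now=None):
        """Hand out a job that is unleased, or whose lease has expired
        (i.e. whose previous client presumably died)."""
        now = time.monotonic() if now is None else now
        for job_id, payload in self.jobs.items():
            expiry = self.leases.get(job_id)
            if expiry is None or expiry <= now:
                self.leases[job_id] = now + self.lease_seconds
                return job_id, payload
        return None

    def ack(self, job_id):
        """The client says 'hey, I am done with it': delete for good,
        ensuring the job is only ever completed once."""
        self.jobs.pop(job_id, None)
        self.leases.pop(job_id, None)
```

A real implementation would put RPC calls (`put`/`take`/`ack`) in front of this and persist the state, but the lease bookkeeping is the core of the "assume the client died" semantics.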

Anyway, making a queue work with a relational database table involves a bit of effort. You should use something like KiokuDB for the persistence. (You can physically store the data in MySQL if you desire, but this provides a nicer Perl API to that.)

jrockway
+3  A: 

You have several options, and it really depends on what you are trying to get the system to do.

  • fork child processes and communicate with them over their stdin/stdout pipes.
  • create a named pipe (FIFO) or Unix domain socket on the file system, like /tmp/mysql.sock. This is basically using sockets to communicate across processes.
  • Set up a message broker. I'd recommend giving ActiveMQ a try, and using the Stomp client for Perl. This is probably your most scalable solution.
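The first option above, forking workers and talking to them over pipes, can be sketched as follows. This is not from the answer; it is a minimal Python illustration (the thread otherwise assumes Perl, where the same `pipe`/`fork` calls exist). The parent sends jobs down one pipe and the child returns results on another:

```python
import os

# One pipe for jobs (parent -> child), one for results (child -> parent).
job_r, job_w = os.pipe()
res_r, res_w = os.pipe()

pid = os.fork()
if pid == 0:
    # Child: the worker. Close the ends it doesn't use, then read jobs
    # until the parent closes its write end (EOF).
    os.close(job_w)
    os.close(res_r)
    with os.fdopen(job_r) as jobs, os.fdopen(res_w, "w") as results:
        for line in jobs:
            # "Process" the job; here we just uppercase it.
            results.write(line.strip().upper() + "\n")
    os._exit(0)

# Parent: the producer.
os.close(job_r)
os.close(res_w)
with os.fdopen(job_w, "w") as out:
    out.write("resize-image\nsend-email\n")
with os.fdopen(res_r) as results:
    done = [line.strip() for line in results]
os.waitpid(pid, 0)
```

Closing the unused pipe ends in each process matters: the child's read loop only terminates because the parent's close of `job_w` delivers EOF.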
brianegge
+2  A: 

In PostgreSQL you could use the NOTIFY/LISTEN combination; after issuing LISTEN you only need to wait on the PG connection socket, so there is no polling at all.
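The mechanism looks roughly like this (a sketch, not from the answer; the channel name `job_ready` and the `jobs` table are made up):

```sql
-- Session A (the worker) registers interest, then blocks on the
-- connection socket waiting for a notification:
LISTEN job_ready;

-- Session B (the producer) enqueues work and wakes all listeners:
INSERT INTO jobs (payload) VALUES ('send-email');
NOTIFY job_ready;
```

The notification only says "something changed"; the woken worker still reads the actual job rows from the table.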

MkV