I have a requirement that a single JMS message sent by a client must be delivered reliably (exactly once) to two systems. These two systems are not HA-enabled, so the best approach I've come up with is to:
1. create a single queue that the client posts to;
2. set up two "intermediate" queues;
3. use a custom "DuplicatorMDB" that reads messages from the client queue and posts them to both intermediate queues within the same transaction (a rough sketch follows the diagram below).
    client->JMSDQ->DuplicatorMDB->Q1->MDB->System1
                                \->Q2->MDB->System2
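I haven't found anything built in, so the DuplicatorMDB I have in mind would look roughly like the sketch below. This is only a sketch: the JNDI names (jms/JMSDQ, jms/XAConnectionFactory, jms/Q1, jms/Q2) are placeholders, the destination binding could instead go into weblogic-ejb-jar.xml, and it assumes an XA connection factory so that the receive and both sends share one transaction.

    import javax.annotation.Resource;
    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    // Reads from the client queue and re-sends the message to both intermediate queues.
    // JNDI names are placeholders; in WebLogic the destination can also be bound via
    // destination-jndi-name in weblogic-ejb-jar.xml.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/JMSDQ")
    })
    public class DuplicatorMDB implements MessageListener {

        // XA connection factory so the sends enlist in the MDB's container transaction (placeholder name)
        @Resource(mappedName = "jms/XAConnectionFactory")
        private ConnectionFactory connectionFactory;

        @Resource(mappedName = "jms/Q1")
        private Queue q1;

        @Resource(mappedName = "jms/Q2")
        private Queue q2;

        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void onMessage(Message message) {
            Connection connection = null;
            try {
                connection = connectionFactory.createConnection();
                // Inside the container transaction the session arguments are ignored;
                // the receive from JMSDQ and both sends commit or roll back together.
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(null);
                producer.send(q1, message);
                producer.send(q2, message);
            } catch (JMSException e) {
                // Rethrow to roll back the transaction; the message stays on JMSDQ for redelivery.
                throw new RuntimeException("Failed to duplicate message", e);
            } finally {
                if (connection != null) {
                    try { connection.close(); } catch (JMSException ignore) { }
                }
            }
        }
    }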
Is there any existing functionality like that? What would be the proper way to balance the system to keep it stable if one or both of the backend systems are down?
The application server is WebLogic 10.
I can't use topics for this because in a cluster topics will cause too much message duplication. If we have 2 instances, then with topics it'll go like this:
    client->Topic-->MDB1@server1->System1
                 \->MDB2@server1->System2
                 \->MDB1@server2->System1
                 \->MDB2@server2->System2
Thus every message would be delivered twice to System1 and twice to System2, and with 8 servers in the cluster each message would be delivered 8 times to each system. This is what I'd really like to avoid...
Finally I got some time to test it, and here is what I observed. Setup: 2 nodes in a cluster, 2 JMS servers (jms1 on node1, jms2 on node2), a distributed topic dt, and an MDB with a durable subscription and jms-client-id=durableSubscriber. Started the system: 0 messages, mdb@node1 is up, mdb@node2 tries to connect periodically but can't because "Client id, durableSubscriber, is in use". As expected.
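For reference, the durable-subscriber MDB in this test is configured roughly as sketched below. The destination JNDI name jms/dt is a placeholder, the destination binding can also be done in weblogic-ejb-jar.xml, and the client id itself ("durableSubscriber") comes from the jms-client-id element there.

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // Durable subscriber on the distributed topic. The subscription identity comes from
    // jms-client-id=durableSubscriber in weblogic-ejb-jar.xml; the destination name is a placeholder.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/dt"),
        @ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "Durable")
    })
    public class DistributedTopicMDB implements MessageListener {
        public void onMessage(Message message) {
            // hand the message off to System1 / System2 here
        }
    }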
Sent in 100 messages:
jms1@dt messages current = 0, messages total = 100, consumers current = 1
I can see that node1 processed 100 messages.
jms2@dt messages current = 100, messages total = 100, consumers current = 1
i.e. "duplicate" messages are pending in the topic.
Sent in another 100 messages: 100 were processed on node1, 200 are now pending on node2.
Rebooted node1, mdb@node2 reconnected to dt and started processing "pending" messages. 200 messages were processed on node2.
After node1 came back up, mdb@node1 can't connect to dt, while mdb@node2 stays connected.
jms1@dt messages current = 0, messages total = 0, consumers current = 0
jms2@dt messages current = 0, messages total = 200, consumers current = 1
Sent in 100 more messages; I see that all 100 are processed on node2 and discarded on node1.
jms1@dt messages current = 0, messages total = 100, consumers current = 0
jms2@dt messages current = 0, messages total = 300, consumers current = 1
Now I reboot node2, and mdb@node1 reconnects to dt. After the reboot, mdb@node2 reconnects to dt and mdb@node1 gets disconnected from it.
jms1@dt messages current = 0, messages total = 100, consumers current = 1
jms2@dt messages current = 0, messages total = 0, consumers current = 1
I send in 100 messages; all are processed on node2 and stored in the topic on node1:
jms1@dt messages current = 100, messages total = 200, consumers current = 1
jms2@dt messages current = 0, messages total = 0, consumers current = 1
Then I shut down node2, and I see the 100 "pending" messages being processed on node1 after mdb@node1 reconnects to the topic.
So the result is: I sent 400 messages, and 700 were processed by the MDB, 300 of which were duplicates.
It looks like MDB reconnection works as expected, but messages may be duplicated if the node hosting the "active" MDB goes down.
This might be a bug or a feature of the WebLogic JMS implementation.