I'm looking for opinions from you all. I have a web application that needs to record data into another web application's database. I'd prefer not to use HTTP GET requests against the second application because of latency. I'm looking for a fast way to save records on the second application, and I came across the idea of "fire and forget". Would JMS suit this scenario? From my understanding JMS guarantees message delivery, but 100% guaranteed delivery is not important to me as long as I can serve as many requests as possible. Say I need to make at least 1000 requests per second to the second application: should I use JMS, HTTP requests, or XMPP?

+2  A: 

I think you're misunderstanding networking in general. There's positively no reason that an HTTP GET has to be any slower than anything else, and if HTTP takes advantage of keep-alives it's faster than most options.

JMX isn't a protocol; it's a specification that can sit on top of many other protocols, possibly including HTTP or XMPP.

In the end, at the levels where Java will operate, there's either UDP or TCP. TCP has more overhead but guarantees delivery (via retransmission) and ordering. UDP offers neither guaranteed delivery nor in-order delivery. If you can deal with UDP's limitations you'll find it "faster", and if you can't, then any lightweight TCP wrapper (of which HTTP is one) is just about the same.
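For illustration, the raw UDP option looks roughly like this in Java; it is a true fire-and-forget send with no delivery or ordering guarantees (the host and port below are placeholders):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    // Fire-and-forget over UDP: send() returns immediately, and nothing is
    // retransmitted or ordered. Only suitable if occasional loss is acceptable.
    public class UdpFireAndForget {
        public static void main(String[] args) throws Exception {
            byte[] payload = "record-data".getBytes(StandardCharsets.UTF_8);
            try (DatagramSocket socket = new DatagramSocket()) {
                DatagramPacket packet = new DatagramPacket(
                        payload, payload.length,
                        InetAddress.getByName("second-app.example.com"), 9999);
                socket.send(packet);
            }
        }
    }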

Ry4an
My question is: when I enable keep-alive on the server side, do I need to specifically change my programming on the client side to use this feature (open/close HttpConnection.GET)?
cometta
I think that the OP meant JMS, not JMX
Pascal Thivent
@pascal, you're right, though I stick by my analysis. Either you're sending it over TCP or over UDP, and that's the only decision that makes any difference. Within TCP or UDP you get all the same behaviors depending on configuration.
Ry4an
@cometta Java has keep-alives on by default on the client side, and most any server solution you'd be using does too, so you probably don't need to do anything. You can verify this with something like Wireshark/Ethereal, where you'll notice that multiple request/response pairs go over the same TCP socket. I do think that Java's default behavior used to be to re-use a TCP connection for only N request/response pairs, where N was something like 10, but that was tunable via a property.
Ry4an
+2  A: 

Your requirements seem to be:

  • one client and one server (inferred from your first sentence),
  • HTTP is mandatory (inferred from your talking about a web application database),
  • 1000 or more record updates per second, and
  • individual updates do not need to be acknowledged synchronously (you are willing to use a "fire and forget" approach).

The way I would approach this is to have the client threads queue the updates internally, and implement a separate client thread that periodically assembles the queued updates into one HTTP request and sends it to the server. If necessary, the server can send a response that indicates the status of each individual update.

Batching eliminates the impact of latency on the client, and potentially allows the server to process the updates more efficiently.
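As a rough sketch of that idea (the class name, endpoint URL, and newline-separated batch format are placeholders; a real implementation would need proper error handling and retries):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    // Client-side batching: request-handling threads enqueue records, and a
    // single background thread periodically drains the queue and sends one
    // HTTP POST per batch to the second application.
    public class BatchingSender {

        private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private final URL target;

        public BatchingSender(String targetUrl) throws Exception {
            this.target = new URL(targetUrl);
            // Flush queued records every 100 ms.
            scheduler.scheduleAtFixedRate(this::flush, 100, 100, TimeUnit.MILLISECONDS);
        }

        // Called by request-handling threads; returns immediately ("fire and forget").
        public void enqueue(String record) {
            queue.offer(record);
        }

        private void flush() {
            List<String> batch = new ArrayList<>();
            queue.drainTo(batch);
            if (batch.isEmpty()) {
                return;
            }
            try {
                HttpURLConnection conn = (HttpURLConnection) target.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "text/plain; charset=utf-8");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(String.join("\n", batch).getBytes(StandardCharsets.UTF_8));
                }
                // The response could carry per-update statuses if the server provides them.
                conn.getResponseCode();
                conn.disconnect();
            } catch (Exception e) {
                // A real implementation would log and/or re-queue the failed batch.
                e.printStackTrace();
            }
        }
    }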

Stephen C
+1  A: 

The big difference between HTTP and JMS or XMPP is that JMS and XMPP allow asynchronous fire-and-forget messaging, where the client does not really know when or if a message will reach its destination, and does not expect a response or an acknowledgment from the receiver. This would allow the first application to respond quickly regardless of the second application's processing time.

Asynchronous messaging is usually preferred for high-volume distributed messaging where the message consumers are slower than the producers. I can't say if this is exactly your case here.
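To make the fire-and-forget idea concrete, a minimal JMS send could look something like this (the queue name is a placeholder, the ConnectionFactory comes from your JMS provider, and in practice connections and sessions would be pooled rather than created per message):

    import javax.jms.*;

    // Publish a record to a queue and return immediately; the second
    // application consumes messages at its own pace.
    public class RecordPublisher {

        private final ConnectionFactory connectionFactory;

        public RecordPublisher(ConnectionFactory connectionFactory) {
            this.connectionFactory = connectionFactory;
        }

        public void publish(String record) throws JMSException {
            Connection connection = connectionFactory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("records"); // placeholder queue name
                MessageProducer producer = session.createProducer(queue);
                // NON_PERSISTENT trades delivery guarantees for throughput, which
                // matches "100% delivery is not required".
                producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
                producer.send(session.createTextMessage(record));
            } finally {
                connection.close();
            }
        }
    }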

Pascal Thivent
I agree with you, but from the other comments, HTTP GET with keep-alive is faster?
cometta
My scenario is just like what you described: multiple producers, one slow(er) consumer.
cometta
@cometta I'd like to read @Stephen's and @Ry4an's points of view on this, but 1. with HTTP, the client has to wait until the end of the processing; 2. if the processing time is much bigger than the "network" time, then the latter doesn't really matter; 3. JMS implementations might use UDP.
Pascal Thivent
A: 

If you have full control and the two web applications run in the same web container, and hence in the same JVM, I would suggest using JNDI to give both web applications access to a common data structure (a list?) that allows concurrent modification, namely allowing application A to add new entries while application B simultaneously consumes the oldest entries.

This is most likely the fastest way possible.

Note that you should limit the objects you put in the shared structure to classes found in the JRE, or you will most likely run into class cast exceptions. These can be circumvented, but the easiest approach is most likely to just transfer strings in the common data structure.
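A hypothetical sketch of that approach, keeping only Strings (and a JRE collection class) in the shared structure; the JNDI name is a placeholder, and how objects may be bound varies by container:

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class SharedQueueExample {

        private static final String JNDI_NAME = "java:global/sharedRecordQueue"; // placeholder

        // Application A: bind the queue once at startup, then add entries.
        public static void producerSide() throws NamingException {
            InitialContext ctx = new InitialContext();
            Queue<String> queue = new ConcurrentLinkedQueue<>();
            ctx.bind(JNDI_NAME, queue);
            queue.offer("record-1");
        }

        // Application B: look up the same queue and consume the oldest entries.
        public static void consumerSide() throws NamingException {
            InitialContext ctx = new InitialContext();
            @SuppressWarnings("unchecked")
            Queue<String> queue = (Queue<String>) ctx.lookup(JNDI_NAME);
            String record;
            while ((record = queue.poll()) != null) {
                // process the record
            }
        }
    }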

Thorbjørn Ravn Andersen
They are in different JVMs, so I can't do this.
cometta
If so, then you will need some kind of interprocess connection, and HTTP will most likely be the least complex. You could consider using a servlet container suited for LOTS of connections per second, or bundling your transmissions as others have suggested.
Thorbjørn Ravn Andersen