
Does anyone have experience with Kodo JDO's distributed cache mechanism? I would like to know:

1) What is the latency between distributed cache updates? That is, if two users are hitting two separate caches (on two different JVMs), both using the same data, and one makes an update, when will the other user, on the other cache, see that update?

2) How much data is transferred between JVMs? When an update is made in one cache, does it simply tell the other caches to drop the affected objects by sending them the primary keys to flush? (The concern is the network traffic/overhead of managing the distributed cache.)

3) When external feeds update your database throughout the day (i.e. updates that do not come in through your application), how easy is it to invoke a cache flush externally?

Our application runs in a WebLogic cluster of 12 JVMs, and we are considering enabling the distributed cache to improve the performance of pulling large object graphs from our database -- which are currently not cached -- but we would like some real-world experience with #1, #2, and #3. Thanks.

A: 

This is a partial answer, but I believe still helpful:

"When used in conjunction with a kodo.event.RemoteCommitProvider, commit information is communicated to other JVMs via JMS or TCP, and remote caches are invalidated based on this information." (From http://bit.ly/8NP5qE)

It is not stated whether this commit notification is included as part of the original transaction (one would hope), nor what the lag time or overhead of this operation is, nor how well it scales (e.g., how does it perform if you're coordinating 15 JVMs and multiple users are updating the same data?).
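For what it's worth, enabling this machinery is a configuration change rather than code. A minimal sketch of the relevant `kodo.properties` entries, based on the property names in the Kodo documentation (the addresses and cache size here are placeholder values, not recommendations):

```properties
# Enable Kodo's in-memory data cache
kodo.DataCache: true(CacheSize=5000)

# Broadcast commit notifications to peer JVMs over TCP;
# the docs also describe a jms(...) provider, which may fit
# a WebLogic cluster better since a JMS topic is already available
kodo.RemoteCommitProvider: tcp(Addresses=10.0.0.1;10.0.0.2)
```

Which provider you pick (TCP vs. JMS) likely affects the latency and overhead asked about in #1 and #2, so that would be worth benchmarking in your own cluster.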
