A: 

I'm not familiar with their (Terracotta) implementation, but from a JMM standpoint it should take a cluster-wide lock. However, this example is very simple (just a change of a reference), which may allow it to be converted into something more like a volatile write, avoiding locking entirely.

But if you do non-trivial work in your synchronized block, then I would assume that TC pessimistically takes a cluster-wide lock at the start of the synchronized block. If they didn't, they would be at odds with the JMM spec as I understand it.

In other words, your option #1. So, be careful what you share in the cluster, and use immutable objects and java.util.concurrent.* data structures when you can - the latter is getting special intrinsic love in TC.
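
To make the distinction concrete, here is a minimal sketch contrasting the two kinds of synchronized block; the Person and Registry classes are hypothetical, and Terracotta's actual optimizations are implementation details:

    class Person {
        private Person predecessor;
        void setPredecessor(Person p) { this.predecessor = p; }
    }

    public class Registry {
        private Person current; // shared root across the cluster

        // Trivial: just swaps a reference. A clustered JVM could treat
        // this like a volatile write and avoid a full cluster-wide lock.
        public synchronized void setCurrent(Person p) {
            this.current = p;
        }

        // Non-trivial: a read-then-write sequence that must appear atomic,
        // so a pessimistic cluster-wide lock would be expected here.
        public synchronized void promote(Person p) {
            p.setPredecessor(this.current);
            this.current = p;
        }
    }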

Christian Vest Hansen
+2  A: 

Assume that everyone else has a reference to your object and can touch it while/before/after you do. Thus the solution is to add locks:

  • obtain lock
  • modify the object
  • release lock

And that's essentially what synchronized does... threads queue on the object's monitor, so the synchronized method can't be executed by more than one thread at a time... but the underlying object can still be touched by any code that holds a reference to it and doesn't synchronize (see the sketch below).
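
As a sketch of that obtain/modify/release pattern (the Person class and its name field are made up for illustration), the explicit-lock form and the synchronized form look like this:

    import java.util.concurrent.locks.ReentrantLock;

    class Person {
        private final ReentrantLock lock = new ReentrantLock();
        private String name;

        void rename(String newName) {
            lock.lock();                 // obtain lock
            try {
                this.name = newName;     // modify the object
            } finally {
                lock.unlock();           // release lock
            }
        }

        // Equivalent with synchronized: the monitor is acquired on entry
        // and released on exit. Note it only excludes callers that also
        // synchronize on this object; unsynchronized code can still touch
        // the object through any reference it holds.
        synchronized void renameSync(String newName) {
            this.name = newName;
        }
    }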


Achille
This is all true but it doesn't actually answer the question at all?
Alex Miller
+4  A: 

The answer is not really 1 or 2. Objects are striped across the server mirror groups. The first time this field is set, a transaction is created, and the mirror group chosen for that first transaction will "own" the object from then on.

With respect to both 1 and 2, not all active server groups need to be updated, so there is no need to wait for either of those conditions.

You can find more info in the Terracotta documentation on configuring the Terracotta server array.

From a locking point of view, the clustered lock on this Person object would be held (mutual exclusion across the cluster) while performing the object modification. The scope of the synchronized block forms the transaction mentioned above. In the getObj() method, you could configure this as a read lock, which would allow multiple concurrent readers across the cluster.
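
As a rough sketch (the PersonHolder class is hypothetical, and the actual lock configuration lives in tc-config.xml rather than in the code), the methods involved might look like this:

    class Person {}

    class PersonHolder {
        private Person obj;

        // Write path: held under a clustered write lock; the scope of the
        // synchronized block delimits the clustered transaction, which is
        // committed when the lock is released.
        public synchronized void setObj(Person p) {
            this.obj = p;
        }

        // Read path: in Terracotta this method could be configured with a
        // read-level lock (e.g. via an autolock in tc-config.xml), allowing
        // multiple concurrent readers across the cluster.
        public synchronized Person getObj() {
            return obj;
        }
    }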

Alex Miller