views:

426

answers:

3

I have been researching the Java Memory Model all day today, in order to understand in detail the problems with the JMM pre-Java 5 and the changes made by JSR-133 implemented in Java 5.

What I cannot seem to find a definitive answer on is the scope of cache invalidation and flushing required on a particular synchronize.

Must all CPU registers and caches be invalidated when entering any synchronized portion of code and all flushed to main RAM when leaving, or is the JVM allowed to only invalidate those variables actually read, and flush only those actually written during the synchronized block of code?

If the former, why is the JMM so pedantic about insisting that the memory barrier only occurs between two threads which synchronize on exactly the same object?

If the latter, is there any good document that explains the details of how this is accomplished? (I would suppose the underlying implementation would have to set a "bypass caches" flag at the CPU level at the start of a synchronized block and clear it at the end, but I could be way off base.)
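For concreteness, here is a minimal sketch of the kind of code I have in mind (the class and field names are invented purely for illustration): only count is touched inside the synchronized block, while unrelated never is.

    // Hypothetical example for illustration only: inside the synchronized
    // block only `count` is ever read or written; `unrelated` is not.
    // The question is whether the JVM must conceptually invalidate/flush
    // *everything* at monitor entry/exit, or only what the block touches.
    public class ScopeQuestion {
        private final Object lock = new Object();
        private int count;       // read and written inside the synchronized block
        private int unrelated;   // never accessed inside the synchronized block

        public void increment() {
            synchronized (lock) { // monitor enter: what must be invalidated here?
                count++;
            }                     // monitor exit: what must be flushed here?
        }

        public void touchUnrelated() {
            unrelated++;          // updated outside any synchronization
        }
    }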

A: 

You need to understand that the pre-5.0 JMM was never really implemented exactly, because it wasn't actually feasible.

So pre-5.0 you did technically have to write everything out to shared memory. In 1.5 (actually 1.4) this was relaxed. In particular, if a lock cannot escape a thread, then the JVM is entitled to treat it as a no-op. Further, an unlock followed by a lock of the same lock can be coalesced, which was not allowed under the old JMM. For an escaped lock, the JVM often has to be pessimistic and flush more than is technically necessary.
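To illustrate the "lock that cannot escape a thread" point, here is a rough sketch (the names are mine, not from the answer): the monitor object is only reachable through a local variable, so under the JSR-133 rules the JVM is free to elide the lock entirely.

    // Sketch only: `local` never escapes this method, so no other thread can
    // ever synchronize on it. Under JSR-133 the JVM may treat the monitor
    // enter/exit as a no-op (lock elision) after escape analysis.
    public class ElisionSketch {
        public int sum(int[] values) {
            Object local = new Object(); // never published to another thread
            int total = 0;
            synchronized (local) {       // eligible for elision
                for (int v : values) {
                    total += v;
                }
            }
            return total;
        }
    }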

Tom Hawtin - tackline
Thanks Tom; but I am actually primarily interested in the scope of invalidating and flushing memory caches.
Software Monkey
A: 

I'd suggest you start with:

Neil Coffey
On the JMM mailing list: Crap! That's a lot of reading!
Software Monkey
sorry, a stray character got into the link -- fixed now
Neil Coffey
p.s. the mailing list is searchable!
Neil Coffey
Oh, and yes, whatever you do, it's a lot of reading, because this is quite complicated stuff. That's why you have JVMs and program in Java rather than assembler, so that in general, you don't have to worry too much about all of this nitty-gritty (though I confess I'm nerdy enough to also find it interesting).
Neil Coffey
working link for JSR-133 cookbook: http://g.oswego.edu/dl/jmm/cookbook.html
QuickRecipesOnSymbianOS
+2  A: 

There is a very nice tech talk on the Java Memory Model. If you dislike videos, google 'happens before' in the context of the Java Memory Model.

Basically, all writes are visible to other threads if there is a happens-before relationship. Let's assume that thread A writes to field x and thread B reads from it; a happens-before relationship is established between the write and the read if (a sketch of the first two cases follows the list):

  • x is volatile
  • the write to x was guarded by the same lock as the read from x
  • there are a few other cases as well (for example, thread start and join).
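A minimal sketch of the first two cases (the class and method names are invented for illustration): a reader calling the matching getter is guaranteed to see the writer's value, because either the volatile access or the shared lock creates the happens-before edge.

    // Sketch: two ways to establish happens-before between a write in one
    // thread and a read in another, per the list above.
    public class HappensBeforeSketch {
        private volatile int x;            // case 1: volatile field
        private int y;                     // case 2: guarded by `lock`
        private final Object lock = new Object();

        public void writeVolatile(int value) { x = value; }  // volatile write
        public int readVolatile()            { return x; }   // volatile read

        public void writeGuarded(int value) {
            synchronized (lock) { y = value; }  // write under the lock
        }
        public int readGuarded() {
            synchronized (lock) { return y; }   // read under the same lock
        }
    }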

So I think that the second option is true; how JVMs implement it, I don't know.

jb
+1 - that tech-talk was quite informative.
Software Monkey