I was wondering if some of you who are experienced in concurrency programming could help me interpret a statement/philosophy properly.
I have a copy of Bruce Eckel's grand tome Thinking in Java (4th ed.), which has fairly good coverage of a number of areas of Java that are difficult for beginners to get into. I really enjoyed the chapters on classes, generics, and annotations -- they cleared up a number of questions in my mind.
But then I made it most of the way through Eckel's 193-page chapter on concurrency (after reading the excellent Java Concurrency in Practice) and came to this bit (p. 1278):
Fast forward to the sixth printing of the book, and most new machines have at least two cores on them, as did the machine I was using. And I was surprised when it [one of the programs he wrote for this chapter --J.S.] broke, but that's one of the problems. It's not Java's fault; "write once, run everywhere" cannot possibly extend to threading on single vs. multicore machines. It's a fundamental problem with threading. You can actually discover some threading problems on a single-CPU machine, but there are other problems that will not appear until you try it on a multi-CPU machine, where your threads are actually running in parallel.
And most important: you can never let yourself become too confident about your programming abilities when it comes to shared-memory concurrency. I would not be surprised if, sometime in the future, someone comes up with a proof to show that shared-memory concurrency programming is only possible in theory, but not in practice. It's the position I've adopted.
WTF? Am I missing something? Does Java (and other languages/OS calls, for that matter) not offer tools that provide sufficient guarantees of concurrency-correctness to run real-world applications? This is not just a rhetorical question; I'm wondering whether there is good documentation on how to know you're handling something "correctly", or whether there is "fine print" that causes a concurrency guarantee to be broken.
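To make the kind of breakage concrete, here is a minimal sketch of my own (not Eckel's code) of the classic missing-`volatile` visibility bug, which is my best guess at the sort of thing that can appear to work on one machine and then hang, or print a stale value, once threads really run in parallel:

```java
// My own sketch: without volatile or synchronization there is no
// happens-before edge between the main thread's writes and the reader's
// reads, so the reader may spin forever or legally print 0 instead of 42.
public class VisibilityDemo {
    private static boolean ready;   // deliberately NOT volatile
    private static int number;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) {        // may never observe ready == true
                Thread.yield();
            }
            System.out.println(number);   // may print 0 due to reordering
        });
        reader.start();

        number = 42;
        ready = true;               // no visibility guarantee for the reader
        reader.join();
    }
}
```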
A concrete case: if I use the `synchronized` keyword on a class's method, does that really guarantee that only one thread at a time can be executing code within that method on a particular object? Or is there a gotcha when I move to a multicore CPU?
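This is the guarantee I thought I was getting -- a test sketch of my own (not from either book), on the assumption that two threads calling a `synchronized` method on the same object contend for that object's monitor, so the total below should always come out to 2000000 regardless of how many cores are involved:

```java
// My assumption: both threads lock the same Counter instance's monitor,
// so the increments never interleave and the printed total is 2000000.
public class Counter {
    private long count;

    public synchronized void increment() {   // locks 'this' while running
        count++;
    }

    public synchronized long get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable task = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                c.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(c.get());          // I expect 2000000 every time
    }
}
```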
I've read (but not fully understood) a number of technical papers on semaphores, concurrent linked lists, etc. and in most cases it looks like they have taken great pains to rigorously prove correctness of concurrency primitives... but now I'm not sure how to deal with this stuff. (My fallback position is to ignore this chapter of TIJ and reread Java Concurrency in Practice enough times that it makes sense to me again.)
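In case it helps clarify what I mean by "primitives": in practice I'd reach for java.util.concurrent rather than hand-roll anything, along the lines of this rough sketch of my own (a Semaphore bounding a producer that feeds a ConcurrentLinkedQueue, which I believe is based on Michael & Scott's non-blocking queue):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

// Rough sketch: a producer bounded by a Semaphore feeding a lock-free queue.
// The point is that the tricky correctness arguments live inside the library
// classes, not in my code.
public class PrimitivesSketch {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
        Semaphore permits = new Semaphore(10);    // at most 10 unconsumed items

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 100; i++) {
                try {
                    permits.acquire();            // wait if 10 items are pending
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                queue.add(i);
            }
        });

        Thread consumer = new Thread(() -> {
            int seen = 0;
            while (seen < 100) {
                Integer item = queue.poll();      // non-blocking; null if empty
                if (item != null) {
                    seen++;
                    permits.release();            // free a slot for the producer
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        System.out.println("consumed 100 items");
    }
}
```

But after reading that passage, I'm no longer sure how far even that level of reliance on the library can be trusted.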