There are at least three well-known approaches to creating concurrent applications:

  1. Multithreading with memory synchronization through locking (.NET, Java).

  2. Software Transactional Memory as an alternative approach to synchronization.

  3. Asynchronous message passing (Erlang); a small sketch of this style follows the list.
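
To make the message-passing style concrete, here is a minimal Java sketch (the class and message names are invented for illustration, and it is only a loose analogue of an Erlang process): a worker thread owns its state, and the only way other threads can influence it is by putting messages on its queue.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A "mailbox" worker: other threads never touch its state directly;
// they only send it messages through the queue.
public class MailboxExample {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(100);

        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String msg = mailbox.take();      // blocks until a message arrives
                    if (msg.equals("stop")) break;    // poison pill terminates the worker
                    System.out.println("processing " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        mailbox.put("job-1");   // senders only enqueue messages
        mailbox.put("job-2");
        mailbox.put("stop");
        worker.join();
    }
}
```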

I would like to learn whether there are other approaches, and to discuss the pros and cons of these approaches as applied to large distributed applications. My main focus is on simplifying the programmer's life.

For example, in my opinion, using multiple threads is easy when there are no dependencies between them, which is pretty rare. In all other cases, thread synchronization code becomes quite cumbersome and hard to debug and reason about.
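
As a tiny Java illustration of that cumbersomeness (the account and transfer names are hypothetical, not from any real code base): even a simple update spanning two objects forces you to reason about lock acquisition order, otherwise two concurrent transfers in opposite directions can deadlock.

```java
import java.util.concurrent.locks.ReentrantLock;

// Why lock-based code is hard to reason about: both locks must always be
// taken in the same global order, or opposite-direction transfers deadlock.
public class TransferExample {
    static class Account {
        final int id;
        final ReentrantLock lock = new ReentrantLock();
        long balance;
        Account(int id, long balance) { this.id = id; this.balance = balance; }
    }

    static void transfer(Account from, Account to, long amount) {
        // Order the locks by a stable key (the account id) to avoid deadlock.
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance += amount;
            } finally {
                second.lock.unlock();
            }
        } finally {
            first.lock.unlock();
        }
    }
}
```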

+5  A: 

Read Herb Sutter's Effective Concurrency column, and you too will be enlightened.

Frédéric Hamidi
Great find. While I appreciate how deeply he digs into thread synchronization, he is mostly focused on threading, locking, and atomic operations, which is just one of many approaches to concurrency. He mentions asynchronous messaging in a couple of articles but, IMHO, doesn't go very far. Also, his articles show how hard it is to get multi-threading right. I am still having nightmares about the various memory models.
Serge
Yeah, Sutter's columns have a definite focus on making traditional lock-based synchronization more manageable rather than exploring *alternative* techniques. Still a good read, though.
jalf
+5  A: 

I'd strongly recommend looking at this presentation by Rich Hickey. It describes an approach to building high-performance, concurrent applications that I would argue is distinct from both lock-based and message-passing designs.

Basically it emphasises:

  • Lock-free, multi-threaded concurrent applications
  • Immutable persistent data structures
  • Changes in state handled by Software Transactional Memory

It also talks about how these principles influenced the design of the Clojure language.
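
There is no STM in the Java standard library, but the immutable-snapshot idea behind Clojure's atoms can be sketched roughly in plain Java with an AtomicReference: the state is an immutable value, and every update swaps in a new value with compare-and-set instead of taking a lock. This is only an analogue (it copies the whole map rather than using a persistent data structure), and the class name is made up for illustration.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Rough Java analogue of a Clojure atom: state is an immutable snapshot held
// in an AtomicReference, and every "change" publishes a new snapshot via CAS.
// No locks are taken; readers always see a consistent (possibly stale) map.
public class AtomExample {
    private final AtomicReference<Map<String, Integer>> state =
            new AtomicReference<>(Collections.emptyMap());

    public void put(String key, int value) {
        while (true) {
            Map<String, Integer> current = state.get();
            Map<String, Integer> updated = new HashMap<>(current); // copy, never mutate in place
            updated.put(key, value);
            // Publish the new snapshot only if nobody changed it in the meantime.
            if (state.compareAndSet(current, Collections.unmodifiableMap(updated))) {
                return;
            }
        }
    }

    public Map<String, Integer> snapshot() {
        return state.get(); // an immutable, consistent view
    }
}
```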

mikera
+1  A: 

With the Java 5 concurrency API, doing concurrent programming in Java doesn't have to be cumbersome and difficult, as long as you take advantage of the high-level utilities and use them correctly. I found the book Java Concurrency in Practice by Brian Goetz to be an excellent read on this subject. At my last job, I used the techniques from this book to make some image processing algorithms scale to multiple CPUs and to pipeline CPU- and disk-bound tasks. I found it to be a great experience, and we got excellent results.
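
As a sketch of the kind of high-level utility meant here (the chunked-work structure is invented for illustration, not the answer's actual image-processing code, and it uses Java 8 lambdas for brevity even though the API dates to Java 5): java.util.concurrent's ExecutorService lets you fan independent chunks of work out over a fixed-size thread pool and collect the results through Futures, with no explicit locking.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Split a job into independent chunks, run them on a fixed thread pool,
// and combine the results on the submitting thread.
public class ChunkedProcessing {
    public static void main(String[] args) throws Exception {
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        List<Callable<Long>> tasks = new ArrayList<>();
        for (int chunk = 0; chunk < 8; chunk++) {
            final int c = chunk;
            tasks.add(() -> processChunk(c)); // each task works on its own data
        }

        long total = 0;
        for (Future<Long> f : pool.invokeAll(tasks)) {
            total += f.get(); // invokeAll blocks until all tasks complete
        }
        System.out.println("total = " + total);
        pool.shutdown();
    }

    // Stand-in for real per-chunk work (e.g. one image tile).
    private static long processChunk(int chunk) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += (i ^ chunk);
        return sum;
    }
}
```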

Or, if you are using C++, you could try OpenMP, which uses #pragma directives to parallelize loops, although I've never used it myself.

balexand
Note that parallel and concurrent are not synonyms. TBB also provides `parallel_for`, `parallel_map`, and `parallel_reduce` but does not help with concurrency. Similarly, there have been languages such as Newsqueak that were concurrency-oriented but didn't provide parallel processing.
Dustin
+1, good book and advice - it's surprisingly rare that you *need* to write fiddly, complex thread-safe code using low-level primitives in Java nowadays. Reviewing and reasoning about code that uses these newer APIs is much easier, too.
SimonJ
Microsoft released the TPL with .NET 4.0, which is a set of higher-level abstractions over threads, similar to Java's. While it does simplify the coding of embarrassingly parallel problems, it doesn't help much with managing shared state and rather gives you a false impression of simplicity.
Serge