views: 2416
answers: 15

The CPU architecture landscape has changed; multiple cores are a trend that will change how we have to develop software. I've done multi-threaded development in C, C++ and Java, and multi-process development using various IPC mechanisms. Traditional approaches to using threads don't seem to make it easy for the developer to utilize hardware that supports a high degree of concurrency.

What languages, libraries and development techniques are you aware of that help alleviate the traditional challenges of creating concurrent applications? I'm obviously thinking of issues like deadlocks and race conditions, but I'm also interested in design techniques, libraries, tools, etc. that help actually take advantage of the available resources and ensure they are being utilized - just writing a safe, robust threaded application doesn't ensure that it's using all the available cores.

What I've seen so far is:

  • Erlang: process-based, message-passing IPC, the actor model of concurrency
  • Dramatis: an actor model library for Ruby and Python
  • Scala: a functional programming language for the JVM with some added concurrency support
  • Clojure: a functional programming language for the JVM with concurrency support built around software transactional memory and agents
  • Termite: a port of Erlang's process approach and message passing to Scheme

What else do you know about, what has worked for you and what do you think is interesting to watch?

+7  A: 

You mentioned Java, but you only mention threads. Have you looked at Java's concurrent library? It comes bundled with Java 5 and above.

It's a very nice library containing ThreadPools and CopyOnWriteCollections, to name just a few. Check out the documentation in the Java Tutorial or, if you prefer, the Java docs.

Steve K
+2  A: 

I know of Reia - a language that is based on Erlang but looks more like Python/Ruby.

Jonathan
+3  A: 

I am keeping a close eye on Parallel Extensions for .NET and Parallel LINQ.

Ben Hoffstein
+5  A: 

I've used the processing module for Python. It mimics the API of the threading module and is thus quite easy to use.

If you happen to use map/imap or a generator/list comprehension, converting your code to use processing is straightforward:

def do_something(x):
    return x**(x*x)

results = [do_something(n) for n in range(10000)]

can be parallelized with

import processing
pool = processing.Pool(processing.cpuCount())
results = pool.map(do_something, range(10000))

which will use however many processors you have to calculate the results. There are also lazy (Pool.imap) and asynchronous variants (Pool.map_async).

There is a Queue class that implements the Queue.Queue interface, and Process workers that are used much like threads.

Gotchas

processing is based on fork(), which has to be emulated on Windows. Objects are transferred via pickle/unpickle, so you have to make sure that works for your data. Forking a process that has already acquired resources might not be what you want (think of database connections), but in general it works. It works so well that it has been added to Python 2.6 on the fast track (cf. PEP 371).

Torsten Marek
Thanks, this is helpful - sounds like ideas could be shared with Dramatis.
Kyle Burton
Would be nice to also mention its descendant - 2.6's `multiprocessing` module.
Constantin
+3  A: 

Intel's Threading Building Blocks for C++ looks very interesting to me. It offers a much higher level of abstraction than raw threads. O'Reilly has a very nice book if you like dead-tree documentation. See also Any experiences with Intel’s Threading Building Blocks?.
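
As a rough illustration of what "a much higher level of abstraction" means here (a minimal sketch assuming a C++11 compiler and a recent TBB, not an excerpt from the book or TBB's documentation): with tbb::parallel_for you describe the work for a sub-range and leave the chunking and scheduling across cores to TBB's task scheduler.

#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include <vector>

int main() {
    std::vector<double> data(1000000, 1.0);

    // TBB splits the index range into chunks and schedules them across
    // the available cores; we only say what to do with each sub-range.
    tbb::parallel_for(
        tbb::blocked_range<size_t>(0, data.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                data[i] *= data[i];
        });
}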

Pat Notz
+1  A: 

This question is closely related to, if not a duplicate of, What parallel programming model do you recommend today to take advantage of the manycore processors of tomorrow?

Daniel Papasian
That's a good thread - I didn't see it when I searched before posting.
Kyle Burton
I think the question you refer to is more theoretical, while this question asks for existing solutions that can be used today rather than tomorrow. Just my 2¢.
Torsten Marek
+4  A: 

I would say:

  • Models: threads + shared state, actors + message passing, transactional memory, map/reduce?
  • Languages: Erlang, Io, Scala, Clojure, Reia
  • Libraries: Retlang, Jetlang, Kilim, Cilk++, fork/join, MPI, Kamaelia, Terracotta

I maintain a concurrency link blog about stuff like this (Erlang, Scala, Java threading, the actor model, etc.) and put up a couple of links a day:

http://concurrency.tumblr.com

Alex Miller
One should note that Kamaelia isn't QUITE there in terms of multi-core support. It's still very experimental to say the least.
Jason Baker
+7  A: 

I'd suggest two paradigm shifts:

Software Transactional Memory

You may want to take a look at the concept of Software Transactional Memory (STM). The idea is to use optimistic concurrency: any operation that runs in parallel with others tries to complete its job in an isolated transaction; if, at some point, another transaction has been committed that invalidates the data this transaction is working on, the transaction's work is thrown away and the transaction is run again.

I think the first widely known implementation of the idea (if not the proof of concept and the very first one) is the one in Haskell: Papers and presentations about transactional memory in Haskell. Many other implementations are listed on Wikipedia's STM article.
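
To make the optimistic approach concrete, here is a deliberately tiny C++ sketch of the retry pattern using a single atomic value and compare-and-swap. It only illustrates "compute in isolation, discard and retry on conflict"; a real STM tracks whole read/write sets across many locations and does this bookkeeping for you.

#include <atomic>

std::atomic<int> account{100};

// "Transaction": take a snapshot, compute the new state in isolation,
// then try to commit. If another thread committed first, the snapshot
// is stale, compare_exchange fails, the work is discarded and we retry.
void deposit(int amount) {
    int snapshot = account.load();
    while (!account.compare_exchange_weak(snapshot, snapshot + amount)) {
        // snapshot now holds the current value; loop and recompute.
    }
}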

Event loops and promises

Another very different way of dealing with concurrency is implemented in the E programming language.

Note that its way of dealing with concurrency, as well as other parts of the language design, is heavily based on the Actor model.

Nowhere man
+4  A: 

The question What parallel programming model do you recommend today to take advantage of the manycore processors of tomorrow? has already been asked. I gave the following answer there too.

Kamaelia is a Python framework for building applications with lots of communicating processes.

Kamaelia - Concurrency made useful, fun

In Kamaelia you build systems from simple components that talk to each other. This speeds development, massively aids maintenance and also means you build naturally concurrent software. It's intended to be accessible by any developer, including novices. It also makes it fun :)

What sort of systems? Network servers, clients, desktop applications, pygame based games, transcode systems and pipelines, digital TV systems, spam eradicators, teaching tools, and a fair amount more :)

Here's a video from PyCon 2009. It starts by comparing Kamaelia to Twisted and Parallel Python and then gives a hands-on demonstration of Kamaelia.

Easy Concurrency with Kamaelia - Part 1 (59:08)
Easy Concurrency with Kamaelia - Part 2 (18:15)

Sam Hasler
+2  A: 

Java has an actors library too you know. And did you know that Java is a functional language? ;)

Apocalisp
+1  A: 

OpenMP.

It handles the threads for you, so you only have to worry about which parts of your C++ application you want to run in parallel.

eg.

#pragma omp parallel for
for (int i = 0; i < SIZE; i++)
{
    // do something with an element
}

The above code will run the for loop on as many threads as you've told the OpenMP runtime to use (e.g. via the OMP_NUM_THREADS environment variable), so if SIZE is 100 and you have a quad-core box, that for loop will run 25 items on each core.

There are a few other parallel extensions for various languages, but the ones I'm most interested in are the ones that run on your graphics card. That's real parallel processing :) (examples: GPU++ and libSh)

gbjbaanb
+1  A: 

C++0x will provide std::lock functions for locking more than one mutex together. This will help alleviate deadlock due to out-of-order locking. Also, the C++0x thread library will have promises, futures and packaged tasks, which allow a thread to wait for the result of an operation performed on another thread without any user-level locks.
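
A short sketch of both facilities, written against the C++11 library that eventually shipped (names may have differed in the C++0x drafts this answer refers to): std::lock acquires several mutexes without the classic lock-ordering deadlock, and a future lets one thread wait for another's result without any user-level lock.

#include <future>
#include <mutex>

std::mutex m1, m2;

void update_both() {
    // std::lock uses a deadlock-avoidance algorithm, so a thread locking
    // (m1, m2) and another locking (m2, m1) cannot deadlock each other.
    std::lock(m1, m2);
    std::lock_guard<std::mutex> g1(m1, std::adopt_lock);
    std::lock_guard<std::mutex> g2(m2, std::adopt_lock);
    // ... touch data protected by both mutexes ...
}

int main() {
    // Wait for a result computed on another thread via a future.
    std::future<int> answer = std::async([] { return 6 * 7; });
    return answer.get() == 42 ? 0 : 1;
}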

Anthony Williams
A: 

I began with the .NET Parallel Extensions. However, it is a CTP and it changes with each new release. Now I am using C# with ThreadPool, BackgroundWorker and Thread instances, recoding a few critical processes in a medium-sized application.

I didn't know where to start, so 7 days ago I bought the e-book version of "C# 2008 and 2005 threaded programming" by Gaston C. Hillar (Packt Publishing, http://www.packtpub.com/beginners-guide-for-C-sharp-2008-and-2005-threaded-programming/book). I bought the e-book from the publisher, but the book is now also available at Amazon.com. Highly recommended for C# programmers. I downloaded the code and began following the exercises. The book is a nice guide with a lot of code to practice on. I've read the first 6 chapters; it tells stories while it explains the most difficult concepts, which makes it nice to read.

I could see my Core 2 Quad Q6700 reach 98% CPU usage programming in C# with 4 concurrent threads! It is easier than I thought, and I am impressed with the results you can achieve using many cores at the same time. I recommend the book to anyone interested in getting started with multicore or threaded programming in C#.

+2  A: 

I've been doing concurrent programming in Ada for nearly 20 years now.

The language itself (not some tacked-on library) supports threading ("tasks"), multiple scheduling models, and multiple synchronization paradigms. You can even build your own synchronization schemes using the built-in primitives.

You can think of Ada's rendezvous as a sort of procedure-oriented synchronization facility, while protected objects are more object-oriented. Rendezvous are similar to the old CS concept of monitors, but much more powerful. Protected objects are special types with synchronization primitives that let you build things exactly like OS locks, semaphores, events, etc. However, the mechanism is powerful enough that you can also invent and create your own kinds of sync objects, depending on your exact needs.
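
For readers coming from C or C++, a very loose analogy (this is not Ada, and it glosses over entry barriers and requeue, which the language handles for you): a protected object used as a counting semaphore behaves roughly like a hand-written monitor built from a mutex and a condition variable.

#include <condition_variable>
#include <mutex>

// Rough C++ analogy to a protected object implementing a counting
// semaphore: state is only touched under the lock, and callers block
// until the "entry barrier" (count_ > 0) holds.
class Semaphore {
public:
    explicit Semaphore(int initial) : count_(initial) {}

    void release() {
        std::lock_guard<std::mutex> lock(m_);
        ++count_;
        cv_.notify_one();
    }

    void acquire() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return count_ > 0; });
        --count_;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    int count_;
};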

T.E.D.