views: 684
answers: 3
Is parallel programming == multithread programming?

+14  A: 

Not necessarily. You can distribute jobs between multiple processes and even multiple machines - I wouldn't class that as "multi-threaded" programming as each process may only use a single thread, but it's certainly parallel programming. Admittedly you could then argue that with multiple processes there are multiple threads within the system as a whole...

Ultimately, definitions like this are only useful within a context. In your particular case, what difference is it going to make? Or is this just out of interest?

Jon Skeet
Should we also consider SIMD to be parallel programming? We're performing the same operation on multiple data elements in parallel, but I don't know if this is considered too much of a micro-parallelization to be included in a definition of parallel programming.
John
I'd say that SIMD is more a matter of parallel hardware design, but I guess at some level you have to consider the programming side of having dedicated parallel hardware, e.g. what about programming for a GPU?
jk
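The first point in the answer above, parallel work spread across multiple processes that each use only a single thread, can be sketched in Python with a process pool (the `square` task and pool size are illustrative choices, not anything from the discussion):

```python
from multiprocessing import Pool

def square(n):
    # Runs in a separate worker process with its own memory space;
    # no extra threads are created inside any worker.
    return n * n

if __name__ == "__main__":
    # Four single-threaded processes working in parallel.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Each worker is a full operating-system process, so this is parallel programming by most definitions, yet no code here ever starts a second thread within a process.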
+12  A: 

No. Multithreaded programming means that you have a single process, and this process spawns a bunch of threads. All the threads run at the same time, but they all share the same process space: they can access the same memory, have the same open file descriptors, and so on.

Parallel programming is a bit more "general" as a definition. In MPI, you perform parallel programming by running the same program multiple times, with the difference that each process gets a different "identifier", so if you want, you can differentiate the processes, but it is not required. Also, these processes are independent of each other, and they have to communicate via pipes or network/unix sockets. MPI libraries provide specific functions to move data to and from the nodes, in synchronous or asynchronous style.

In contrast, OpenMP achieves parallelization via multithreading and shared-memory. You specify special directives to the compiler, and it automagically performs parallel execution for you.

The advantage of OpenMP is that it is very transparent. Have a loop to parallelize? Just add a couple of directives and the compiler chunks it into pieces, assigning each piece of the loop to a different processor. Unfortunately, you need a shared-memory architecture for this. Clusters with a node-based architecture cannot use OpenMP at the cluster level. MPI lets you work on a node-based architecture, but you pay the price of more complex, less transparent usage.

Stefano Borini
Oh, so it means 1 job is processed by n processes, not 1 job processed by n threads?
Eko Kurniawan Khannedy
I seem to recall that work _is_ being done on OpenMP-style parallelization for multi-process architectures... I can't remember if it's part of OpenMP itself, or something else?
John
@Eko : not exactly. MPI starts n instances of the same program, each one with a different id number in a special variable (look up MPI_Comm_rank). What to do with those n instances is up to you.
Stefano Borini
@Stefanook, thanks :D
Eko Kurniawan Khannedy
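MPI itself is a C/Fortran library, but the model described above, n independent instances of the same code, each given a rank and communicating only by message passing, can be roughly sketched in plain Python. This is an analogy, not real MPI: the `worker`, `rank`, and `size` names are mine, and a `Queue` stands in for the sockets MPI would use.

```python
from multiprocessing import Process, Queue

def worker(rank, size, queue):
    # Each instance is an independent process: separate memory, separate
    # state. rank strides over the data so every element is handled by
    # exactly one process (cf. MPI_Comm_rank giving each instance its id).
    partial = sum(range(rank, 100, size))
    queue.put((rank, partial))

if __name__ == "__main__":
    size = 4
    queue = Queue()
    procs = [Process(target=worker, args=(r, size, queue)) for r in range(size)]
    for p in procs:
        p.start()
    # Results arrive by message passing, not shared memory.
    parts = [queue.get() for _ in range(size)]
    for p in procs:
        p.join()
    print(sum(part for _, part in parts))  # 4950, i.e. sum(range(100))
```

The key contrast with the OpenMP style is that nothing here is shared: each process computes on its own copy of the data and explicitly sends its partial result back.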
+6  A: 

Multithreaded programming is parallel, but parallel programming is not necessarily multithreaded.

Unless the multithreading occurs on a single core, in which case it is only concurrent.

Lucas Lindström
like this answer :D
Eko Kurniawan Khannedy
AFAIK, on a single core processor, threading is not parallel. It is concurrent, but not parallel.
Ionuț G. Stan
@Ionut: http://thesaurus.reference.com/browse/concurrent <- If you would look under the 'Synonyms' header of the first result.
Lucas Lindström
@Lucas: the difference I draw between concurrent and parallel is that parallel is truly simultaneous, while concurrent only looks as if it were simultaneous. The switches between threads are so fast that execution looks parallel, but it isn't. Maybe there are other terms for this, but that's how I understand it.
Ionuț G. Stan
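Whether threads run truly simultaneously depends on the core count, as this exchange notes, but the shared-address-space distinction running through the whole thread can be checked directly. A minimal Python sketch (the `counter` dict and `bump` function are illustrative names):

```python
import threading
from multiprocessing import Process

counter = {"value": 0}

def bump():
    counter["value"] += 1

if __name__ == "__main__":
    # A thread shares our address space, so its write is visible here.
    t = threading.Thread(target=bump)
    t.start()
    t.join()
    print(counter["value"])  # 1

    # A child process works on its own copy of counter; ours is untouched.
    p = Process(target=bump)
    p.start()
    p.join()
    print(counter["value"])  # still 1
```

The same function mutates our state when run as a thread, but not when run as a process, which is exactly why multi-process parallelism needs explicit communication while multithreading does not.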