Hi all

I have a question: what is the difference between concurrent programming and parallel programming? I searched Google but didn't find anything that helped me understand the difference. Could you give me an example of each?

For now I have found this explanation: http://www.linux-mag.com/id/7411 - but "concurrency is a property of the program" vs "parallel execution is a property of the machine" isn't enough for me; I still can't tell which is which.

Thanks in advance for any answer.

+2  A: 

I believe concurrent programming refers to multithreaded programming, which is about letting your program run multiple threads, abstracted from hardware details.

Parallel programming refers to specifically designing your program's algorithms to take advantage of available parallel execution. For example, you can execute two branches of some algorithm in parallel in the expectation that it will hit the result sooner (on average) than if you checked the first branch and then the second.
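
A minimal Python sketch of that idea, with two made-up search strategies standing in for the two branches; whichever finishes first supplies the answer:

    from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

    def search_from_front(data, target):
        # made-up branch 1: scan left to right
        for i, x in enumerate(data):
            if x == target:
                return i
        return -1

    def search_from_back(data, target):
        # made-up branch 2: scan right to left
        for i in range(len(data) - 1, -1, -1):
            if data[i] == target:
                return i
        return -1

    data, target = list(range(1000)), 990

    # Threads keep the sketch short; for pure-Python CPU work, CPython's GIL
    # only interleaves them, so true parallel speculation would need a
    # ProcessPoolExecutor instead.
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(f, data, target)
                   for f in (search_from_front, search_from_back)]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        print(next(iter(done)).result())  # result of whichever branch won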

Developer Art
To put it another way, executing two things in parallel can get them done twice as fast. Executing two things concurrently might still take the same amount of time as doing one and then the other, if there is just one CPU time-slicing back and forth between a bit of the first and a bit of the second.
A: 

My initial reaction is that concurrent refers to lots of users or lots of processes running at the same time, while parallel refers to a single process split amongst many cores/threads.

Paul Creasey
+1  A: 

They're two phrases that describe the same thing from (very slightly) different viewpoints. Parallel programming is describing the situation from the viewpoint of the hardware -- there are at least two processors (possibly within a single physical package) working on a problem in parallel. Concurrent programming is describing things more from the viewpoint of the software -- two or more actions may happen at exactly the same time (concurrently).

Jerry Coffin
+13  A: 

If you program using threads (concurrent programming), it's not necessarily going to be executed as such (parallel execution), since it depends on whether the machine can actually run several threads at once.

Here's a visual example. Threads on a non-threaded machine:

       - - -
     /       \
>---- - - - - ----->>

Threads on a threaded machine:

      ---
     /   \
>-------------->>

The dots represent executed code. As you can see, they both split up and execute separately, but the threaded machine can execute several separate pieces at once.
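
Here is a rough Python sketch of the same picture, with illustrative timings that depend on the machine: the same pair of CPU-bound tasks is timed once with threads, which CPython's GIL interleaves on one core like the first diagram, and once with processes, which a multicore machine can run like the second:

    import time
    from threading import Thread
    from multiprocessing import Process

    def burn_cpu():
        # purely CPU-bound work, so CPython threads cannot overlap it
        sum(i * i for i in range(5_000_000))

    def timed(worker_cls):
        # start two workers of the given kind and time them to completion
        workers = [worker_cls(target=burn_cpu) for _ in range(2)]
        start = time.perf_counter()
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        return time.perf_counter() - start

    if __name__ == "__main__":
        print("threads:   %.2fs (concurrent, ~sequential time on CPython)" % timed(Thread))
        print("processes: %.2fs (parallel on a multicore machine)" % timed(Process))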

Tor Valamo
Awesome diagram.
Aequitarum Custos
ASCII art is your friend!-)
Tor Valamo
A: 

I understood the difference to be:

1) Concurrent - running in tandem using shared resources
2) Parallel - running side by side using different resources

So you can have two things happening at the same time independent of each other, even if they come together at points (2), or two things drawing on the same reserves throughout the operations being executed (1).
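
A small Python sketch of the two cases, with illustrative names: two threads bump one shared counter through a lock (1), while two others each fill a private slot and only come together at the end (2):

    from threading import Thread, Lock

    shared = 0          # case (1): one shared resource
    lock = Lock()
    slots = [0, 0]      # case (2): a separate resource per thread

    def bump_shared(n):
        global shared
        for _ in range(n):
            with lock:  # coordination needed: same reserves
                shared += 1

    def bump_private(slot, n):
        total = 0       # no coordination: its own reserve
        for _ in range(n):
            total += 1
        slots[slot] = total  # results meet only at the end

    threads = [Thread(target=bump_shared, args=(50_000,)) for _ in range(2)]
    threads += [Thread(target=bump_private, args=(i, 50_000)) for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(shared, sum(slots))  # 100000 100000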

Jonathan
A: 

First off, I am searching for an answer myself, so the view I express below is more a question than an answer.

Could it be that in concurrent programming, the hardware details are abstracted away, but in parallel programming they are NOT? Parallel programs are explicitly designed keeping the overheads of executing concurrent parts of a program in mind, say the communication overhead enforced by the hardware, or the library API being used, or some such.

Ashish
A: 

Concurrent programming regards operations that appear to overlap and is primarily concerned with the complexity that arises due to non-deterministic control flow. The quantitative costs associated with concurrent programs are typically both throughput and latency. Concurrent programs are often IO bound but not always, e.g. concurrent garbage collectors are entirely on-CPU.

The pedagogical example of a concurrent program is a web crawler. This program initiates requests for web pages and accepts the responses concurrently as the results of the downloads become available, accumulating a set of pages that have already been visited. Control flow is non-deterministic because the responses are not necessarily received in the same order each time the program is run. This characteristic can make it very hard to debug concurrent programs.

Some applications are fundamentally concurrent, e.g. web servers must handle client connections concurrently. Erlang is perhaps the most promising upcoming language for highly concurrent programming.
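
A toy Python version of that crawler, assuming placeholder URLs and a throwaway fetch helper; the point is that responses are handled in whatever order they happen to arrive:

    from concurrent.futures import ThreadPoolExecutor, as_completed
    from urllib.request import urlopen

    urls = ["http://example.com/", "http://example.org/", "http://example.net/"]
    visited = set()

    def fetch(url):
        # throwaway helper: download one page
        with urlopen(url, timeout=10) as resp:
            return url, resp.read()

    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(fetch, u) for u in urls]
        for future in as_completed(futures):  # completion order varies run to run
            url, body = future.result()
            visited.add(url)                  # accumulate pages already seen
            print(url, len(body))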

Parallel programming concerns operations that are overlapped for the specific goal of improving throughput. The difficulties of concurrent programming are evaded by making control flow deterministic. Typically, programs spawn sets of child tasks that run in parallel and the parent task only continues once every subtask has finished. This makes parallel programs much easier to debug. The hard part of parallel programming is performance optimization with respect to issues such as granularity and communication. The latter is still an issue in the context of multicores because there is a considerable cost associated with transferring data from one cache to another.

Dense matrix-matrix multiply is a pedagogical example of parallel programming, and it can be solved efficiently by using Strassen's divide-and-conquer algorithm and attacking the sub-problems in parallel. Cilk is perhaps the most promising language for high-performance parallel programming on shared-memory computers (including multicores).
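
A minimal Python sketch of that spawn-and-wait pattern, with an illustrative subtask and chunking: the parent forks CPU-bound subtasks across processes and only continues once every one has finished, so control flow stays deterministic:

    from concurrent.futures import ProcessPoolExecutor

    def subtask(chunk):
        # independent CPU-bound work on one chunk
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        # split the problem into four child tasks
        chunks = [range(i, i + 1_000_000) for i in range(0, 4_000_000, 1_000_000)]
        with ProcessPoolExecutor() as pool:
            # the "join": map blocks until every subtask is done
            partials = list(pool.map(subtask, chunks))
        print(sum(partials))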

Jon Harrop