views:

113

answers:

6

All of the concurrent programs I've seen or heard details of (admittedly a small set) at some point use hardware synchronization features, generally some form of compare-and-swap. The question is: are there any concurrent programs in the wild whose threads interact throughout their lives yet get away without any synchronization?

Examples of what I'm thinking of include:

A program that amounts to a single thread running a yes/no test on a large set of cases, plus a big pile of threads tagging cases based on a maybe/no test, doesn't need synchronization, because dirty data will only affect performance rather than correctness.
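A minimal sketch of that first case (in Python; the names and the even/odd "tests" are made up stand-ins for the real predicates). Helper threads write skip hints without any locking; a lost or late write only means the main thread does one more expensive test, never that it gets a wrong answer:

```python
import threading

# Shared, unsynchronized hint array: helper threads mark cases that a
# cheap maybe/no test says can be skipped.  Races on these writes are
# benign -- a missing hint just costs an extra expensive test.
cases = list(range(1000))
skip = [False] * len(cases)

def cheap_filter(lo, hi):
    for i in range(lo, hi):
        if cases[i] % 2 == 0:        # stand-in for the maybe/no test
            skip[i] = True

threads = [threading.Thread(target=cheap_filter, args=(i, i + 250))
           for i in range(0, 1000, 250)]
for t in threads:
    t.start()

# Main thread runs the authoritative yes/no test, honoring whichever
# hints happen to be visible.  Correctness never depends on seeing them.
hits = [i for i in cases if not skip[i] and i % 2 == 1]

for t in threads:
    t.join()
```

Note that the result is the same whether the main thread sees all, some, or none of the hints, which is exactly why no synchronization is needed before the final join.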

A program in which many threads update a data structure where any state that is valid now will always be valid, so dirty reads or writes don't invalidate anything. An example of this is (I think) path compression in the union-find algorithm.
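The union-find case can be sketched like this (Python, single-threaded here for clarity). The property that makes unsynchronized concurrent find() calls safe is that every write replaces parent[x] with an ancestor of x, so any interleaving of such writes leaves the structure valid:

```python
# Union-find with path compression.  Each write to parent[x] installs
# an ancestor of x, so a stale read or a lost write from a concurrent
# find() only costs extra hops on a later lookup, never correctness.
parent = list(range(8))

def find(x):
    root = x
    while parent[root] != root:
        root = parent[root]
    # Path compression: point every node on the path straight at the root.
    while parent[x] != root:
        parent[x], x = root, parent[x]
    return root

def union(a, b):
    parent[find(a)] = find(b)

for a, b in [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6)]:
    union(a, b)
```

(The unions themselves are the part that does need care in a truly concurrent setting; it's the compression writes inside find() that tolerate races.)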

+2  A: 

If you can break the work up into completely independent chunks, then yes, there are concurrent algorithms whose only synchronisation point is the one at the end of the work where all threads join. Parallel speedup is then a factor of being able to break the work into tasks whose sizes are as similar as possible.

Mitch Wheat
True, but not what I was thinking of.
BCS
A: 

I agree with Mitch's answer. I would like to add that the ray tracing algorithm can work without synchronization until the point where all threads join.

batbrat
+1  A: 

I think it's a bit of a trick question because, e.g., if you program in C, malloc() must be thread-safe and uses hardware synchronization, and in Java the garbage collector requires hardware synchronization anyway. All Java programs require the GC, and hardly any C program gets by without malloc() (or C++ program without the new operator).

antti.huima
"hardly any C program makes it without malloc". Plenty of programs get all their allocation out of the way upfront, though, before entering the fiddly performance-critical hopefully-parallelizable phase of operation.
Steve Jessop
@Steve: My thought exactly.
BCS
+1  A: 

There is a whole class of algorithms which are sometimes referred to as "embarallel" (contraction of "embarrassingly parallel"). Many image processing algorithms fall into this class, where each pixel may be processed independently (which makes implementation with e.g. SIMD or GPGPU very straightforward).
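A tiny sketch of the per-pixel independence (Python, with a made-up 3x3 grayscale "image"; real implementations would use SIMD/GPU kernels rather than threads). Because each output pixel depends only on its own input pixel, rows can be handed to workers with no synchronization beyond the final join:

```python
from concurrent.futures import ThreadPoolExecutor

# Per-pixel thresholding: each pixel is processed independently, so any
# partition of the image (here, by row) parallelizes with no shared state.
image = [[10, 200, 30],
         [250, 5, 128],
         [90, 90, 90]]

def threshold_row(row):
    return [255 if p >= 128 else 0 for p in row]

with ThreadPoolExecutor() as pool:
    result = list(pool.map(threshold_row, image))
```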

Paul R
See my edit and my comment to Mitch Wheat.
BCS
+1  A: 

Well, without any synchronization at all (even at the end of the algorithm) you obviously can't do anything useful, because you can't even transfer the results of the concurrent computations back to the main thread: it would be as if they ran on remote machines with no communication channel to the main machine.

jkff
Good point. I was assuming shared memory of some kind, just with very relaxed propagation/visibility requirements.
BCS
+2  A: 

Some iterative methods for solving systems of linear equations, like successive over-relaxation ( http://en.wikipedia.org/wiki/Successive_over-relaxation ), don't really need the iterations to be synchronized: updates computed from slightly stale values of the other unknowns still converge.
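A sequential sketch of why stale reads are tolerable here (Python, small made-up diagonally dominant system; a real "chaotic relaxation" version would run the per-unknown updates on separate threads). Each unknown is repeatedly recomputed from whatever values of the others are currently visible, and for a diagonally dominant system the iteration contracts toward the solution regardless of which sweep those values came from:

```python
# Gauss-Seidel-style relaxation on a 2x2 diagonally dominant system:
#   4*x0 + 1*x1 = 6
#   1*x0 + 3*x1 = 7      (exact solution: x0 = 1, x1 = 2)
# Each update only needs *some* recent value of the other unknowns,
# which is why per-iteration synchronization can be relaxed.
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [6.0, 7.0]
x = [0.0, 0.0]

for _ in range(50):
    for i in range(2):
        s = sum(A[i][j] * x[j] for j in range(2) if j != i)
        x[i] = (b[i] - s) / A[i][i]
```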