Hi all,
I'd like to deepen my knowledge of parallel and concurrent programming. Can somebody point me to some good online learning resources?
Thanks,
If you're using a POSIX-based system (Linux, FreeBSD, Mac OS X, etc.), you'll want to check out pthreads (link to tutorial). Pthreads have been around for a long time and are the de facto standard for concurrent programming on POSIX platforms.
There is a newcomer, though, known as Grand Central Dispatch (link to tutorial). The technology was developed by Apple (for Snow Leopard) in an attempt to solve some of the tedious problems associated with pthreads and multithreaded programming in general. Concretely:
Blocks (anonymous functions) are introduced to the C language (and, by extension, C++ and Objective-C). This lets you avoid context structs entirely. For example, you might write something like this using pthreads:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { int val1; int val2; } context;

void *myFunct(void *arg) {
    context *c = arg;
    printf("Contrived example %d %d\n", c->val1, c->val2);
    free(c);
    return NULL;
}

int main(void) {
    context *c = malloc(sizeof(context));
    c->val1 = 5; /* firstval */
    c->val2 = 2; /* secondval */
    pthread_t thread;
    pthread_create(&thread, NULL, myFunct, c);
    pthread_join(thread, NULL);
    return 0;
}
That involved a lot of work: we had to create the context, set up the values, and make sure our function received and unpacked the context correctly. Not so with GCD. We can instead write the following:
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    int firstval = 5;
    int secondval = 2;
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(queue, ^{
        printf("Contrived example %d %d\n", firstval, secondval);
    });
    dispatch_main(); /* keep the main thread alive so the async block gets a chance to run */
}
Notice how much simpler that is! No context struct, not even a separate function.
GCD lets the kernel manage the thread count. Each thread on your system consumes kernel resources. On a laptop, excess threads translate into reduced battery life; on any computer, they translate into reduced performance. What does "excess" mean? Spawning hundreds of threads on a two-core machine. With pthreads, you have to manage the thread count explicitly and make sure you aren't overloading the system, which is very hard to do. With GCD, you simply tell the kernel "execute this block of work when you have the chance," and the kernel decides when it has enough free resources to run it. You don't have to worry about any of this.
In addition to providing great basic multithreading support, GCD also lets your program interact with "sources" via blocks. You can hand GCD a file descriptor and say "run this block of code when there's new data to read." The kernel then lets your program sit idle until enough data arrives, and enqueues your block automatically!
And I've only scratched the surface of what GCD can do. It's a truly amazing technology, and I highly recommend you check out the docs. It's currently available on Mac OS X and FreeBSD, and it's open source, so if you want it to run on Linux, you can port it :).
If you're looking for raw power for data-parallel applications, Apple developed another great technology (also for Snow Leopard) called OpenCL, which lets you harness the power of the GPU in a simple C-like language (it's almost exactly C, with a few caveats). I haven't had much experience with it, but from everything I've heard it's easy to use and very powerful. OpenCL is an open standard, with implementations on Mac OS X and Windows.
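To give a flavor of how C-like it is, here's a sketch of an OpenCL kernel that adds two arrays element-wise (the kernel name `vec_add` is just for illustration, and this fragment would need a separate host program to set up buffers and launch it):

```c
/* OpenCL kernel: each work-item adds one pair of elements. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
    int i = get_global_id(0); /* index of this work-item in the global range */
    out[i] = a[i] + b[i];
}
```

The host launches one work-item per element, and the GPU runs as many of them in parallel as it can.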
So, to sum up: pthreads for all POSIX-based systems (ugly, but the de facto standard), GCD for Mac OS X and FreeBSD, and OpenCL for data-parallel applications where you need all the power you can get!