Message Passing Interface (MPI) (http://www-unix.mcs.anl.gov/mpi/) is a highly scalable and robust library for parallel programming, geared originally towards C but now available in several flavors (http://en.wikipedia.org/wiki/Message_Passing_Interface#Implementations). Rather than introducing new syntax, the library provides a communication protocol for orchestrating the sharing of data between parallelizable routines.
Traditionally, it has been used on large compute clusters rather than for concurrency on a single system, although multi-core machines can certainly take advantage of it as well.
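To give a flavor of the message-passing model, here is a minimal sketch of point-to-point communication between two processes. It assumes an MPI implementation such as MPICH or Open MPI is installed; you would typically compile it with mpicc and launch it with something like mpirun -np 2.

/* Minimal MPI sketch: process 0 sends an integer to process 1. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value;

    MPI_Init(&argc, &argv);               /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* which process is this?       */

    if (rank == 0) {
        value = 42;
        /* send one int to process 1, tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* block until the message from process 0 arrives */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}

Each process runs the same program; all coordination happens explicitly through the send/receive calls, which is what makes the model work equally well across machines in a cluster.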
Another interesting approach to parallel programming is OpenMP, a portable extension available on various platforms that lets you give the compiler hints about which sections of code are easily parallelizable.
For example (http://en.wikipedia.org/wiki/OpenMP#Work-sharing_constructs):
#define N 100000

int main(int argc, char *argv[])
{
    int i, a[N];

    /* the pragma asks the compiler to split the loop
       iterations across the available threads */
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        a[i] = 2 * i;

    return 0;
}
There are advantages and disadvantages to both, of course, but MPI has proven extremely successful in academia and other heavy scientific-computing applications. YMMV.