First, I must say that I have only recently got into functional programming. I am by no means an expert and am grateful for any feedback, in case I've got something wrong. I am hoping that the following is helpful in some way.
Features of functional programming that are an advantage in concurrent programming:
Flexible execution order: The execution environment may be better placed than the programmer to determine the order in which expressions ought to be evaluated, so that they can be distributed over several threads or CPUs.
However, note that even in functional programming languages, the execution order cannot be shuffled randomly. Take for example this code:
    let c = someFunction(a, b)
    let d = someFunction(c)
The second expression depends on the result of the first expression (c). Therefore the first expression will have to be evaluated first.
What hinders re-arranging the execution order most are functions that have side effects, i.e. functions that do not simply return a value based on their arguments, but also modify program state in some way (via global or static variables, through file or network I/O, etc.). If a function can mutate a program's state, and if functions depend not only on (i.e. read) their own arguments but also on the program's state, execution order suddenly becomes very important: a function may execute in a different environment or context every time it is called, and it may therefore also return a different value every time. With such functions, it is the programmer's responsibility to manage program state by prescribing the exact order in which the state-modifying functions are called.
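To make this concrete, here is a small sketch in Swift (the function and variable names are made up purely for illustration): a function with a side effect, whose result depends on hidden mutable state rather than only on its arguments, so calls to it cannot be reordered freely.

    // Global, mutable program state.
    var requestCount = 0

    // Impure: mutates the global counter on every call, and its return value
    // depends on how often it has been called before, not just on `prefix`.
    func nextRequestId(prefix: String) -> String {
        requestCount += 1
        return "\(prefix)-\(requestCount)"
    }

    let first = nextRequestId(prefix: "job")    // "job-1"
    let second = nextRequestId(prefix: "job")   // "job-2"; swap the two calls and the values swap too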
In functional programming, immutability of data is therefore another very important and closely related issue. Ideally, there is no mutable program state at all. If all values are immutable, i.e. if there are no variables, you'll have a hard time changing a program's state! This, together with side-effect-free functions, more or less guarantees that a function call always evaluates to the same value given the same input, and therefore it doesn't matter when the function is called (making execution order less important). It doesn't even matter which program thread calls the function, which helps a lot in concurrent programming.
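As a minimal counter-sketch of that style, again in Swift with made-up names: an immutable value type and a pure function on it. Because the function reads nothing but its arguments, every call with the same input yields the same result, no matter when or on which thread it happens.

    // An immutable value: its fields cannot change after creation.
    struct Point {
        let x: Double
        let y: Double
    }

    // Pure: depends only on its arguments and touches no outside state,
    // so the same input always produces the same output.
    func distance(_ p: Point, _ q: Point) -> Double {
        let dx = p.x - q.x
        let dy = p.y - q.y
        return (dx * dx + dy * dy).squareRoot()
    }

    // These calls can be evaluated in any order, on any thread.
    let d1 = distance(Point(x: 0, y: 0), Point(x: 3, y: 4))   // 5.0
    let d2 = distance(Point(x: 1, y: 1), Point(x: 1, y: 1))   // 0.0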
Contrast this with "regular" imperative programs that have state and mutable variables, and probably even singleton objects. A singleton can be viewed as mutable program state in its purest form, and singletons are often said to be very problematic in concurrent programming: because several threads might operate on and mutate the same singleton object, you have to work really hard to make sure that it always behaves as expected and that it cannot get corrupted by simultaneous writes.
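Here is a sketch of why that takes work (the names are again made up for illustration): a Swift singleton holding a mutable counter. Every access has to be funnelled through a serial queue; without that, concurrent increments from several threads could interleave and lose updates.

    import Foundation

    final class Counter {
        static let shared = Counter()     // the singleton instance

        private var value = 0             // shared, mutable state
        private let queue = DispatchQueue(label: "counter.serial")

        // Every read-modify-write goes through the serial queue;
        // otherwise two threads could interleave and lose increments.
        func increment() {
            queue.sync { value += 1 }
        }

        func current() -> Int {
            queue.sync { value }
        }
    }

    // Many threads mutating the same shared object:
    DispatchQueue.concurrentPerform(iterations: 1_000) { _ in
        Counter.shared.increment()
    }
    print(Counter.shared.current())       // 1000, but only thanks to the serialisation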
How a compiler assigns single expressions to different threads
... or even CPUs, I can only guess:
The compiler may attempt to figure out, by means of some code analysis, how functions depend on one another (not an easy task, since functions can be passed around as values);
or the programmer is allowed to give hints to the execution environment, either through specialized language constructs (async blocks?) or through library calls;
or it is known that certain operations on certain data structures (such as applying a map/projection function to an array) can almost always be performed in parallel.
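To illustrate that last point, here is a sketch of such a data-parallel map in Swift. parallelMap is not a standard-library function; it is written here only to show the idea, using Dispatch's concurrentPerform to run the independent iterations concurrently.

    import Foundation

    // Applies `transform` to every element, running the iterations in parallel.
    // This is safe because each iteration writes only to its own result slot.
    func parallelMap<T, U>(_ input: [T], _ transform: (T) -> U) -> [U] {
        guard !input.isEmpty else { return [] }
        var results = [U?](repeating: nil, count: input.count)
        results.withUnsafeMutableBufferPointer { buffer in
            let base = buffer.baseAddress!   // non-nil because input is not empty
            // Each iteration reads input[i] and writes only its own slot,
            // so no two iterations ever touch the same memory.
            DispatchQueue.concurrentPerform(iterations: input.count) { i in
                base[i] = transform(input[i])
            }
        }
        return results.map { $0! }
    }

    let squares = parallelMap(Array(1...8)) { $0 * $0 }
    print(squares)   // [1, 4, 9, 16, 25, 36, 49, 64]

This only works because the projection closure is applied to each element independently; as soon as it had side effects or read shared mutable state, the iterations could no longer be run in parallel safely.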