Marcelo Cantos gave a pretty good explanation, but I think it can be made slightly more precise.
A type of thing is composable when several instances can be combined in a certain way to produce the same type of thing.
Control structure composability. Languages like C make a distinction between expressions, which can be composed using operators to produce new expressions, and statements, which can be composed using control structures like `if`, `for` and the "sequence control structure" that simply performs statements in order. The catch is that these two categories are not on an equal footing -- many control structures make use of expressions (e.g. the expression evaluated by `if` to choose which branch to execute), but expressions cannot make use of control structures (e.g. you can't return a `for` loop).

Although it might seem crazy or meaningless to want to "return a `for` loop", the general idea of treating control structures as first-class objects that can be stored and passed around is not only possible but useful. In lazy functional languages like Haskell, control structures like `if` and `for` can be represented as ordinary functions, which can be manipulated in expressions just like any other term, enabling such things as functions that "build" other functions according to the parameters they are passed and return them to the caller. So while C (for example) divides "the things that a programmer might want to do" into two categories and limits the ways in which objects from these categories can be combined, Haskell (for example) has just one category and imposes no such limits, so in this sense it provides more composability.
Thread composability. I'll assume, as Marcelo Cantos did, that you are really talking about the composability of threads that use locks/mutexes. This is a slightly trickier case because on the face of it we can have threads that use multiple locks; but the important point is that we can't have threads that use multiple locks while keeping the guarantees that locks are intended to provide.
We can define a lock as a type of thing that has certain operations that can be performed on it, which come with certain guarantees. One guarantee is: suppose there is a lock object `x`; then, provided that every process that calls `lock(x)` eventually calls `unlock(x)`, any call to `lock(x)` will eventually return successfully with `x` locked by the current thread/process. This guarantee vastly simplifies reasoning about program behaviour.
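As a sketch of how that guarantee plays out with a single lock, here is a small Haskell example using an `MVar` as the lock (the shared counter and the thread count of 10 are arbitrary choices of mine). Every thread that takes the lock releases it, so every acquisition eventually succeeds and the program terminates with a predictable result:

```haskell
import Control.Concurrent
import Control.Concurrent.MVar

main :: IO ()
main = do
  x    <- newMVar (0 :: Int)   -- the lock, guarding a shared counter
  done <- newEmptyMVar
  let worker = do
        -- lock(x); increment; unlock(x) -- modifyMVar_ does all three
        modifyMVar_ x (return . (+ 1))
        putMVar done ()
  mapM_ (\_ -> forkIO worker) [1 .. 10 :: Int]   -- start 10 threads
  mapM_ (\_ -> takeMVar done) [1 .. 10 :: Int]   -- wait for all of them
  readMVar x >>= print                           -- prints 10
```

Because each worker eventually releases `x`, every blocked acquisition inside `modifyMVar_` eventually succeeds, and we can reason about the final count without inspecting thread interleavings.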
Unfortunately, if there is more than one lock in the world, this guarantee no longer holds. If thread A calls `lock(x); lock(y);` and thread B calls `lock(y); lock(x);` then it's possible that A grabs lock `x` and B grabs lock `y`, and both will wait indefinitely for the other thread to release the other lock: deadlock. So locks are not composable, because when you use more than one, you can no longer simply claim that the important guarantee still holds -- not without analysing the code in detail to see how it manages locks. In other words, you can no longer afford to treat functions as "black boxes".
Things that are composable are good because they enable abstraction: they let us reason about code without having to care about all the details, which reduces the cognitive burden on the programmer.