Broadly, there are two basic mechanisms.
The simplest is shared memory. Both processes have access to some region of memory that each can read and write, so that writes made by one process are visible to reads made by the other.
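A minimal sketch of this, assuming Python's standard `multiprocessing` module (the same idea applies to POSIX shared memory in C): one process writes a value into shared memory, and the other reads it back.

```python
from multiprocessing import Process, Value
import time

def writer(counter):
    # Write into the shared memory region.
    counter.value = 42

def reader(counter):
    time.sleep(0.1)  # crude ordering for the sketch; real code needs synchronization
    # The write made by the other process is visible here.
    print("reader sees:", counter.value)

if __name__ == "__main__":
    counter = Value("i", 0)  # an int living in memory shared between processes
    p1 = Process(target=writer, args=(counter,))
    p2 = Process(target=reader, args=(counter,))
    p1.start(); p2.start()
    p1.join(); p2.join()
```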
The other mechanism is channels, which act like a pipe between the two processes. One process puts some data into the pipe, and the other process pulls it out. This mechanism is destructive: once consumed, the data is gone from the pipe, so the receiving process had better do something with it.
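The same exchange over a channel, again sketched with Python's `multiprocessing` pipes: the sender puts a message in one end, the receiver pulls it out the other, and once received the message is no longer in the pipe.

```python
from multiprocessing import Process, Pipe

def sender(conn):
    conn.send("hello")   # put data into the pipe
    conn.close()

def receiver(conn):
    msg = conn.recv()    # pull it out; a second recv() would block, the data is gone
    print("received:", msg)

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p1 = Process(target=sender, args=(child_end,))
    p2 = Process(target=receiver, args=(parent_end,))
    p1.start(); p2.start()
    p1.join(); p2.join()
```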
Although the first case sounds simpler, in practice it is fraught with peril. If both processes try to write at the same time, who knows what will happen. To avoid that, a third IPC mechanism is used: locks, which let one process signal to the other when it is safe to touch the shared state.
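A sketch of a lock guarding shared state, using the same assumed Python primitives: each process acquires the lock before touching the shared counter, so their read-modify-write cycles cannot interleave.

```python
from multiprocessing import Process, Value, Lock

def increment(counter, lock, n):
    for _ in range(n):
        with lock:              # only one process inside this block at a time
            counter.value += 1  # the read-modify-write is now safe

if __name__ == "__main__":
    counter = Value("i", 0, lock=False)  # raw shared int; we manage locking ourselves
    lock = Lock()
    procs = [Process(target=increment, args=(counter, lock, 10_000)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # 20000 with the lock; often less without it
```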
From a theoretical point of view, these mechanisms are all equivalent, and most operating systems provide all of them.
But concurrent processes do not have to communicate. In the "Shared Nothing" model, a single master task prepares a number of worker tasks. The workers perform their calculations without any additional input, and when all of them are done, the master task combines their output into a result. This is attractive because IPC comes with a performance cost (synchronization), and shared nothing sidesteps synchronization entirely.
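A sketch of the shared-nothing pattern, again assuming Python's `multiprocessing`: the master hands each worker its own chunk of input, the workers compute independently with no shared state, and the master combines the partial results only after every worker has finished.

```python
from multiprocessing import Pool

def work(chunk):
    # Each worker gets its own copy of the input and shares nothing with the others.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    chunks = [range(i * 1000, (i + 1) * 1000) for i in range(8)]
    with Pool() as pool:
        partials = pool.map(work, chunks)  # input at the start, results at the end,
                                           # no synchronization during the work itself
    print("result:", sum(partials))        # master combines once all workers are done
```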