If I had a single server with two process types, A (many processes, each with many threads) and B (one process with n threads, where n is the number of CPUs), and I wanted to send a LARGE number of one-way messages from A to B, is MPI a better choice than a custom implementation using:
- Unix Domain Sockets
- Windows Named Pipes
- Shared Memory
I was thinking of writing my own library based on options 1 and 2, and I am also wondering whether option 3 would be better, given that shared memory would require locking.
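To make the idea concrete, here is a rough sketch of what I mean by option 1, a thread in A streaming one-way messages to B over a Unix domain socket (the socket path and the length-prefix framing are just placeholders, not a final design):

```c
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Connect to B's listening Unix domain socket (path is an assumption). */
static int connect_to_b(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* One-way send: 4-byte length header followed by the payload.
 * B just reads and dispatches; it never replies. */
static int send_message(int fd, const void *payload, uint32_t len)
{
    if (write(fd, &len, sizeof(len)) != (ssize_t)sizeof(len))
        return -1;
    if (write(fd, payload, len) != (ssize_t)len)
        return -1;
    return 0;
}
```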
Process A provides external services, so B's resource usage and the message passing in general need to consume as few resources as possible, and A could send messages in either a blocking or a non-blocking fashion. The resource usage of B and of the message passing needs to scale linearly with A's load.
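For the blocking vs. non-blocking point, this is roughly what I picture with MPI (the destination rank and tag are placeholders):

```c
#include <mpi.h>

/* Blocking send: returns once buf is safe to reuse. */
void send_blocking(const char *buf, int len)
{
    MPI_Send(buf, len, MPI_CHAR, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
}

/* Non-blocking send: returns immediately; the caller must keep buf
 * alive until MPI_Wait/MPI_Test reports the request as complete. */
void send_nonblocking(const char *buf, int len, MPI_Request *req)
{
    MPI_Isend(buf, len, MPI_CHAR, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD, req);
}
```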
I will eventually need broadcasting capability between machines as well, probably for process B.
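For that broadcasting part, my understanding is that MPI would reduce it to a single collective call, something like the sketch below (the root rank and buffer are assumptions):

```c
#include <mpi.h>

/* Every rank in the communicator calls this; after it returns,
 * buf on all ranks holds the data from root_rank. */
void broadcast_from_b(void *buf, int len, int root_rank)
{
    MPI_Bcast(buf, len, MPI_BYTE, root_rank, MPI_COMM_WORLD);
}
```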
My parting question is: is MPI (Open MPI in particular) a good library for this, and does it use the optimal kernel primitives on the various operating systems?