Has everyone migrated to MPI (Message Passing Interface), or is PVM still widely used in supercomputers and HPC?

+3  A: 

It may be highly site-dependent, but in my experience MPI completely dominates PVM in the (academic, at least) HPC space. You can't realistically launch a new HPC interconnect without MPI support, but PVM support seems to be decidedly optional. Is there a PVM implementation for InfiniBand, for instance?
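For a sense of what that baseline MPI support looks like from the application side, here is a minimal MPI "hello world" in C; nothing here is specific to any particular interconnect or implementation:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
        printf("hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                         /* shut the runtime down */
        return 0;
    }

With most implementations you would build this with mpicc hello.c -o hello and launch it with something like mpirun -np 4 ./hello; the interconnect is selected underneath, which is exactly why vendors ship MPI support first.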

Per Ekman
Yes. For example, I use PVM over InfiniBand on both an IBM AIX machine and a Linux cluster running Red Hat. Having said that, scripting PVM to start properly always proves to be a "fun" exercise (see the sketch below)!
Pete
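The startup dance mentioned in the comment above usually comes down to writing a hostfile and driving the pvm console. A rough sketch (host names and paths are placeholders, not anything specific to the machines discussed here):

    # hostfile: one host per line; dx= points at the pvmd daemon,
    # ep= at the directory holding your PVM executables
    node01  dx=/usr/local/pvm3/lib/pvmd3  ep=$HOME/pvm3/bin/$PVM_ARCH
    node02  dx=/usr/local/pvm3/lib/pvmd3  ep=$HOME/pvm3/bin/$PVM_ARCH

    $ pvm hostfile   # starts the master pvmd and drops into the console
    pvm> conf        # list the hosts in the virtual machine
    pvm> halt        # shut the whole virtual machine down

Getting the daemons up cleanly on every node, and torn down again when a job dies, is the part that tends to fight back, especially inside batch systems.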
+4  A: 

My experience is that PVM is not widely used in high-performance computing. MPI seems to be widely used, and something like Co-Array Fortran might be the path forward for the massively parallel systems of the future.

I use a library called InterComm to couple physics models together as separate executables. InterComm currently uses PVM for the communication between these coupled models. Both PVM and InterComm advertise that they work in homogeneous and heterogeneous network environments (I've been told MPI does not support heterogeneous compute/network environments). However, this is a feature we've never used (and I highly doubt we ever will).
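To make the PVM side of that concrete, here is a stripped-down master in C that spawns one copy of a hypothetical "worker" executable and exchanges an integer with it; the worker name, tags, and payload are illustrative, and this is a minimal sketch rather than how InterComm actually drives PVM. The PvmDataDefault encoding (XDR) is the mechanism behind the heterogeneity claim: it lets the same message cross machines with different byte orders and word sizes.

    #include <stdio.h>
    #include <pvm3.h>

    int main(void) {
        int tid;                 /* task id of the spawned worker */
        int data = 42, reply;

        pvm_mytid();             /* enroll this process in the virtual machine */

        /* Spawn one instance of "worker" (a placeholder name) anywhere
           in the virtual machine. */
        if (pvm_spawn("worker", NULL, PvmTaskDefault, "", 1, &tid) != 1) {
            fprintf(stderr, "pvm_spawn failed\n");
            pvm_exit();
            return 1;
        }

        /* PvmDataDefault selects XDR encoding, so this message is
           portable across heterogeneous hosts. */
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&data, 1, 1);
        pvm_send(tid, 1);        /* message tag 1 */

        pvm_recv(tid, 2);        /* block for the reply, tag 2 */
        pvm_upkint(&reply, 1, 1);
        printf("worker replied: %d\n", reply);

        pvm_exit();              /* leave the virtual machine */
        return 0;
    }

For this to run, the worker binary has to sit where the pvmd can find it (by default $HOME/pvm3/bin/$PVM_ARCH) and the virtual machine has to already be up, which is exactly the startup scripting complained about above.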

I have had a difficult time running PVM in academic computing environments. Some sysadmin/support people at reputable national computing centers have even suggested that we "simply" re-code our 20-year-old, O(10^4)-line code to use MPI, because of issues we ran into while porting the code to a particular supercomputer whose router/queueing environment didn't like launching multiple parallel executables alongside PVM.

If you're at the architecture/design stage of a project, I'd recommend staying away from PVM unless you need to work in heterogeneous compute/network environments!

Pete