mpi

MPI: Printf Statement is not executed at the right time

I have a small program. #include "mpi.h" #include <stdio.h> int main(int argc, char *argv[]) { int rank, size; int buf; int err; MPI_Status status; err = MPI_Init(&argc, &argv); if(err == MPI_SUCCESS) { MPI_Comm_size(MPI_COMM_WORLD, &size); MPI_Comm_rank(MPI_COMM_WORLD, &rank); if(rank == 0) { printf("Buffer size is less than 10\n");...

How to compile MPI and non-MPI versions of the same program with automake?

I have a C++ code that can be compiled with MPI support depending on a certain preprocessor flag; without that flag, the sources compile to a non-parallel version. I would like to set up the Makefile.am so that it compiles both the MPI-parallel and the sequential version, if an option to ./configure is given. Here's the catch...
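One common pattern for this (a sketch only; the program name, flag name, and the MPI_CPPFLAGS/MPI_LIBS variables are hypothetical, not from the question) is an --enable-mpi switch plus an automake conditional that adds a second program built from the same sources with per-target flags:

```
# configure.ac (fragment)
AC_ARG_ENABLE([mpi],
  [AS_HELP_STRING([--enable-mpi], [also build the MPI-parallel variant])])
AM_CONDITIONAL([BUILD_MPI], [test "x$enable_mpi" = "xyes"])

# Makefile.am (fragment)
bin_PROGRAMS = myprog
myprog_SOURCES = main.cpp

if BUILD_MPI
bin_PROGRAMS += myprog_mpi
myprog_mpi_SOURCES  = $(myprog_SOURCES)
myprog_mpi_CPPFLAGS = $(AM_CPPFLAGS) -DWITH_MPI $(MPI_CPPFLAGS)
myprog_mpi_LDADD    = $(MPI_LIBS)
endif
```

Automake cannot set a different compiler per target, so the MPI variant is built with the same compiler but with MPI include and link flags; MPI_CPPFLAGS and MPI_LIBS would have to be filled in by configure, e.g. via the AX_MPI macro from the autoconf archive or (Open MPI specific) the output of `mpic++ -showme:compile` and `mpic++ -showme:link`.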

MPI_COMM_WORLD handle loses value in a subroutine

my program is as follows: module x use mpi !x includes mpi module implicit none ... contains subroutine do_something_with_mpicommworld !use mpi !uncommenting this makes a difference (****) call MPI_...(MPI_COMM_WORLD,...,ierr) end subroutine ... end module x program main use mpi use x MPI...

Boost.MPI: What's received isn't what was sent!

I am relatively new to using Boost MPI. I have the libraries installed and the code compiles, but I am getting a very odd error: some integer data received by the slave nodes is not what was sent by the master. What is going on? I am using boost version 1.42.0, compiling the code using mpic++ (which wraps g++ on one cluster and icpc o...

Using PHP and MPI

I currently have a PHP page that allows the user to upload a file. Once they upload the file, it runs a program on the file using MPI. The problem is that the script says it cannot find the file .mpd.conf (a config file that must be present in the user's home directory). I'm guessing that this is because it is running as a different user...

Group MPI tasks by host

I want to easily perform collective communications independently on each machine of my cluster. Say I have 4 machines with 8 cores each; my MPI program would run 32 MPI tasks. What I would like is, for a given function: on each host, only one task performs a computation while the other tasks do nothing during that computation. In my examp...

Non-blocking receive in MPI + OCaml?

OcamlMpi provides blocking send and receive. Has anyone done a non-blocking receive for ocamlmpi? ...