I've got a one-to-all broadcast function for a hypercube, written using MPI:

#include <math.h>
#include <mpi.h>

/* One-to-all broadcast on an n-dimensional hypercube, rooted at rank 0. */
void one2allbcast(int n, int rank, void *data, int count, MPI_Datatype dtype)
{
  MPI_Status status;
  int mask, partner;
  int mask2 = ((1 << n) - 1) ^ (1 << (n - 1));

  /* Walk the hypercube dimensions from highest to lowest. */
  for (mask = (1 << (n - 1)); mask; mask >>= 1, mask2 >>= 1)
  {
    if (rank & mask2 == 0)
    {
      partner = rank ^ mask;  /* neighbor across the current dimension */
      if (rank & mask)
        MPI_Recv(data, count, dtype, partner, 99, MPI_COMM_WORLD, &status);
      else
        MPI_Send(data, count, dtype, partner, 99, MPI_COMM_WORLD);
    }
  }
}

Upon calling it from main:

int main(int argc, char **argv)
{
  char message[] = "message";
  int n, rank;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &n);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  one2allbcast(floor(log(n) / log(2)), rank, message, sizeof(message), MPI_CHAR);

  MPI_Finalize();

  return 0;
}

Compiling and executing on 8 nodes, I get a series of errors reporting that processes 1, 3, 5, and 7 were stopped before receiving any data:

MPI_Recv: process in local group is dead (rank 1, MPI_COMM_WORLD)
Rank (1, MPI_COMM_WORLD): Call stack within LAM:
Rank (1, MPI_COMM_WORLD):  - MPI_Recv()
Rank (1, MPI_COMM_WORLD):  - main()
MPI_Recv: process in local group is dead (rank 3, MPI_COMM_WORLD)
Rank (3, MPI_COMM_WORLD): Call stack within LAM:
Rank (3, MPI_COMM_WORLD):  - MPI_Recv()
Rank (3, MPI_COMM_WORLD):  - main()
MPI_Recv: process in local group is dead (rank 5, MPI_COMM_WORLD)
Rank (5, MPI_COMM_WORLD): Call stack within LAM:
Rank (5, MPI_COMM_WORLD):  - MPI_Recv()
Rank (5, MPI_COMM_WORLD):  - main()
MPI_Recv: process in local group is dead (rank 7, MPI_COMM_WORLD)
Rank (7, MPI_COMM_WORLD): Call stack within LAM:
Rank (7, MPI_COMM_WORLD):  - MPI_Recv()
Rank (7, MPI_COMM_WORLD):  - main()

Where am I going wrong?

A: 

It is a common error to request an MPI communication after MPI_Finalize has been called. Before calling MPI_Finalize, make sure that all MPI calls have completed.

zoli2k
OK, so where should I call MPI_Finalize then? Do I need a barrier before it, or something else?
luvieere
Yes, you need a barrier before it. For example, you can set up one process to wait for an "all work is done" message from every other process and then call MPI_Finalize.
zoli2k
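
A minimal sketch of that suggestion, assuming a plain MPI_Barrier on MPI_COMM_WORLD is an acceptable stand-in for the "all work is done" message (the structure is illustrative, not the original program):

#include <mpi.h>

int main(int argc, char **argv)
{
  int rank;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  /* ... all sends and receives happen here ... */

  /* No rank can pass this point until every rank has reached it,
     so no process calls MPI_Finalize while a peer is still
     blocked in MPI_Recv. */
  MPI_Barrier(MPI_COMM_WORLD);

  MPI_Finalize();
  return 0;
}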
A: 

It turns out that the error was in the line

if (rank & mask2 == 0)

where I hadn't accounted for operator precedence: in C, == binds more tightly than the bitwise &, so the condition parses as rank & (mask2 == 0). The correct and functioning way of writing it is

if ((rank & mask2) == 0)

where the bitwise & gets evaluated first.
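
To see the two parses side by side, here is a minimal standalone sketch (the values of rank and mask2 are arbitrary, chosen only to make the results differ):

#include <stdio.h>

int main(void)
{
  int rank = 2, mask2 = 4;

  /* == binds tighter than &, so this is rank & (mask2 == 0),
     i.e. 2 & 0, which prints 0 */
  printf("%d\n", rank & mask2 == 0);

  /* with explicit parentheses: (2 & 4) == 0 is true, prints 1 */
  printf("%d\n", (rank & mask2) == 0);

  return 0;
}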

luvieere