
I am trying to run some tests using OpenMPI, processing data in an array by splitting the work up across nodes (the second part is with matrices). I am running into a problem: the data array is being initialized in every process, and I don't know how to prevent this from happening.

How, using ANSI C and OpenMPI, can I create this variable-length array just once? I tried making it static and global, but that didn't help.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <mpi.h>

#define NUM_THREADS 4
#define NUM_DATA 1000

static int *list = NULL;

/* search is defined elsewhere; declared here so the snippet compiles */
void search(int *list, int start, int end);

int main(int argc, char *argv[]) {
  int numprocs, rank, namelen;
  char processor_name[MPI_MAX_PROCESSOR_NAME];
  int n = NUM_DATA*NUM_DATA;
  printf("hi\n");
  int i;
  if(list == NULL)
  {
    printf("ho\n");
    list = malloc(n*sizeof(int));

    for(i = 0; i < n; i++)
    {
      list[i] = rand() % 1000;
    }
  }

  int position;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Get_processor_name(processor_name, &namelen);
  printf("Process %d on %s out of %d\n", rank,processor_name, numprocs);

  clock_t start = clock();

  position = n / NUM_THREADS * rank;
  search(list,position, n / NUM_THREADS * (rank + 1));

  printf("Time elapsed: %f seconds\n",  ((double)clock() - (double)start) /(double) CLOCKS_PER_SEC);

  free(list);

  MPI_Finalize();
  return 0;
}
A:

Probably the easiest way is to have the rank 0 process do the initialization while the other processes block. Keep in mind that each MPI rank is a separate process with its own address space, so a static or global variable isn't shared between ranks; that's why your array gets initialized once per process no matter what you try. Have rank 0 build the array, send each of the other processes its share, and then they can all start their work.

A basic example trying to call your search function (NB: it's dry-coded):

#include <stdlib.h>
#include <mpi.h>

#define NUM_THREADS 4
#define NUM_DATA 1000

/* search is defined elsewhere, as in the question */
void search(int *list, int start, int end);

int main(int argc, char *argv[]) {
   int *list;
   int numprocs, rank, namelen, i, n;
   int chunksize,offset;
   char processor_name[MPI_MAX_PROCESSOR_NAME];

   MPI_Status stat;

   n = NUM_DATA * NUM_DATA;

   MPI_Init(&argc, &argv);
   MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Get_processor_name(processor_name, &namelen);

   // Note: you'll need to handle n % NUM_THREADS != 0, but I'm ignoring
   // that for now. This also assumes you launch with exactly NUM_THREADS
   // processes, so chunksize * numprocs covers the whole array.
   chunksize = n / NUM_THREADS;

   if (rank == 0) {
      //Think of this as a master process
      //Do your initialization in this process
      list = malloc(n*sizeof(int));

      for(i = 0 ; i < n; i++)
      {
         list[i] = rand() % 1000;
      }

      // Once you're ready, send each slave process a chunk to work on
      offset = chunksize;
      for(i = 1; i < numprocs; i++) {
         MPI_Send(&list[offset], chunksize, MPI_INT, i, 0, MPI_COMM_WORLD);
         offset += chunksize;
      }

      search(list, 0, chunksize);

      //If you need some sort of response back from the slaves, do a recv loop here
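      //
      // A sketch of what that loop could look like (hypothetical
      // protocol: it assumes each slave sends back a single int result
      // on tag 1; adapt the count/type to whatever search produces):
      //
      //   int result;
      //   for (i = 1; i < numprocs; i++) {
      //      MPI_Recv(&result, 1, MPI_INT, i, 1, MPI_COMM_WORLD, &stat);
      //   }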
   } else {

      // If you're not the master, you're a slave process, so wait to receive data

      list = malloc(chunksize*sizeof(int));  
      MPI_Recv(list, chunksize, MPI_INT, 0, 0, MPI_COMM_WORLD, &stat);

      // Now you can do work on your portion
      search(list, 0, chunksize);

      //If you need to send something back to the master, do it here.
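      //
      // For the hypothetical protocol sketched in the master branch:
      //
      //   int result = ...; /* whatever search produced */
      //   MPI_Send(&result, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);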
   }

   free(list);
   MPI_Finalize();
   return 0;
}
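
For what it's worth, MPI_Scatter does this kind of even chunk distribution in a single call, so you don't have to manage the send loop yourself. Here's a rough, untested sketch of the same idea using it (it assumes numprocs divides n evenly, and note that every rank, including 0, receives its chunk into its own buffer):

#include <stdlib.h>
#include <mpi.h>

void search(int *list, int start, int end);  /* your existing function */

int main(int argc, char *argv[]) {
   int *list = NULL, *chunk;
   int numprocs, rank, n, chunksize, i;

   MPI_Init(&argc, &argv);
   MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);

   n = 1000 * 1000;
   chunksize = n / numprocs;   // again ignoring any remainder

   if (rank == 0) {
      // Only the root rank needs the full array
      list = malloc(n * sizeof(int));
      for (i = 0; i < n; i++)
         list[i] = rand() % 1000;
   }

   // Every rank (root included) receives chunksize ints into its own
   // buffer; the send arguments are only significant on the root rank.
   chunk = malloc(chunksize * sizeof(int));
   MPI_Scatter(list, chunksize, MPI_INT, chunk, chunksize, MPI_INT,
               0, MPI_COMM_WORLD);

   search(chunk, 0, chunksize);

   free(chunk);
   if (rank == 0)
      free(list);
   MPI_Finalize();
   return 0;
}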
Dusty
Thank you Dusty, that is exactly what I was looking for. I wasn't even thinking about having rank 0 be the master and the rest the workers; instead I was blindly looking for something else.
amischiefr
No problem. The rank 0 master is a really common concept across MPI, so I have to be careful not to do the opposite and use it as my only tool sometimes. =D
Dusty