Hi,

I am wondering how to convert the following OpenMP program to an MPI program:

#include <omp.h>  
#define CHUNKSIZE 100  
#define N     1000  

int main (int argc, char *argv[])    
{  

int i, chunk;  
float a[N], b[N], c[N];  

/* Some initializations */  
for (i=0; i < N; i++)  
  a[i] = b[i] = i * 1.0;  
chunk = CHUNKSIZE;  

#pragma omp parallel shared(a,b,c,chunk) private(i)  
  {  

  #pragma omp for schedule(dynamic,chunk) nowait  
  for (i=0; i < N; i++)  
    c[i] = a[i] + b[i];  

  }  /* end of parallel section */  

return 0;  
}  

I have a similar program that uses OpenMP, and I would like to run it on a cluster.

Thanks!


UPDATE:

In the following toy code, I want to limit the parallel part within function f():

#include "mpi.h"  
#include <stdio.h>  
#include <string.h>  

void f();

int main(int argc, char **argv)  
{  
printf("%s\n", "Start running!");  
f();  
printf("%s\n", "End running!");  
return 0;  
}  


void f()  
{  
char idstr[32]; char buff[128];  
int numprocs; int myid; int i;  
MPI_Status stat;  

printf("Entering function f().\n");

MPI_Init(NULL, NULL);  
MPI_Comm_size(MPI_COMM_WORLD,&numprocs);  
MPI_Comm_rank(MPI_COMM_WORLD,&myid);  

if(myid == 0)  
{  
  printf("WE have %d processors\n", numprocs);  
  for(i=1;i<numprocs;i++)  
  {  
    sprintf(buff, "Hello %d", i);  
    MPI_Send(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD);  
  }  
  for(i=1;i<numprocs;i++)  
  {  
    MPI_Recv(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD, &stat);  
    printf("%s\n", buff);  
  }  
}  
else  
{  
  MPI_Recv(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &stat);  
  sprintf(idstr, " Processor %d ", myid);  
  strcat(buff, idstr);  
  strcat(buff, "reporting for duty\n");  
  MPI_Send(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD);  
}  
MPI_Finalize();  

printf("Leaving function f().\n");  
}  

However, the output is not what I expected: the printf calls before and after the parallel part are executed by every process, not just the main process:

$ mpirun -np 3 ex2  
Start running!  
Entering function f().  
Start running!  
Entering function f().  
Start running!  
Entering function f().  
WE have 3 processors  
Hello 1 Processor 1 reporting for duty  

Hello 2 Processor 2 reporting for duty  

Leaving function f().  
End running!  
Leaving function f().  
End running!  
Leaving function f().  
End running!  

So it seems to me that the parallel part is not limited to the region between MPI_Init() and MPI_Finalize().

A: 

You just need to assign a portion of the arrays (a, b, c) to each process. Something like this:

#include <mpi.h>

#define N 1000

int main(int argc, char *argv[])
{
  int i, myrank, myfirstindex, mylastindex, procnum;
  float a[N], b[N], c[N];

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &procnum);
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);


  /* Dynamic assignment of chunks,
   * depending on number of processes
   */
  if (myrank == 0)
    myfirstindex = 0;
  else if (myrank < N % procnum)
    myfirstindex = myrank * (N / procnum + 1);
  else
    myfirstindex = N % procnum + myrank * (N / procnum);

  if (myrank == procnum - 1)
    mylastindex = N;  /* upper bound is exclusive, so this includes element N-1 */
  else if (myrank < N % procnum)
    mylastindex = myfirstindex + N / procnum + 1;
  else
    mylastindex = myfirstindex + N / procnum;

  // Initializations
  for(i = myfirstindex; i < mylastindex; i++)  
    a[i] = b[i] = i * 1.0; 

  // Computations
  for(i = myfirstindex; i < mylastindex; i++)
    c[i] = a[i] + b[i];

  MPI_Finalize();
  return 0;
}
3lectrologos
Thanks, 3lectrologos! My actual problem is a little more complicated. I stated it in http://stackoverflow.com/questions/2156714/how-to-speed-up-this-problem-by-mpi. Please take a look. Thanks in advance.
Tim
Thank you, 3lectrologos! I just added an update to my question showing that it does not seem to be true that the parallel part begins with MPI_Init() and ends with MPI_Finalize().
Tim
A: 

Hello

You can try Intel Cluster OpenMP, which runs OpenMP programs on a cluster. It simulates a shared-memory computer on distributed-memory clusters using "software distributed shared memory": http://en.wikipedia.org/wiki/Distributed_shared_memory

It is easy to use and is included in the Intel C++ Compiler (9.1+), but it works only on 64-bit processors.

osgx