views:

819

answers:

4

Hi, I have a program that has been implemented in C++, and now I want to add MPI support. The point is that an MPI binding for C++ exists, with an MPI namespace and everything. In my case I have a specific object that is suitable to become the parallelized process on the cluster.

Question: has anyone already done something like this? Can you give some advice? What is the best way to implement it? How do I start MPI inside the constructor? After starting MPI inside the constructor of the class, will all the intermediate calls be parallelized too? For example:

    MyClass obj;
    x = x;            // will this be parallelized?
    obj.calc();
    y = x++;          // will this be parallelized?
    z = obj.result();

Thanks for any help.

+4  A: 

I would really recommend picking up the Gropp MPI Book, it really helps for basic MPI!

Ed Woodcock
Thanks, the examples will help a lot.
lvcargnini
But it is using the MPI C++ bindings like a normal C application; that is my point. How will this work in OO programming?
lvcargnini
The MPI bindings are really just method calls, and bear very little resemblance to OO programming. You have to use them as they were intended: as methods to pass information from one computer to another.
Ed Woodcock
+5  A: 

MPI doesn't parallelize anything automatically; it only gives you an interface for sending data between nodes. Your code is written, and runs, as ordinary sequential code independently on each node, and every once in a while you send data to some other node or try to receive data from one.

sharptooth
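The run-the-same-program-everywhere model described in this answer can be sketched as follows. This is an illustrative sketch, not code from the thread: each rank computes its value independently, and data moves only where an explicit send/receive is written. It uses the C API from C++ (the thread advises against the C++ bindings) and must be launched with something like `mpirun -np 4 ./a.out`.

```cpp
// SPMD sketch: the same program runs on every node; behavior diverges
// only through explicit rank checks and explicit messages.
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // This line executes independently on every node.
    int value = rank * rank;

    if (rank != 0) {
        // Worker nodes explicitly send their result to node 0 ...
        MPI_Send(&value, 1, MPI_INT, 0, /*tag=*/0, MPI_COMM_WORLD);
    } else {
        // ... and node 0 explicitly receives each one. Nothing happens
        // automatically; every transfer is spelled out.
        for (int src = 1; src < size; ++src) {
            int received = 0;
            MPI_Recv(&received, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            std::cout << "node 0 got " << received
                      << " from node " << src << "\n";
        }
    }

    MPI_Finalize();
    return 0;
}
```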
Not exactly: it starts the same process on N nodes, and then, using messages, I share and update information among the nodes, also scattering and gathering, sometimes concentrating results on one specific node or updating the values on all nodes, doing what is called a reduction. My point is how this will behave in OO. Will only the object be parallelized? Will the MPI calls used inside the object be seen only inside the object, or is the environment outside the object affected too?
lvcargnini
What would be the point of spawning the same exact process on different nodes without issuing new data? The MPI implementations I have seen are written for a C or Fortran environment; there is no OOP in these implementations, so I'm not sure why you are asking about OOP. If you put MPI code inside a method of a class, the MPI code is called when that method executes, just like any other code you would put in the method.
San Jacinto
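To make this point concrete, here is a minimal sketch (the class name is borrowed from the question; the members are hypothetical) where MPI calls live inside ordinary methods and simply execute when the method executes. It assumes MPI_Init is called in main rather than in a constructor, and that all ranks reach the collective call.

```cpp
// Sketch: MPI calls inside class methods run when the method runs.
// The object itself is not "parallelized" in any special way.
#include <mpi.h>

class MyClass {                       // name from the question; illustrative
public:
    void calc() {
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        local_ = static_cast<double>(rank + 1);  // each node computes its part
    }

    double result() const {
        // A reduction: combine every node's local_ into one sum on node 0.
        double local = local_;        // copy: older MPI takes a non-const buffer
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        return total;                 // meaningful only on rank 0
    }

private:
    double local_ = 0.0;
};

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    MyClass obj;
    obj.calc();                // the MPI code inside runs here, sequentially
    double z = obj.result();   // collective call: every rank must reach it
    (void)z;
    MPI_Finalize();
    return 0;
}
```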
+2  A: 

As background information:

Most applications that use MPI are written in Fortran or C. Every major implementation of MPI is written in C.

Support for the MPI C++ bindings is sketchy at best: some of the MPI datatypes are not available (e.g. MPI_DOUBLE), and there are issues with I/O and with the order in which headers are included in the source files. There are also name-mangling issues if the MPI library was built with C and the application is built with Fortran or C++. The MPICH2 FAQ has entries to help work through these issues. I am less familiar with Open MPI and its particular behavior with Fortran and C++.

For your specific questions:

I think that you have a fundamental misunderstanding about what MPI is and is not, and about how application code should interact with the MPI libraries.

Parallel Programming with MPI is an excellent reference for learning to program with MPI. The code examples are in C, and most of the MPI APIs are shown in an example. I highly recommend that you work through this book to learn what parallel programming is and is not.

semiuseless
+1 for pointing out the fundamental misunderstanding.
San Jacinto
I received two different e-mails on another list: "MPI itself has an object-oriented design, so this should be no problem. I would discourage you from using the C++ bindings, since (to my knowledge) they might be removed from MPI 3.0 (there is such a proposal)." And: "There is a proposal that has passed one vote so far to deprecate the C++ bindings in MPI-2.2 (meaning: still have them, but advise against using them). This opens the door for potentially removing the C++ bindings in MPI-3.0."
lvcargnini
I would not worry too much about MPI 3.0 just yet. First, there is no hard release date for 3.0. Second, all major implementations of MPI will continue to support the 2.0 standard for a looooong time to allow an easy transition for their users/customers. As a fallback, you can always grab a copy of the last MPI release that does support the 2.0 standard and squirrel it away for the remaining life of your application.
semiuseless
+4  A: 

Chiming in on an old thread: I found Open MPI and Boost::MPI nice to work with. The object-oriented design of the library may be a bit bolted on, but I found it much nicer than raw MPI, especially the auto-serialization of many types, the rather extensible interface for gather/reduce functions, and the serialization of user types.

Xorlev
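As a rough sketch of the Boost.MPI style described above (assuming Boost.MPI is installed and the program is launched via mpirun): the environment object handles init/finalize via RAII, and reduce() takes care of the message passing, combining each rank's value with an ordinary function object.

```cpp
// Boost.MPI sketch: sum one value per rank onto rank 0.
#include <boost/mpi.hpp>
#include <functional>
#include <iostream>

namespace mpi = boost::mpi;

int main(int argc, char* argv[]) {
    mpi::environment env(argc, argv);   // RAII: MPI_Init / MPI_Finalize
    mpi::communicator world;

    int value = world.rank() + 1;       // each rank contributes its own value
    int sum = 0;
    // Reduction onto rank 0; serialization and messaging happen internally.
    mpi::reduce(world, value, sum, std::plus<int>(), 0);

    if (world.rank() == 0)
        std::cout << "sum over " << world.size() << " ranks: " << sum << "\n";
    return 0;
}
```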