mpi

MPI signal handling

When using mpirun, is it possible to catch signals (for example, the SIGINT generated by ^C) in the code being run? For example, I'm running a parallelized Python code. I can use except KeyboardInterrupt to catch those errors when running python blah.py by itself, but I can't when doing mpirun -np 1 python blah.py. Does anyone have a sugge...
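A minimal sketch in C (rather than the asker's Python) of the kind of handler involved: it installs a SIGINT handler inside an MPI program and waits to see whether ^C under mpirun ever reaches it. Whether the launcher forwards or swallows the signal is implementation-dependent; the handler name and the polling loop are illustrative assumptions.

    /* Sketch: does a SIGINT handler installed by the MPI program ever fire under mpirun? */
    #include <mpi.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigint = 0;

    static void handle_sigint(int sig) { (void)sig; got_sigint = 1; }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        struct sigaction sa = {0};
        sa.sa_handler = handle_sigint;
        sigaction(SIGINT, &sa, NULL);

        while (!got_sigint)          /* spin until ^C is delivered to this process */
            sleep(1);

        printf("caught SIGINT, shutting down cleanly\n");
        MPI_Finalize();
        return 0;
    }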

MPI user-defined datatypes, is what I'm doing safe?

First time using MPI outside some simple practice apps, and something's not going right. I have a class defined with the following members (methods omitted for the sake of readability and conserving screen space): class particle { public: double _lastUpdate; float _x, _y, _xvel, _yvel; bool _isStatic; bool _isForeign; ...
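A hedged sketch of one common approach for a layout like this: describe a plain-C analogue of the class with MPI_Type_create_struct and resize it to the struct's true extent. The struct name, the field grouping, and the use of MPI_C_BOOL for the bool members are assumptions, not the asker's actual code.

    #include <mpi.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Plain-C analogue of the particle class from the question. */
    typedef struct {
        double lastUpdate;
        float  x, y, xvel, yvel;
        bool   isStatic;
        bool   isForeign;
    } particle;

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int          blocklens[3] = { 1, 4, 2 };   /* 1 double, 4 floats, 2 bools */
        MPI_Aint     displs[3]    = { offsetof(particle, lastUpdate),
                                      offsetof(particle, x),
                                      offsetof(particle, isStatic) };
        MPI_Datatype types[3]     = { MPI_DOUBLE, MPI_FLOAT, MPI_C_BOOL };
        MPI_Datatype tmp, mpi_particle;

        MPI_Type_create_struct(3, blocklens, displs, types, &tmp);
        /* Resize so the extent matches sizeof(particle), covering any padding. */
        MPI_Type_create_resized(tmp, 0, sizeof(particle), &mpi_particle);
        MPI_Type_commit(&mpi_particle);

        /* ... send/receive arrays of particle with the mpi_particle type ... */

        MPI_Type_free(&mpi_particle);
        MPI_Type_free(&tmp);
        MPI_Finalize();
        return 0;
    }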

MPI function definition

I wrote a program in C using MPI (Message Passing Interface) that recursively computes the inverse of a lower triangular matrix. Every CPU sends 2 submatrices to two other CPUs, which compute them and give them back to the calling CPU. When the calling CPU has its submatrices, it has to perform a matrix multiplication. In the recurrence ...
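As a rough illustration of the dispatch pattern described (one caller, two workers), the sketch below sends one block to each worker and waits for both results before the caller's own multiplication step; the block size, tags, rank assignment, and the elided inversion/multiplication are all assumptions.

    #include <mpi.h>

    #define N 4   /* assumed submatrix dimension; run with at least 3 processes */

    int main(int argc, char **argv)
    {
        int rank;
        double sub1[N * N], sub2[N * N];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* ... fill sub1 and sub2 from the lower triangular matrix ... */
            MPI_Send(sub1, N * N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            MPI_Send(sub2, N * N, MPI_DOUBLE, 2, 0, MPI_COMM_WORLD);
            MPI_Recv(sub1, N * N, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(sub2, N * N, MPI_DOUBLE, 2, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* ... caller now multiplies the two returned blocks ... */
        } else if (rank == 1 || rank == 2) {
            double buf[N * N];
            MPI_Recv(buf, N * N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* ... worker computes on its block ... */
            MPI_Send(buf, N * N, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }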

Need help attaching gdb to my project

I use VS2k8 to write and compile (but not run) a program using the MPICH2 libraries on Vista x64. I then use mpiexec from the command line to launch the program (with only 1 process for the purposes of debugging), and I'd like to attach gdb to it. Simply using attach or gdb --pid=### doesn't work (I get the error Can't attach to process)...
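One widely used workaround, sketched below under POSIX assumptions (on Windows the same idea works with _getpid from process.h): have the freshly launched process print its pid and spin in a loop so the debugger can be attached, then clear the flag from inside gdb.

    #include <mpi.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        volatile int hold = 1;
        printf("pid %d waiting for debugger attach...\n", (int)getpid());
        fflush(stdout);
        while (hold)        /* in gdb: attach <pid>, then `set var hold = 0`, then `continue` */
            sleep(1);

        /* ... rest of the program, now running under the debugger ... */

        MPI_Finalize();
        return 0;
    }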

.NET MPI implementation?

What is the most mature .NET MPI implementation? A quick Google search turned up the two below, but I'm not familiar with either of them. I believe the first item (mpi.net) is based on Microsoft MPI. Any thoughts? http://www.osl.iu.edu/research/mpi.net/ http://www.purempi.net/ ...

Which python mpi library to use?

I'm starting work on some simulations using MPI and want to do the programming in Python/scipy. The scipy site lists a number of MPI libraries, but I was hoping to get feedback on quality, ease of use, etc. from anyone who has used one. ...

Performing BLAST/SmithWaterman searches directly from my application

I'm working on a small application and thinking about integrating BLAST or other local alignment searches into it. My searching has only turned up programs that need to be installed and called as external programs. Is there a way short of implementing it from scratch myself? Any pre-made library, perhaps? ...

MPI array synchronization

Hello, I am learning MPI, so I thought I could write a simple odd-even sort for 2 processors. The first processor sorts the even array elements and the second the odd ones. I'm using a global array for the 2 processors, so I need synchronization (something like a semaphore or lock variable) because I get bad results. How is this problem solved in MPI? M...
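A sketch of the usual MPI answer, assuming exactly 2 ranks and an 8-element array: there is no shared global array to lock, so in each phase the ranks exchange data explicitly (here, naively, the whole array) instead of synchronizing on a semaphore; the compare/swap and merge steps are left as comments.

    #include <mpi.h>

    #define N 8

    int main(int argc, char **argv)
    {
        int rank, a[N] = { 7, 3, 6, 1, 5, 0, 4, 2 };   /* each rank starts with its own copy */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int phase = 0; phase < N; ++phase) {
            /* ... this rank compares/swaps the pairs it owns for this phase ... */

            int other[N];
            /* Exchange full arrays so both ranks see the partner's updates. */
            MPI_Sendrecv(a,     N, MPI_INT, 1 - rank, 0,
                         other, N, MPI_INT, 1 - rank, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* ... merge `other` back into `a` as the algorithm requires ... */
        }

        MPI_Finalize();
        return 0;
    }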

What are some scenarios for which MPI is a better fit than MapReduce?

As far as I understand, MPI gives me much more control over how exactly different nodes in the cluster will communicate. In MapReduce/Hadoop, each node does some computation, exchanges data with other nodes, and then collates its partition of results. Seems simple, but since you can iterate the process, even algorithms like K-means or P...
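For the iterative case the excerpt alludes to, a minimal sketch of the MPI pattern: the working data stays resident in each process across iterations, and only a small reduction crosses the network each round; the k-means-style local update itself is elided and the iteration count is an assumption.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        double local_sum = 0.0, global_sum = 0.0;
        for (int iter = 0; iter < 100; ++iter) {
            /* ... update local_sum from the points this rank keeps in memory ... */
            MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                          MPI_SUM, MPI_COMM_WORLD);
            /* ... use global_sum to update the model, then iterate again
                   without re-reading or re-shuffling the input data ... */
        }

        MPI_Finalize();
        return 0;
    }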

Boost.MPI problem

Hello all, I'm working on an HPC cluster. An old version of Boost is installed on it, and that Boost library doesn't have Boost.MPI. I asked the admins to install it on the HPC, but they asked me to install it in my home directory instead. So I installed both Boost and Boost.MPI in my home directory. The Boost library seems to work ...

MPI overhead in a shared-memory setup

I want to parallelize a program. It's not that difficult with threads working on one big data structure in shared memory. But I want to be able to distribute it over a cluster, and I have to choose a technology to do that. MPI is one idea. The question is what overhead MPI (or another technology) will have if I skip the implementation of speci...

Best books to learn MPI programming

What are the best books for learning MPI (C/C++ implementations)? ...

Is the PVM (parallel virtual machine) library widely used in HPC?

Has everyone migrated to MPI (message passing interface) or is PVM still widely used in supercomputers and HPC? ...

GCC performance

Hello, I am doing parallel programming with MPI on a Beowulf cluster. We wrote a parallel algorithm for simulated annealing. It works fine. We expect execution 15 times faster than with the serial code. But we did some runs of the serial C code on different architectures and operating systems just so we could have different data sets for perfor...

Is MPI good for high-volume soft-realtime IPC?

If I had a single server with two process types, A (many processes, many threads) and B (one process with n threads on n CPUs), and I wanted to send a LARGE number of one-way messages from A to B, is MPI a better implementation for this than a custom implementation using Unix domain sockets, Windows named pipes, or shared memory? I was thi...

How to force an MPI application to open on a second monitor (Windows)

I use a visualization software package (ParaView) as a parallel MPI (MPICH2) program running on a cluster to drive a tiled display. The OS is Windows XP. Each node in that cluster has two graphics cards, and one monitor is connected to each card. The first monitor is for administrative usage. The second monitor (output) is connected to a beam...

Shared memory, MPI and queuing systems

My Unix/Windows C++ app is already parallelized using MPI: the job is split across N CPUs and each chunk is executed in parallel; it is quite efficient, with very good speed scaling, and the job is done right. But some of the data is repeated in each process, and for technical reasons this data cannot easily be split over MPI (...). For example: 5...
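One possible direction, sketched on the assumption that an MPI implementation with the MPI-3 shared-memory windows is available: MPI_Win_allocate_shared lets the ranks on one node map a single copy of the repeated data; the buffer size and element type below are illustrative assumptions.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Group the ranks that can share memory (i.e., the ranks on the same node). */
        MPI_Comm node_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);

        int node_rank;
        MPI_Comm_rank(node_comm, &node_rank);

        /* Only one rank per node allocates the repeated data (example size). */
        MPI_Aint bytes = (node_rank == 0) ? (MPI_Aint)100 * 1024 * 1024 : 0;
        double  *base;
        MPI_Win  win;
        MPI_Win_allocate_shared(bytes, (int)sizeof(double), MPI_INFO_NULL,
                                node_comm, &base, &win);

        /* The other ranks on the node query rank 0's segment and reuse the same memory. */
        MPI_Aint qsize; int disp_unit;
        MPI_Win_shared_query(win, 0, &qsize, &disp_unit, &base);

        /* ... all ranks on the node read the shared data through `base` ... */

        MPI_Win_free(&win);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }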

MPI usage problem

I installed MPI on Windows. I can use its libraries. The problem is that on Windows, when I write mpiexec -n 4 proj.exe at the command prompt, it does not perform the proper operations. 4 different processes each run the whole code file separately. They don't behave like parallel processes that work only between the MPI_Init and MPI_Finalize row...
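What the excerpt describes is MPI's standard SPMD model, illustrated in the sketch below: mpiexec -n 4 starts 4 complete copies of the program, every statement runs in every copy (even before MPI_Init), and work is divided by branching on the rank rather than by MPI_Init "activating" a parallel region.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        printf("this line prints once per process, even before MPI_Init\n");

        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0)
            printf("rank 0 of %d doing the coordinator's work\n", size);
        else
            printf("rank %d doing its own share of the work\n", rank);

        MPI_Finalize();
        return 0;
    }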

MPI buffered send/receive order

I'm using MPI (with Fortran, but the question is more specific to the MPI standard than to any given language), and specifically the non-blocking send/receive functions isend and irecv. Now imagine the following scenario: Process 0: isend(stuff1, ...) isend(stuff2, ...) Process 1: wait 10 seconds irecv(in1, ...) irecv(in2, ...) ...
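A C rendering of the scenario (the Fortran original is not shown), assuming integer payloads and a common tag: because the two sends and the two receives use the same source, destination, tag, and communicator, MPI's non-overtaking rule matches them in posting order, regardless of the 10-second delay.

    #include <mpi.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int stuff1 = 1, stuff2 = 2;
            MPI_Request reqs[2];
            MPI_Isend(&stuff1, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(&stuff2, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &reqs[1]);
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        } else if (rank == 1) {
            sleep(10);                              /* the "wait 10 seconds" step */
            int in1, in2;
            MPI_Request reqs[2];
            MPI_Irecv(&in1, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &reqs[0]);
            MPI_Irecv(&in2, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &reqs[1]);
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
            printf("in1 = %d, in2 = %d\n", in1, in2);   /* prints 1 then 2 */
        }

        MPI_Finalize();
        return 0;
    }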

MPI Barrier C++

Dear all, I want to use MPI (MPICH2) on Windows. I write this call: MPI_Barrier(MPI_COMM_WORLD); and I expect it to block all processors until all group members have called it. But that is not what happens. Here is a schematic of my code: int a; if(myrank == RootProc) a = 4; MPI_Barrier(MPI_COMM_WORLD); cout << "My Rank = " << my...
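A completed C sketch of that schematic (the original is C++, and the MPI_Bcast line is an addition for illustration): MPI_Barrier only synchronizes timing, it does not copy a to the other ranks, so an explicit broadcast is what makes every rank see a == 4.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const int RootProc = 0;
        int myrank, a = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

        if (myrank == RootProc)
            a = 4;

        MPI_Barrier(MPI_COMM_WORLD);   /* every rank pauses here, but `a` is still 0 off-root */
        MPI_Bcast(&a, 1, MPI_INT, RootProc, MPI_COMM_WORLD);   /* now every rank has a == 4 */

        printf("My Rank = %d, a = %d\n", myrank, a);

        MPI_Finalize();
        return 0;
    }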