mpi

Find underlying compiler in configure

I have an application which is compiled using compiler wrappers such as h5fc/h5cc (the HDF5 compiler wrappers) or mpif90/mpicc (the MPI compiler wrappers). These are just wrappers, and it is possible, using the -show argument, to see the real underlying compiler, e.g. $ h5fc -show ifort -fPIC [...] -lz -lm $ mpif90 -show ifort [...] -lmp...

What are canonical examples of parallel computation?

I am writing a paper to test a new application that will demonstrate the benefits of parallel computation (compared to the traditional serial version of this application). I want to use the canonical examples of parallel computation in my paper. My first example is the parallel computation of pi. I would ideally like an exa...
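
A common first version of that example is numerical integration of pi by the midpoint rule: each rank sums a strided subset of the intervals and a reduction combines the partial sums. The sketch below is one minimal MPI rendering of it; the interval count n is an arbitrary assumption.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        const long n = 100000000;      /* number of intervals (assumed) */
        int rank, size;
        double local_sum = 0.0, pi = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const double h = 1.0 / (double)n;
        /* Each rank integrates a strided subset of the midpoints. */
        for (long i = rank; i < n; i += size) {
            double x = h * ((double)i + 0.5);
            local_sum += 4.0 / (1.0 + x * x);
        }
        local_sum *= h;

        /* Combine the partial sums on rank 0. */
        MPI_Reduce(&local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("pi ~= %.15f\n", pi);
        MPI_Finalize();
        return 0;
    }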

Spreading a job over different nodes of a cluster in Sun Grid Engine (SGE)

Hey, I'm trying to get Sun Grid Engine (SGE) to run the separate processes of an MPI job over all of the nodes of my cluster. What is happening is that each node has 12 processors, so SGE is assigning my 60 processes 12 at a time to 5 separate nodes. I'd like it to assign 2 processes to each of the 30 nodes available, because with 12 processes (d...

How to profile memory usage and performance of an Open MPI program in C

Hi, I'm looking for a way to profile my Open MPI program in C. I'm using Open MPI 1.3 with Linux Ubuntu 9.10, and my programs run on an Intel Duo T1600. What I want to profile is cache misses, memory usage, and execution time in any part of the program. Thanks for any reply. ...
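
Of the three, execution time is the only one MPI itself can measure, via MPI_Wtime; cache misses and memory usage need an external tool such as Valgrind's cachegrind/massif or PAPI hardware counters (suggestions, not the only options). A minimal timing sketch around a region of interest:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();      /* wall-clock time in seconds */
        /* ... region of interest ... */
        double t1 = MPI_Wtime();

        printf("rank %d: region took %.6f s\n", rank, t1 - t0);
        MPI_Finalize();
        return 0;
    }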

Which PETSc and MPI for Ubuntu on a dual-core system

Hello, I am working in scientific computing and developing a PETSc-based application for a multi-CPU system. For debugging purposes, I want to install that very software on my own PC, which is a dual-core system running Ubuntu (Karmic Koala). But I do not know which resources to use. There are Debian packages, as well as sources-archi...

Add received data to existing receive buffer in MPI_Sendrecv

Hi, I am trying to send data (forces) between 2 processes using MPI_Sendrecv. Normally the data in the receive buffer gets overwritten; I do not want to overwrite the data in the receive buffer, but instead add the received data to it. I can do the following: store the data from the previous time step in a different array and then add it aft...
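
MPI_Sendrecv has no built-in reduction, so one way to avoid the extra time-step copy is to receive into a scratch buffer and accumulate right after the call. A sketch; the names 'forces', 'incoming', and 'partner' are hypothetical:

    #include <mpi.h>

    void exchange_and_accumulate(double *forces, int n, int partner,
                                 MPI_Comm comm) {
        double *incoming = new double[n];  /* temporary receive buffer */

        /* Exchange with the partner rank; received values land in
           'incoming', so 'forces' is never overwritten. */
        MPI_Sendrecv(forces, n, MPI_DOUBLE, partner, 0,
                     incoming, n, MPI_DOUBLE, partner, 0,
                     comm, MPI_STATUS_IGNORE);

        for (int i = 0; i < n; ++i)        /* accumulate, not overwrite */
            forces[i] += incoming[i];

        delete[] incoming;
    }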

MPI_RECV receives 0 value when a variable is used as the source, but receives the proper value when the source number is hard-coded

I am trying to receive a variable from multiple processes as part of a DO loop. However, the value of the variable is 0 after the operation if I use a variable to represent the processor number. It works fine if I put the processor number in directly. Oddly enough, the exact same code works fine earlier in the program. Any thoughts o...
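
The DO loop suggests the question's code is Fortran; the C-style sketch below shows the analogous pattern, where a plain variable as the source argument is perfectly legal. If the Fortran version receives 0, two common culprits, guesses without seeing the code, are a status argument not declared as INTEGER status(MPI_STATUS_SIZE), which can silently corrupt nearby variables, and a tag or datatype that does not match the sender's.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            /* 'src' is an ordinary variable used as the source argument. */
            for (int src = 1; src < size; ++src) {
                int value;
                MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("got %d from rank %d\n", value, src);
            }
        } else {
            int value = rank * 10;
            MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }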

How to set core dump naming scheme without su/sudo?

Hello, I am developing an MPI program on a Linux machine where I do not have sudo/su access. As my program currently segfaults, I would like to examine the core dumps via gdb. Unfortunately, as the program is multi-threaded, all the threads write to one core dump. So I would like to be able to append the PID to each separate core dump fo...

undefined symbol `MPI_recv'

When I run my MPI program, written in C, it gives the error "undefined reference to `MPI_recv'". What should I do to solve this error? ...
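
MPI's C bindings are case-sensitive, so there is no MPI_recv; the function is MPI_Recv. If the spelling is already correct, the same "undefined reference" can also mean the program was linked without the MPI library, i.e. not built with the mpicc wrapper. A minimal correct call for comparison:

    #include <mpi.h>

    /* The correct spelling is MPI_Recv; this links cleanly when the file
       is compiled with mpicc/mpicxx. */
    int receive_int(int source, MPI_Comm comm) {
        int value;
        MPI_Recv(&value, 1, MPI_INT, source, 0, comm, MPI_STATUS_IGNORE);
        return value;
    }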

MPI column-cyclic distribution of a 2D array from root to other processes

Hi there, so as I wrote in the title, my problem is that I have a C program which makes use of the MPI library. I initialised a dynamic 2D array whose dimensions (rows, columns) are read from stdin at the root process. So far so good, no big deal. But when I try to distribute the elements column-cyclically among the other processes, I'm not making any...
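
One way to deal columns of a row-major C array cyclically is a strided derived datatype on the sending side: column j of an nrows x ncols matrix is nrows elements with stride ncols, which is exactly what MPI_Type_vector describes. A sketch under that assumption; the function and variable names are made up:

    #include <mpi.h>
    #include <vector>

    /* Root holds an nrows x ncols row-major matrix and deals column j
       to rank j % size (column-cyclic). */
    void distribute_columns(double *matrix, int nrows, int ncols,
                            int root, MPI_Comm comm) {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        /* One column: nrows elements, each 1 double apart by ncols. */
        MPI_Datatype column;
        MPI_Type_vector(nrows, 1, ncols, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        if (rank == root) {
            for (int j = 0; j < ncols; ++j) {
                int dest = j % size;
                if (dest == root) continue;   /* root keeps its own columns */
                MPI_Send(&matrix[j], 1, column, dest, j, comm);
            }
        } else {
            std::vector<double> col(nrows);   /* receive side: contiguous */
            for (int j = rank; j < ncols; j += size) {
                MPI_Recv(col.data(), nrows, MPI_DOUBLE, root, j, comm,
                         MPI_STATUS_IGNORE);
                /* ... store 'col' as this rank's copy of column j ... */
            }
        }
        MPI_Type_free(&column);
    }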

Sync only parts of a C++ vector using Boost.MPI

I have a std::vector (let's call it "data_vector") that I want to synchronize parts of across processors. I.e., I want to send the values from arbitrary indexes in that vector to other processors. I can easily do this with Boost's send() functions if I want to send the whole vector, but I really only need to send a small portion of it. ...
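
Since Boost.MPI serializes standard containers, one workable pattern is to pack just the needed (index, value) pairs into a small vector, send that, and scatter the values back into place on the receiving side. A sketch; the indices and ranks are placeholders:

    #include <boost/mpi.hpp>
    #include <boost/serialization/utility.hpp>  // serializes std::pair
    #include <boost/serialization/vector.hpp>   // serializes std::vector
    #include <cstddef>
    #include <utility>
    #include <vector>

    namespace mpi = boost::mpi;

    int main(int argc, char *argv[]) {
        mpi::environment env(argc, argv);
        mpi::communicator world;

        std::vector<double> data_vector(100, 0.0);

        if (world.rank() == 0) {
            // Pack only the entries that need syncing; these indices
            // are arbitrary placeholders.
            std::vector<std::pair<std::size_t, double> > updates;
            for (std::size_t i : {3u, 42u, 97u})
                updates.push_back(std::make_pair(i, data_vector[i]));
            world.send(1, 0, updates);
        } else if (world.rank() == 1) {
            std::vector<std::pair<std::size_t, double> > updates;
            world.recv(0, 0, updates);
            for (const auto &u : updates)   // scatter back into place
                data_vector[u.first] = u.second;
        }
        return 0;
    }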

Can't form MPI ring

Hi, I am facing a problem configuring and running MPI on my systems. Here is what I tried: 1) I ran 'mpd &' on one machine and then ran 'mpdtrace -l' on the same machine. I got this as output: "my-lappy_53430 (127.0.1.1)" 2) On another machine I ran 'mpd -h -p 53430 &' and got this error: akshey-desktop_39993: conn error in c...

Has anybody ever fully diagonalized a 200,000 × 200,000 symmetric matrix?

It is possible to diagonalize it with MATLAB on my university's cluster, but I want to do it with Fortran, using some parallel algorithm. I know ScaLAPACK can do it (but I do not know how to use it yet). Anyone have any suggestions? ...
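
For scale, the memory footprint alone rules out holding the dense matrix on a single typical node:

    200,000 x 200,000 elements x 8 bytes (double precision)
      = 3.2 x 10^11 bytes, roughly 298 GiB for one copy of the matrix

That is why a distributed solver such as ScaLAPACK's symmetric eigensolvers (PDSYEV/PDSYEVD), which store the matrix block-cyclically across a process grid, is the usual route; workspace and eigenvector storage push the total well beyond one copy.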

How to use MPI to organize asynchronous communication?

Hi, I now plan to use MPI to build a solver that supports asynchronous communication. The basic idea is as follows. Assume there are two parallel processes. Process 1 wants to periodically send good solutions it finds to process 2, and to ask process 2 for good solutions when it needs diversification. My question is: at some poin...
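
One standard pattern for this is non-blocking point-to-point: the sender posts MPI_Isend and keeps computing, while the receiver polls with MPI_Iprobe and only calls MPI_Recv once a message has actually arrived. A minimal sketch between two ranks; the tag and payload are illustrative assumptions:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int TAG_SOLUTION = 1;

        if (rank == 0) {
            double solution = 42.0;          /* placeholder "good solution" */
            MPI_Request req;
            MPI_Isend(&solution, 1, MPI_DOUBLE, 1, TAG_SOLUTION,
                      MPI_COMM_WORLD, &req);
            /* ... keep computing; complete the send whenever convenient ... */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            int arrived = 0;
            MPI_Status st;
            while (!arrived) {
                /* ... do local work between polls ... */
                MPI_Iprobe(0, TAG_SOLUTION, MPI_COMM_WORLD, &arrived, &st);
            }
            double incoming;
            MPI_Recv(&incoming, 1, MPI_DOUBLE, 0, TAG_SOLUTION,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %g\n", incoming);
        }
        MPI_Finalize();
        return 0;
    }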

How to run MPI on a laptop?

My OS is Ubuntu. I downloaded MPICH via Synaptic, then I tried to compile a code with: ifort hello.f. I get the error message: Cannot open include file 'mpif.h'. It seems that the compiler cannot find mpif.h. How do I fix it? ...

Unexpected return message from mpirun: "alarm clock"

I have the simplest "Hello world" code: #include <stdio.h> /* printf and BUFSIZ defined there */ #include <stdlib.h> /* exit defined there */ #include <mpi.h> /* all MPI-2 functions defined there */ int main(argc, argv) int argc; char *argv[]; { int rank, size, length; char name[BUFSIZ]; MPI_Init(&argc, &argv); MPI_Comm...
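
"Alarm clock" is usually the shell reporting that a process was killed by SIGALRM, which tends to point at a launcher or batch-system timeout rather than at the code itself; checking how the job is started is a reasonable first step. For reference, a complete ANSI-C version of the same program (the excerpt uses old K&R-style parameter declarations):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {          /* ANSI-style signature */
        int rank, size, length;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &length);

        printf("Hello from rank %d of %d on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }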

Round-robin processing with MPI (off by one/some)

I have an MPI implementation, basically for IDW2-based gridding on a set of sparsely sampled points. I have divided the jobs up as follows: all nodes read all the data (the last node does not need to, but whatever). Node 0 takes each data point and sends it to nodes 1...N-1 with the following code: int nodes_in_play = NNodes-2; for(int i=0;i...
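
With NNodes ranks and rank 0 acting as master, there are NNodes-1 workers (ranks 1 through NNodes-1), so nodes_in_play = NNodes-2 would leave the last worker idle, which matches an off-by-one symptom; this is a guess, since the loop body is cut off. A sketch of the dealing loop under that assumption:

    #include <mpi.h>

    /* Rank 0 deals one point at a time, round-robin over ranks
       1..NNodes-1. 'points' and 'npoints' are hypothetical names. */
    void deal_points(const double *points, int npoints, MPI_Comm comm) {
        int NNodes;
        MPI_Comm_size(comm, &NNodes);
        int workers = NNodes - 1;              /* ranks 1..NNodes-1 */
        for (int i = 0; i < npoints; ++i) {
            int dest = 1 + (i % workers);      /* round-robin over workers */
            MPI_Send(&points[i], 1, MPI_DOUBLE, dest, 0, comm);
        }
    }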

How to parallelize this situation with robots

Hello, I'm working on a robotics problem. The situation is something like this: there are N robots (generally N > 100), initially all at rest. Each robot attracts all other robots that are within its radius r. I have a set of equations with which I can compute the acceleration, velocity, and hence the position of each robot after time del...
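
This is essentially an N-body problem, and since every robot's update reads all positions from the previous step, a simple decomposition is: each rank owns N/size robots, updates only its own block, and an MPI_Allgather republishes all positions before the next time step. A sketch with 1-D positions for brevity; it assumes N is divisible by the number of ranks:

    #include <mpi.h>
    #include <vector>

    /* One time step: update this rank's block of robots, then make
       every position visible to every rank for the next step. */
    void step(std::vector<double> &x, int N, double dt, MPI_Comm comm) {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        int chunk = N / size;
        int begin = rank * chunk;

        std::vector<double> local(chunk);
        for (int i = 0; i < chunk; ++i) {
            /* ... compute the acceleration of robot begin+i from all x
                   within radius r, integrate velocity and position
                   over dt ... */
            local[i] = x[begin + i];           /* placeholder update */
        }

        /* Everyone contributes its block and receives all the others. */
        MPI_Allgather(local.data(), chunk, MPI_DOUBLE,
                      x.data(), chunk, MPI_DOUBLE, comm);
    }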

Problem with MPICH2 & mpi4py Installation

Hello, I'm on a Windows XP SP2 32-bit machine. I'm trying to install MPICH2 & mpi4py. I've downloaded & installed MPICH2-1.2.1p1, and I've downloaded mpi4py. When I run python setup.py install in the mpi4py\ directory, I get: running install running build running build_py running build_ext MPI configuration: directory 'C:\Program Files\MPICH2' M...

MPI on PBS cluster Hello World

I am using mpiexec to run a couple of hello world executables. They each run, but the number of processes is always 1, when it looks like there should be 4 processes. Does someone understand why? Also, I'm not sure why stty is giving me an invalid argument. Thanks! Here is the output: /bin/stty: standard input: invalid argument...