I've got processes that need to be farmed out over a cluster that supports PBS; however, due to limitations with the process, I can only run one process per node at a time. Each node has two processors, so the ghetto approach would be to simply request two processors per job. But that wastes a core per job. Is it possible to request a singl...
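A minimal sketch of the two usual options, assuming Torque with Maui (naccesspolicy is a Maui/Moab extension and may not be honored by a plain PBS install; job.sh is a hypothetical job script):

# crude approach: claim both processors so nothing else lands on the node
qsub -l nodes=1:ppn=2 job.sh
# Maui/Moab extension (assumption): one processor, but exclusive node access
qsub -l nodes=1:ppn=1 -l naccesspolicy=singlejob job.sh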
I have a Debian cluster with 2 nodes, each with two quad-core processors. I use Torque with Maui as the scheduler. When I try to run an MPI job with 16 processes, the scheduler is not able to run the job: either it puts it in the queue (although no job is running at that moment), or it runs and the resulting output file says that you was ...
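For reference, a sketch of a job script for that layout, assuming both nodes are declared with np=8 in Torque's nodes file and that $PBS_NODEFILE is populated as usual (the program name is hypothetical):

#PBS -l nodes=2:ppn=8
cd $PBS_O_WORKDIR
# one MPI rank per allocated slot, spread over both nodes
mpirun -np 16 -machinefile $PBS_NODEFILE ./my_mpi_program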
Hi,
I'm trying to use a cluster for the first time, so this is maybe a stupid question...
I would like to run the same program with various parameters, in an independent way, on multiple nodes.
Basically, I would like to compile and process various versions of the Linux kernel (one node/one version, and the version number is the parameter t...
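A minimal sketch of the usual pattern, assuming a job script build.sh that reads the version from an environment variable (qsub -v exports variables into the job; the version list is hypothetical):

# one independent job per kernel version, each landing on its own node
for v in 4.19 5.4 5.10; do
    qsub -v VERSION=$v -l nodes=1 build.sh   # build.sh reads $VERSION and builds that tree
done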
Hi
I have Platform Builder for Windows CE 6 and I want to build Windows CE 5. Is that impossible?
My Platform Builder is added to Visual Studio 2005.
Thanks.
...
Hi,
we are sending jobs to an external cluster using PBS, and we sometimes get errors. This is hard for us to debug because we cannot easily contact the administrator, and so on. I thought about the possibility of some kind of virtual machine, a "Virtual Server" with PBS installed on it, so I can play all the games I want and test my jo...
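As a sketch only, assuming an older Debian release that still ships Torque packages (package names and paths are from memory and vary by release), a single VM can act as both server and compute node:

apt-get install torque-server torque-mom torque-client
echo "$(hostname) np=2" >> /var/spool/torque/server_priv/nodes
# after restarting pbs_server and pbs_mom, try a trivial job
echo "sleep 10" | qsub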
My Unix/Windows C++ app is already parallelized using MPI: the job is split across N CPUs and each chunk is executed in parallel; it is quite efficient, with very good speed scaling, and the job is done right.
But some of the data is repeated in each process, and for technical reasons this data cannot easily be split over MPI (...).
For example:
5...
I have a 64-node cluster, running PBS Pro. If I submit many hundreds of jobs, I can get 64 running at once. This is great, except when all 64 jobs happen to be nearly I/O bound, and are reading/writing to the same disk. In such cases, I'd like to be able to still submit all the jobs, but have a max of (say) 10 jobs running at a given ...
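Two hedged sketches: the % slot limit on job arrays is a Torque extension (PBS Pro's array syntax differs), and the queue attribute name (max_run vs. max_running) varies by product and version, so both may need adapting; the queue name is hypothetical:

# Torque: submit 200 tasks as an array, at most 10 running at once
qsub -t 0-199%10 job.sh
# admin alternative: cap concurrently running jobs on a dedicated queue
qmgr -c "set queue io_queue max_running = 10"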
Hi,
I am learning OpenMPI on a cluster. Here is my first example. I expected the output to show responses from different nodes, but they all respond from the same node, node062. I just wonder why, and how I can actually get reports from different nodes, to show that MPI really is distributing processes to different nodes? Thanks and regards!...
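A quick sanity check, assuming the job runs under PBS so $PBS_NODEFILE lists the allocated hosts; without a hostfile (or a resource manager integrated with Open MPI), mpirun starts every rank on the local node, which would explain only ever seeing node062:

# print the host each rank actually lands on
mpirun -np 8 --hostfile $PBS_NODEFILE hostname

Inside the program, printing the result of MPI_Get_processor_name alongside the rank shows the same information.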
Hi,
some short and probably stupid questions about PBS:
1- I submit jobs using
qsub job_file
Is it possible to submit a (sub)job inside a job file?
2- I have the following script:
qsub job_a
qsub job_b
Before launching job_b, it would be great to have the results of job_a finished. Is it possible to put some kind of barrie...
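On question 2, a sketch using PBS job dependencies (the -W depend=afterok syntax is standard in Torque/PBS and means "start only after the listed job finishes successfully"):

# capture job_a's id and make job_b wait on it
jid=$(qsub job_a)
qsub -W depend=afterok:$jid job_b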
Does anybody know of a Java implementation of the DRMAA API that is known to work with PBS/Torque cluster software?
The background behind this: I would like to submit jobs to a newly set-up Linux cluster from Java, using a DRMAA-compliant API. The cluster is managed by PBS/Torque. Torque includes the PBS DRMAA 1.0 library for Torque/PBS that co...
Hello All,
Is there a way to specify the ppn (or equivalent) in SGE? I don't want to use all the CPUs in one node, so that I can have more memory per core. (In PBS you would do, for example, -l nodes=16:ppn=2.)
Thanks.
...
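For reference, a sketch of the usual SGE answer: there is no direct ppn, but a parallel environment with a fixed allocation_rule pins the number of slots used per host (the PE name here is hypothetical, and creating it needs admin rights):

# admin: create a PE whose allocation_rule is 2 (exactly 2 slots per host)
qconf -ap pe2
# user: 32 slots spread as 2 per node across 16 nodes
qsub -pe pe2 32 job.sh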
I am running R on a multi-node Linux cluster. I would like to run my analysis in R using scripts or batch mode, without using parallel computing software such as MPI or snow.
I know this can be done by dividing the input data such that each node runs different parts of the data.
My question is: how exactly do I go about this? I am ...
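A minimal sketch of that pattern, assuming the chunk index is passed through the environment and read inside R with Sys.getenv (file names are hypothetical):

# one batch job per pre-split chunk of the input data
for i in 1 2 3 4; do
    qsub -v CHUNK=$i run_r.sh
done
# run_r.sh runs:   R CMD BATCH --no-save analysis.R out_$CHUNK.Rout
# analysis.R picks its slice with:  chunk <- as.integer(Sys.getenv("CHUNK"))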
I have a function (a neural network model) that produces figures. I wish to test several parameters, methods, and different inputs (meaning hundreds of runs of the function) from Python, using PBS on a standard cluster with Torque.
Note: I tried parallelpython, ipython, and such, and was never completely satisfied, since I want something sim...
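One framework-free sketch: generate a job per parameter combination and pass the values through qsub -v (script and parameter names are hypothetical):

# sweep two hypothetical parameters; each combination becomes one PBS job
for lr in 0.01 0.1; do
    for hidden in 32 64 128; do
        qsub -v LR=$lr,HIDDEN=$hidden run_model.sh   # run_model.sh calls: python model.py $LR $HIDDEN
    done
done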
I work in a research group and we use the PBS queueing system. I'm no PBS master, but I wanted to script a check for whether a job was running. To do this, I first grab a string of all the jobs by using the results of a qstat call as my argument to qstat -f, then take the detailed list of all jobs and search it for the submitted file...
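A sketch of that check as one pipeline, assuming the job is identifiable by its Job_Name in the qstat -f output (the name is hypothetical):

# exit status 0 if a job with this name is still known to the server
if qstat -f | grep -q "Job_Name = my_submitted_job"; then
    echo "job is present"
fi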
I would like to run a script when all of the jobs that I have sent to a server are done.
For example, I send
ssh server "for i in config*; do qsub ./run 1 $i; done"
And I get back a list of the jobs that were started. I would like to automatically start another script on the server to process the output from these jobs once all are c...
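A sketch using a dependency list, assuming Torque/PBS's -W depend=afterok, which accepts a colon-separated list of job ids (the post-processing script name is hypothetical):

# collect every submitted job id, then chain the post-processing job on all of them
ids=""
for i in config*; do
    ids="$ids:$(qsub ./run 1 $i)"
done
qsub -W depend=afterok$ids process_output.sh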
I am using mpiexec to run a couple of hello-world executables. They each run, but the number of processes is always 1, when it looks like there should be 4 processes. Does someone understand why? Also, I'm not sure why stty is giving me an invalid argument. Thanks!
Here is the output:
/bin/stty: standard input: invalid argument...
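A common cause is a mismatch between the mpiexec used to launch and the MPI library the binaries were linked against: each process then initializes as a standalone run with a world size of 1. A quick check, on the assumption that launcher and compiler should come from the same installation (the executable name is hypothetical):

# confirm mpiexec and mpicc come from the same MPI installation
which mpiexec mpicc
mpiexec -np 4 hostname        # should print 4 lines
mpiexec -np 4 ./hello_world   # if size is still 1, recompile with the matching mpicc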