I'm currently working on a cluster running ClusterVisionOS 3.1. This is my first time working with a cluster, so I probably haven't tried the "obvious".

I can submit a single job to the cluster with the "qsub" command (this I got working properly), but the problem starts when submitting multiple jobs at once. I could write a script that sends them all at once, but then all the nodes would be occupied with my jobs, and there are other people here who also want to submit their jobs.
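For reference, a single-job submission on a Torque/PBS-style scheduler (which "qsub" suggests) typically uses a small wrapper script like the one below. The job name, resource request, and program name are illustrative assumptions, not details from this cluster:

```shell
#!/bin/sh
#PBS -N myjob                # job name (assumption)
#PBS -l nodes=1:ppn=1        # request 1 processor on 1 node
#PBS -l walltime=01:00:00    # wall-clock limit (assumption)

# PBS starts jobs in $HOME; change to the submission directory first.
cd "$PBS_O_WORKDIR"
./my_program                 # hypothetical executable
```

Submitted with "qsub job.sh"; the exact directives accepted vary between batch systems.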

So here's the setup: 32 nodes, with 4 processors/slots each.

Ideally, I would tell the cluster to use only 3 nodes (12 processors) and queue all my jobs on those nodes/processors, if that is even possible. If each job could use a single processor, that would be perfect.

A: 

OK, so I guess I found out there is no built-in solution to this problem. My workaround is a script that connects to the cluster through ssh and checks how many jobs are already running under my user name. If that number does not exceed a limit, say 20 jobs at the same time, the script keeps submitting jobs; otherwise it waits.

Maybe it's an ugly solution, but it works!
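The throttling loop described above can be sketched roughly as follows. The cap of 20, the "job_*.sh" naming, the poll interval, and the use of "qstat"/"qsub" in Torque/PBS style are all assumptions; adapt them to your site:

```shell
#!/bin/sh
# Keep at most MAX_JOBS of this user's jobs queued/running at once,
# submitting the rest as slots free up.

MAX_JOBS=20
POLL_SECONDS=60

# True (exit 0) while the given job count is still under the cap.
below_cap() {
    [ "$1" -lt "$MAX_JOBS" ]
}

# Count this user's jobs; qstat's output format varies between batch
# systems, so we simply count the lines mentioning the user name.
my_jobs() {
    qstat -u "$USER" 2>/dev/null | grep -c "$USER"
}

for job in job_*.sh; do
    [ -e "$job" ] || continue        # no matching job scripts
    # Wait until a slot frees up before submitting the next job.
    until below_cap "$(my_jobs)"; do
        sleep "$POLL_SECONDS"
    done
    qsub "$job"
done
```

Run from an ssh session on the head node (or wrapped in `ssh cluster 'sh throttle.sh'`), this keeps submitting without monopolizing the whole queue.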

As for the processor question: the jobs were already being assigned to individual processors, fully utilizing each node.

lugte098