views: 1284

answers: 4

I want to prevent my script running more than once at a time.

My current approach is

  • create a semaphore file containing the pid of the running process
  • read the file back; if my process-id is not in it, exit (you never know...)
  • at the end of the processing, delete the file
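A minimal sketch of those three steps (the lock path is a hypothetical choice):

```shell
#!/bin/bash
# Sketch of the PID-file scheme described above.
LOCKFILE=/tmp/myscript.pid

if [ -e "$LOCKFILE" ]; then
    echo "Lock file exists, another instance may be running." >&2
    exit 1
fi
echo $$ > "$LOCKFILE"

# Step two: read the file back and verify it really holds our PID.
if [ "$(cat "$LOCKFILE")" != "$$" ]; then
    echo "Lock file holds a different PID, exiting." >&2
    exit 1
fi

# ... do the actual work ...

# Step three: delete the semaphore file.
rm -f "$LOCKFILE"
```

Note that the window between the `-e` test and the `echo` is a race: two instances starting at the same moment can both pass the test. That gap is what the atomic approaches in the answers close.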

To keep the process from hanging forever, I set up a cron job that periodically checks whether the file is older than the maximum allowed running time and kills the process if it is still running.
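The cron-side check might look like this sketch (the 60-minute limit and the path are assumptions):

```shell
#!/bin/bash
# Hypothetical watchdog run from cron: if the semaphore file is older than
# MAXAGE_MIN minutes, kill the recorded PID and remove the file.
LOCKFILE=/tmp/myscript.pid
MAXAGE_MIN=60

if [ -e "$LOCKFILE" ] && [ -n "$(find "$LOCKFILE" -mmin +"$MAXAGE_MIN")" ]; then
    pid=$(cat "$LOCKFILE")
    kill "$pid" 2>/dev/null
    rm -f "$LOCKFILE"
fi
```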

Is there a risk that I'm killing a wrong process?

Is there a better way to perform this as a whole?

-Thanks!

+13  A: 

Use flock(1) to take an exclusive, scoped lock on a file descriptor. This way you can even synchronize different parts of the script.

#!/bin/bash

# Makes sure we exit if flock fails.
set -e

(
  # Wait for lock on /var/lock/.myscript.exclusivelock (fd 200) for 10 seconds
  flock -x -w 10 200

  # Do stuff

) 200>/var/lock/.myscript.exclusivelock

This ensures that the code between "(" and ")" is run by only one process at a time and that the process does not wait too long for a lock.

Caveat: this particular command is a part of util-linux-ng. If you run an operating system other than Linux, it may or may not be available.
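When the script runs from cron you usually want to skip a run rather than queue up: the -n flag makes flock fail immediately if the lock is held, and flock can also wrap the whole command for you (the lock path here is hypothetical):

```shell
#!/bin/bash
# Non-blocking variant: run a command under an exclusive lock, or skip.
# flock creates /tmp/myscript.lock if it does not exist yet.
flock -n /tmp/myscript.lock -c 'echo "doing the work"' \
    || echo "skipped: another instance holds the lock"
```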

Alex B
Works! Great answer! Many Thanks!
Oli
Apparently it's missing in Debian etch, but it will be available in lenny: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=435272
Bruno De Fraine
A: 

I'd change the step two to the following:

  • read the file; if my process-id is not in it, check whether the process with the PID from the file is still running. If it is, exit; if not, just overwrite the lock (semaphore) file

This will make sure you don't have problems with stale lock files.
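The liveness test in that revised step two can use kill -0, which sends no signal but reports whether the PID still exists (the lock path is hypothetical):

```shell
#!/bin/bash
LOCKFILE=/tmp/myscript.pid

if [ -e "$LOCKFILE" ]; then
    oldpid=$(cat "$LOCKFILE")
    # kill -0 delivers no signal; it only checks whether the PID exists.
    if kill -0 "$oldpid" 2>/dev/null; then
        echo "Already running as PID $oldpid, exiting." >&2
        exit 1
    fi
    echo "Stale lock left by dead PID $oldpid, taking over." >&2
fi
echo $$ > "$LOCKFILE"

# ... do stuff ...

rm -f "$LOCKFILE"
```

Caveat: PIDs are reused, so kill -0 can report a false positive for an unrelated process; that is why checking the process name as well (e.g. via ps) is suggested below.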

Regarding your other question: there is always a chance of killing the wrong process, so make sure you check all the information about the running script, such as the script file name. And, of course, don't forget to delete the lock file afterwards.

Milan Babuškov
A: 

The flock path is the way to go. Think about what happens when the script dies unexpectedly: with flock you just lose the lock, and that is not a problem. Also, note that an evil trick is to take a flock on the script itself .. but that of course lets you run full-steam-ahead into permission problems.
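The "flock on the script itself" trick could be sketched like this (as the answer warns, it assumes everyone running the script has the needed permissions on the script file):

```shell
#!/bin/bash
# Lock the script's own file instead of a separate lock file.
exec 200<"$0"                  # open this very script read-only on fd 200
if ! flock -n 200; then
    echo "Another instance of $0 is already running." >&2
    exit 1
fi

# ... do stuff; the lock dies with the process, so a crash cannot leave it stale ...
```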

jlouis
+1  A: 

You need an atomic operation like flock, or this will eventually fail.

But what if flock is not available? Well, there is mkdir. That's an atomic operation too: only one process will succeed with the mkdir; all the others will fail.

So the code is:

if mkdir /var/lock/.myscript.exclusivelock
then
  # do stuff
  :
  rmdir /var/lock/.myscript.exclusivelock
fi

You need to take care of stale locks, or after a crash your script will never run again.
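One common refinement of the mkdir scheme (a sketch; the path is hypothetical) is to record the PID inside the lock directory and install a trap, so the lock is removed however the script exits and a leftover lock can be recognized as stale:

```shell
#!/bin/bash
LOCKDIR=/tmp/myscript.lock

if mkdir "$LOCKDIR" 2>/dev/null; then
    trap 'rm -rf "$LOCKDIR"' EXIT    # release the lock however we exit
    echo $$ > "$LOCKDIR/pid"

    # ... do stuff ...
else
    otherpid=$(cat "$LOCKDIR/pid" 2>/dev/null)
    if [ -n "$otherpid" ] && ! kill -0 "$otherpid" 2>/dev/null; then
        echo "Stale lock: PID $otherpid is dead; remove $LOCKDIR to recover." >&2
    else
        echo "Another instance (PID ${otherpid:-unknown}) is running." >&2
    fi
    exit 1
fi
```

The trap covers normal exits and most signals, but not `kill -9`, so the stale-PID check in the else branch is still worth having.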

Gunstick