This is a little thing that bothers me every now and then:

  1. I write a shell script (bash) for a quick and dirty job
  2. I run the script, and it runs for quite a while
  3. While it's running, I edit a few lines in the script, configuring it for a different job
  4. But the first process is still reading the same script file and gets all screwed up.

Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).

I understand that there are many ways I could get around this problem. For example, I could try something like:

cat script.sh | sh

or

sh -c "`cat script.sh`"

... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
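For what it's worth, such a wrapper need not be elaborate. Here is a minimal sketch (the `runcopy` name, and the use of a plain temp file rather than a locked one, are my own illustration):

```shell
# runcopy: run a private copy of a script, so that edits to the original
# file cannot disturb the process that is already running it.
runcopy() {
    script=$1; shift
    tmp=$(mktemp) || return 1
    cp "$script" "$tmp" || { rm -f "$tmp"; return 1; }
    sh "$tmp" "$@"
    status=$?
    rm -f "$tmp"                 # clean up the copy once the run completes
    return $status
}

# Example: run a throwaway script through the wrapper.
echo 'echo "hello $1"' > /tmp/demo.sh
runcopy /tmp/demo.sh world       # prints: hello world
```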

So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...

+3  A: 

How about a solution that changes how you edit it?

If the script is running, before editing it, do this:

mv script script-old
cp script-old script
rm script-old

Since the shell keeps the script file open, everything will work as long as you don't change the contents of the inode it has open.

The above works because mv will preserve the old inode while cp will create a new one. Since a file's contents will not actually be removed if it is opened, you can remove it right away and it will be cleaned up once the shell closes the file.
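You can watch this happen with `ls -i` (a quick sketch in a scratch directory):

```shell
# mv preserves the inode; cp allocates a new one. A running shell keeps
# reading the original inode, while editors see only the fresh copy.
dir=$(mktemp -d) && cd "$dir"
echo 'echo hi' > script
orig=$(ls -i script | awk '{print $1}')
mv script script-old                  # same inode, new name
cp script-old script                  # new inode under the old name
moved=$(ls -i script-old | awk '{print $1}')
fresh=$(ls -i script | awk '{print $1}')
[ "$orig" = "$moved" ] && echo "mv preserved the inode"
[ "$orig" != "$fresh" ] && echo "cp created a new inode"
rm script-old    # the data lives on until the last open handle closes
```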

R Samuel Klatchko
Thanks, this is very helpful to know! It's pretty painless for me to keep in mind, but not the most ideal solution: it would be good to have something contained in the script file itself which requires the entire file be read at once. For example, if multiple developers are maintaining the same script, and it can run for a very long time, then you'd need all of the developers to ensure that they do this mv-cp-rm trick every time they edit the file...
Anonymous
Slightly simplified: `cp script script-old; mv script-old script`
Roger Pate
+1  A: 

Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.

ephemient
A: 

Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:

#!/usr/local/fastbash

or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
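A sketch of what that wrapper could look like (the paths and the `fastbash` name are illustrative; it uses the read-the-whole-file-into-memory idea from the question, so the caveat about very large scripts and argument-length limits still applies):

```shell
# Write a minimal fastbash wrapper: read the entire script into memory,
# then hand the text to bash, so later edits to the file are invisible.
cat > /tmp/fastbash <<'FAST'
#!/bin/sh
script=$1; shift
body=$(cat "$script") || exit 1
exec bash -c "$body" "$script" "$@"
FAST
chmod +x /tmp/fastbash

# A script would then begin with:  #!/usr/local/fastbash
printf '%s\n' 'echo "args: $1 $2"' > /tmp/job.sh
/tmp/fastbash /tmp/job.sh one two    # prints: args: one two
```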

swestrup
A: 

Thanks for the answers, which propose great solutions for a single user who can control the environment. However, these would be problematic in a scenario with multiple users editing the same script file (while a long-running process is interpreting it); I don't want to assume that other people will edit the file in a special manner. Also, a wrapper in the shebang isn't going to be very portable; I want to be able to copy the script to a new system without installing anything else.

Anyway, here's the best solution I've been able to come up with... I could begin every shell script with this header:

#!/bin/sh
tail -n +3 "$0" | sh -s "$@"; exit

Unfortunately, I think this will be limited by the size of the buffers used for standard streams; if the script file is too large, the end of it might not be streamed before someone has a chance to edit the file. So here's a clunkier solution:

#!/bin/sh
TMPSCRIPT=$(mktemp); tail -n +3 "$0" > "$TMPSCRIPT"; sh "$TMPSCRIPT" "$@"; exit
rm "$0"

Note that the temporary file is deleted immediately after the script begins executing, preventing a rogue user from sneaking in and editing it. (Of course, if a rogue user can get write permission to that file, you've got bigger problems to worry about.) As mentioned above, the old inode will still be accessible to the shell process even after the file is seemingly gone.

It's ugly, but it would be easy to create an emacs macro that writes this header on every shell script.

Does this seem reasonable?

Anonymous
I'll stand by my comment on the question: avoid hacks you don't quite understand or for which you can't see all the possible edge cases; use the right tool in the first place.
Roger Pate
Agreed: Perl and Python are generally far better alternatives. But there will be instances in which a shell script absolutely needs to be used, such as when maintaining someone else's script -- would you rewrite the whole thing from scratch? And there could be an instance in which this edited shell script will be running while someone else goes and edits the source file. In that admittedly rare situation, it would be nice to have a defensive mechanism ensuring that the currently running process won't be affected.
Anonymous
@Anonymous: I have rewritten shell scripts from scratch into Python before, including ones I wrote initially, and even convinced the original maintainer that it was an improvement (though I agree that convincing can be considerably harder than the rewriting in the first place). I'm not saying what you want is impossible, but addressing your "does this seem reasonable?". When you start asking that question, you need to take a step back and maybe choose something completely different.
Roger Pate
A: 

If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.

Presumably, other semi-intelligent editors would have similar options.

Ryan Thompson
Thanks, this is a great tip for emacs users, and I will probably make it my default setting from now on. But there's still the problem of defensively protecting my running script from being screwed up if someone else edits it...
Anonymous
A: 

According to the bash documentation if instead of

#!/bin/bash
body of script

you try

#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"

then I think you will be in business.
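Spelled out as a complete file, the pattern looks like this (a sketch; the quoted 'SETVAR' delimiter keeps the body from being expanded while it is captured, and the closing delimiter sits on its own line for portability):

```shell
#!/bin/bash
# The whole body is read into a variable before any of it executes, so
# edits to this file can no longer affect the current run.
script=$(cat <<'SETVAR'
echo "working on: $1"
# ... long-running body would go here ...
echo "finished"
SETVAR
)
eval "$script"
```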

Norman Ramsey
Thanks, Norman! This works perfectly if you ensure that your script ends in 'exit'. It has the downside, however, that you lose your pretty-print color formatting in emacs...
Anonymous
Damn emacs anyway!
Norman Ramsey
+1  A: 

The best answer I've found is a very slight variation on the solutions offered at http://stackoverflow.com/questions/2285403. Thanks to camh for noting the repost!

#!/bin/sh
{
    # Your stuff goes here
    exit
}

This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
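A quick way to convince yourself that the whole block really is parsed up front: the sketch below writes a script that clobbers its own file mid-run, yet the current run still finishes the original code.

```shell
# Build a self-overwriting script that uses the brace-plus-exit pattern.
cat > /tmp/braced.sh <<'DEMO'
#!/bin/sh
{
    echo 'echo MODIFIED' > "$0"   # rewrite the file we are running from
    echo ORIGINAL                 # still runs: the block was already parsed
    exit
}
DEMO
sh /tmp/braced.sh    # prints: ORIGINAL
sh /tmp/braced.sh    # prints: MODIFIED (the rewritten file)
```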

Thanks everyone for the help!

Anonymous