I have certain critical bash scripts that are invoked by code I don't control and whose console output I can't see. I want a complete trace of what these scripts did for later analysis. To do this, I want to make each script self-tracing. Here is what I am currently doing:

#!/bin/bash
# if last arg is not '_worker_', relaunch with stdout and stderr
# redirected to my log file...
if [[ "$BASH_ARGV" != "_worker_" ]]; then      # $BASH_ARGV holds the last argument
    "$0" "$@" _worker_ >>/some_log_file 2>&1   # add tee if console output wanted
    exit $?
fi
# rest of script follows...

Is there a better, cleaner way to do this?

+2  A: 

Maybe you are looking for "set -x"?
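
For instance, since "set -x" writes its trace to stderr, you can point stderr at a log first. A minimal sketch (the path /some_log_file is taken from the question):

#!/bin/bash
exec 2>>/some_log_file   # stderr, including the trace, goes to the log
set -x                   # print each command before executing it

echo "doing work"        # the log records: + echo 'doing work'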

aaa
@aaa: Yep, I know about "set -x" -- that's one of the things I occasionally make use of in the "# rest of script follows..." section above. I should have mentioned it; thanks for doing so.
Kevin Little
+5  A: 
#!/bin/bash
# From here on, everything this script writes to stdout or stderr goes to log_file.
exec >>log_file 2>&1

echo Hello world
date

exec has a magic behavior regarding redirections: “If command is not specified, any redirections take effect in the current shell, and the return status is 0. If there is a redirection error, the return status is 1.”
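
If console output is also wanted (the "add tee" case the question mentions), the same exec trick combines with process substitution. A sketch, assuming the same log file:

#!/bin/bash
# Send stdout and stderr to both the console and the log,
# using process substitution (bash-specific).
exec > >(tee -a log_file) 2>&1

echo Hello world
date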

Also, regarding your original solution, exec "$0" is better than "$0"; exit $?, because the former doesn't leave an extra shell process around until the subprocess exits.
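
Applied to the relaunch from the question, that looks something like this (a sketch; "${!#}" is used here for the last positional parameter, since modern bash sets BASH_ARGV only in extended debugging mode):

#!/bin/bash
# If the last argument is not '_worker_', replace this shell with a
# relaunched copy whose output is appended to the log file.
if [[ "${!#}" != "_worker_" ]]; then
    exec "$0" "$@" _worker_ >>/some_log_file 2>&1
fi
# rest of script follows...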

Kevin Reid
Most excellent! I just knew there had to be a more elegant way. Thanks, Kevin; this will go into immediate use...
Kevin Little