I've created a small Bash script that does a MySQL data dump. Because the dump can be fairly large, I put the process in the background and then wait for either an error or the log to show up in the file system. I have the following code:

mysqldump main_db > /loc/to/dmp/file.sql 2>/loc/to/error/log/file.log &

The problem is that sometimes when this command is run, I get a '/loc/to/error/log/file.log' file with a size of 0 (which I presume means no real error), and that kills the process even though there is no error.

I'm not sure why STDERR would create a file when there was no data to write. Is this because of the & background process?

+4  A: 

The redirected files are set up by the invoking shell before your command is executed.

That is, after parsing your command and seeing the stdout/stderr redirections, the shell forks, opens the target files (creating them if they don't exist), attaches the opened file descriptors to file descriptors 1 and 2 (stdout and stderr respectively), and only then executes the actual command.
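A quick way to see this behavior (the /tmp path here is just a throwaway example):

true 2>/tmp/redir-demo.log     # `true` writes nothing to stderr
ls -l /tmp/redir-demo.log      # the file exists anyway, with a size of 0

The shell creates /tmp/redir-demo.log while setting up the redirection, before `true` ever runs, so the empty file appears even though no error output was produced.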

nos
A: 

The redirection file is created whether or not any data is ever written to it. Whichever process is watching the error log should check for a non-zero file size, not mere existence.
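A minimal sketch of that approach, assuming the watcher is a loop in the same script (the paths come from the question; the loop structure and the one-second poll interval are just illustrative):

mysqldump main_db > /loc/to/dmp/file.sql 2>/loc/to/error/log/file.log &
dump_pid=$!

while kill -0 "$dump_pid" 2>/dev/null; do        # loop while the dump is still running
    if [ -s /loc/to/error/log/file.log ]; then   # -s: file exists AND has size > 0
        kill "$dump_pid"                          # react only to a non-empty error log
        break
    fi
    sleep 1
done

The [ -s FILE ] test is what distinguishes "the shell created the log" from "mysqldump actually wrote an error to it".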

JSBangs
Okay, I didn't know how STDERR worked, and I thought checking for a file size of 0 would be a band-aid for a deeper problem.
null
The issue isn't "how STDERR works". It's how shell redirections work.
Laurence Gonsalves