I have a bash script that runs on our shared web host. It does a dump of our mysql database and zips up the output file. Sometimes the mysqldump process gets killed, which leaves an incomplete sql file that still gets zipped. How do I get my script to 'notice' the killing and then delete the output file if the killing occurred?


Edit: here's the line from my script

nice -19 mysqldump -uuser -ppassword -h database.hostname.com --skip-opt --all --complete-insert --add-drop-table database_name > ~/file/system/path/filename.sql

And here's what I get on occasion from my buddy Cron:

/home/user/backup_script.bash: line 17: 12611 Killed                       nice -19 mysqldump -uuser -ppassword -h database.hostname.com --skip-opt --all --complete-insert --add-drop-table database_name > ~/file/system/path/filename.sql

So when this happens, I want to just delete filename.sql, because it will contain some number of inserts, but not all of them. I know there is some way in bash to capture the exit status of a command, success or failure, and then do something if it failed.

A: 

You could use ps or pgrep to see if the process is still running based on its name. Or you could use lsof on the SQL file to see if a process is accessing the file. However, if the process completes normally, that "open" connection will no longer be there.
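As a sketch of the process-checking approach (the process name is the real one from the question, but whether it is running depends on your system):

```shell
# Check whether a mysqldump process is currently running.
# pgrep exits 0 if a matching process exists, non-zero otherwise.
if pgrep -x mysqldump >/dev/null; then
  echo "mysqldump is still running"
else
  echo "no mysqldump process found"
fi
```

As noted above, this only tells you whether the process is alive right now; once it exits, normally or killed, the check looks the same, which is why the exit-status approach below the fold is more direct.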

Dennis Williamson
I don't want to do this manually; I want the script to do this.
@user151841: There's no reason at all that a script can't do what I described.
Dennis Williamson
Oh? Do tell! :)
+2  A: 

If mysqldump gets killed it will exit with a non-zero status (for a process killed by SIGKILL, the shell reports 128 + 9 = 137):

if ! mysqldump ...; then
  rm ...
fi
Jürgen Hötzel
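Applied to the command from the question, a sketch might look like the following (the credentials, hostname, and path are the placeholders from the original post, and the `zip` step is an assumption about what the rest of the script does):

```shell
#!/bin/bash
# Sketch: run the dump; only zip the output if mysqldump succeeded,
# otherwise delete the partial SQL file.
OUT=~/file/system/path/filename.sql

if nice -n 19 mysqldump -uuser -ppassword -h database.hostname.com \
     --skip-opt --all --complete-insert --add-drop-table database_name > "$OUT"
then
  zip "${OUT}.zip" "$OUT"
else
  echo "mysqldump failed (exit $?); removing partial dump" >&2
  rm -f "$OUT"
fi
```

Because a killed mysqldump exits non-zero, the `else` branch fires and the incomplete file never reaches the zip step.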