
Hello,

I'm trying to run some commands in parallel, in the background, using bash. Here's what I'm trying to do:

for (...) {
    # this part is actually written in Perl
    # call the command sequence
    print `touch .file1.lock; cp bigfile1 /destination; rm .file1.lock;`;
}

The part between backticks (``) spawns a new shell and executes the commands in succession. The thing is, control returns to the original program only after the last command has been executed. I would like to execute the whole statement in the background (I'm not expecting any output/return values) and I would like the loop to continue running.

The calling program (the one that has the loop) would not end until all the spawned shells finish.

I could use threads in Perl to spawn different threads that call different shells, but it seems like overkill...

Can I start a shell, give it a set of commands and tell it to go to the background?

Thank you for your help.

+6  A: 
for command in $commands
do
    $command &
done
wait

The ampersand at the end of the command runs it in the background, and the wait waits until all the background tasks are completed.
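As a quick, hypothetical illustration (the sleep commands are just placeholders, not from the question), this should return after about 3 seconds instead of the 6 a sequential run would take:

for command in "sleep 1" "sleep 2" "sleep 3"
do
    $command &    # unquoted on purpose: "sleep 3" splits into the command and its argument
done
wait              # blocks until all three background jobs have finished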

GavinCattell
Not OK, because each command would run in the background, not the command sequence.
Mad_Ady
@Gavin - you need **$command** (note the "$").
NVRAM
A: 

Try to put commands in curly braces with &s, like this:

{ command1 & command2 & command3 & }

This does not create a sub-shell, but executes the group of commands in the background.
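For instance (untested; the sleep commands are placeholders), note that each member of the group gets backgrounded individually:

{ sleep 2 & sleep 2 & sleep 2 & }
wait    # all three run at once, so this returns after about 2 seconds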

HTH

Zsolt Botykai
But then they'll execute in parallel, which isn't what the OP wants!
Hugh Allen
Yup, that's true...
Mad_Ady
Yields even an error with my bash (4.0.33(1)-release / Ubuntu Karmic): "{ echo 1 & ; echo 2 & ; }" gives "bash: syntax error near unexpected token `;'".
blueyed
+1  A: 

I haven't tested this but how about

print `(touch .file1.lock; cp bigfile1 /destination; rm .file1.lock;) &`;

The parentheses mean execute in a subshell but that shouldn't hurt.

Hugh Allen
You don't want the print or the backticks - even if these commands did give useful output, running them in the background would mean that the backticks don't see it.
Mark Baker
(I don't like backticks anyway, I find $() much easier to read, but that's not relevant here)
Mark Baker
A: 

Thanks Hugh, that did it:

adrianp@frost:~$ (echo "started"; sleep 15; echo "stopped")
started
stopped
adrianp@frost:~$ (echo "started"; sleep 15; echo "stopped") &
started
[1] 7101
adrianp@frost:~$ stopped

[1]+  Done                    ( echo "started"; sleep 15; echo "stopped" )
adrianp@frost:~$

The other ideas don't work because they start each command in the background, and not the command sequence (which is important in my case!).

Thank you again!

Mad_Ady
+2  A: 

I don't know why nobody replied with the proper solution:

my @children;
for (...) {
    ...
    my $child = fork;
    die "fork failed: $!" unless defined $child;  # fork returns undef on failure
    exec "touch .file1.lock; cp bigfile1 /destination; rm .file1.lock;" if $child == 0;
    push @children, $child;
}
# and if you want to wait for them to finish,
waitpid($_, 0) for @children;

This causes Perl to spawn children to run each command, and allows you to wait for all the children to complete before proceeding.

By the way,

print `some command`

and

system "some command"

output the same contents to stdout, but the first has a higher overhead, as Perl has to capture all of "some command"'s output.

ephemient
A: 

GavinCattell got the closest (for bash, IMO), but as Mad_Ady pointed out, it would not handle the "lock" files. This should:

If there are other jobs pending, a bare wait would wait for those, too. If you need to wait for only the copies, you can accumulate their PIDs and wait for only those, as the script below does. If not, you could delete the 3 lines involving "pids", but waiting on specific PIDs is more general.

In addition, I added checking to avoid the copy altogether:

pids=
for file in bigfile*
do
    # Skip if the destination copy is already newer than the source...
    targ=/destination/$(basename "${file}")
    [ "$targ" -nt "$file" ] && continue

    # Use a lock file:  ".fileN.lock" for each "bigfileN"
    lock=".${file##*big}.lock"
    ( touch "$lock"; cp "$file" "$targ"; rm "$lock" ) &
    pids="$pids $!"
done
wait $pids

Incidentally, it looks like you're copying new files to an FTP repository (or similar). If so, you could consider a copy/rename strategy instead of the lock files (but that's another topic).
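For the curious, a rough sketch of that idea (untested; the names are hypothetical): copy to a temporary name on the destination filesystem, then rename into place. Since mv is atomic within a single filesystem, anything polling /destination never sees a half-written file:

file=bigfile1                            # hypothetical source file
targ="/destination/$(basename "$file")"
tmp="$targ.part"                         # temp name on the same filesystem as the target
cp "$file" "$tmp" && mv "$tmp" "$targ"   # the rename is atomic, so no lock file is needed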

NVRAM