834 views · 1 answer
To maximize CPU usage (I run things on Debian Lenny on EC2) I use a simple script to launch jobs in parallel:

#!/bin/bash

for i in apache-200901*.log; do echo "Processing $i ..."; do_something_important "$i"; done &
for i in apache-200902*.log; do echo "Processing $i ..."; do_something_important "$i"; done &
for i in apache-200903*.log; do echo "Processing $i ..."; do_something_important "$i"; done &
for i in apache-200904*.log; do echo "Processing $i ..."; do_something_important "$i"; done &
...

I'm quite satisfied with this working solution; however, I couldn't figure out how to write further code that executes only after all of the loops have completed.

Is there a way to get control of this?

+8  A: 

There's a Bash builtin command for exactly that:

   wait [n ...]
          Wait for each specified process and return its termination
          status.  Each n may be a process ID or a job specification;
          if a job spec is given, all processes in that job's pipeline
          are waited for.  If n is not given, all currently active
          child processes are waited for, and the return status is
          zero.  If n specifies a non-existent process or job, the
          return status is 127.  Otherwise, the return status is the
          exit status of the last process or job waited for.
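A minimal, self-contained sketch of the pattern. Here `do_something_important` and the file names are placeholders standing in for your real per-file work:

```shell
#!/bin/bash

# Hypothetical stand-in for the real processing step; assumes the
# actual command takes the file name as an argument.
do_something_important() { sleep 0.2; }

# Each loop runs in its own background job, as in the question.
for i in apache-200901-a.log apache-200901-b.log; do do_something_important "$i"; done &
for i in apache-200902-a.log apache-200902-b.log; do do_something_important "$i"; done &

wait    # blocks until every background job of this shell has exited
echo "All loops done."
```

With no arguments, `wait` blocks until all child jobs of the current shell have finished, so anything placed after it runs only once every loop is complete.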
eduffy
That was quick and solved my problem, well earned, thanks!
mark