views:

317

answers:

7
A: 

I get all four lines in the log file when I run it. What happens if you change your shebang to #!/bin/bash?

Dennis Williamson
Thanks, but that makes no difference. By the way, my bash has the following version information: GNU bash, version 3.00.15(1)-release (x86_64-redhat-linux-gnu), Copyright (C) 2004 Free Software Foundation, Inc.
Utoah
@Utoah: I tried it in 3.2 and 4.0 and it worked in both.
Dennis Williamson
A: 

It could be a concurrency issue, with both subshells trying to read from the same fifo at the same time. Does it happen all the time?

You could try adding a flock -x 6 statement or change the delay for the two subshells and see what happens.
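A hedged sketch of how flock can serialize two subshells, in case the concurrency theory is right. The fd number (9) and the temporary lock file are illustrative; in the original script the lock would instead guard the reads on fd 6:

```shell
#!/usr/bin/env bash
# Illustrative only: two subshells contend for one exclusive lock,
# so their critical sections never overlap.
lockfile=$(mktemp)

for i in 1 2; do
    (
        # Take an exclusive lock on fd 9 before doing anything shared.
        flock -x 9
        echo "subshell $i has the lock"
        sleep 0.2   # stand-in for the real work
    ) 9> "$lockfile" &
done

wait
rm -f "$lockfile"
```

Whichever subshell wins the race runs first; the other blocks inside flock until the fd is released.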

BTW, I can confirm that with bash 3.2 and kernel 2.6.28 your code works fine.

Dan Andreatta
A: 
Utoah
+1  A: 

Is it possible that your writes to the fifo are being buffered? If you have unbuffer available, could you try prefixing the echo commands with it? I don't really see how that could happen here, but the symptoms fit, so it's worth a shot.

frankc
I'm guessing this is the problem.
Charles Stewart
A: 

Keep in mind that a FIFO on POSIX systems is essentially a named pipe. To move data through a pipe, it needs a reader on one end and a writer on the other; when either end is closed, the other end loses its usefulness.

In other words, you cannot cat a FIFO after another reader has exited, because the contents of the FIFO will already be gone.
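A minimal demonstration of that point, assuming Linux and GNU coreutils (for timeout, which guards the second cat from blocking forever):

```shell
#!/usr/bin/env bash
# Demo: once the first reader has drained a FIFO and the writer has
# closed its end, a later cat finds nothing to read.
fifo=$(mktemp -u)
mkfifo "$fifo"

# Background writer; its open() blocks until a reader appears.
printf 'hello\nworld\n' > "$fifo" &

# First reader consumes everything, then the writer's close gives it EOF.
first=$(cat "$fifo")
wait

# A second cat would block in open() waiting for a new writer,
# so bound it with a timeout; it reads nothing.
second=$(timeout 1 cat "$fifo")

echo "first=[$first]"
echo "second=[$second]"
rm -f "$fifo"
```

The first capture holds both lines; the second is empty, because the data did not persist in the FIFO.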

You may want to use a normal file instead (with file locking to synchronize access to it), a directory containing multiple files, or even shared memory or something similar (though probably not from a shell script). Which of these is the best way to go really depends on what your end goal is.

Michael Trausch
Sounds reasonable, but it cannot explain the symptom in my original post, because when a subshell is forked, it gets a copy of the FIFO file descriptors for both reading and writing. So as long as any subshell is running, the FIFO has both a reader and a writer.
Utoah
I have another question about when data in a FIFO is discarded by the kernel. Is it when the last reader of a FIFO closes its reading end that the kernel discards all remaining data in it? Thanks.
Utoah
IIRC, the first time it is closed, that's it. POSIX leaves the behavior of opening a FIFO for both reading and writing undefined (see http://linux.die.net/man/7/fifo), and that page says to use care when a FIFO is opened for both read and write under Linux, since deadlock can otherwise occur. If I were doing what you're doing with a FIFO, I would use a single process for writing, a single process for reading, and another mechanism for the processes to send/receive data between themselves. I see nothing permitting multiple processes to safely share a FIFO (or a normal pipe, for that matter).
Michael Trausch
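The single-writer/single-reader pattern suggested in the comment above could be sketched like this (the job names are made up for illustration):

```shell
#!/usr/bin/env bash
# Sketch: exactly one writer process and exactly one reader process
# share the FIFO, so its semantics stay well-defined.
fifo=$(mktemp -u)
mkfifo "$fifo"

# The single writer, in the background.
{
    for i in 1 2 3; do
        echo "job $i"
    done
} > "$fifo" &

# The single reader drains the FIFO until the writer closes its end.
while read -r line; do
    echo "got: $line"
done < "$fifo"

wait
rm -f "$fifo"
```

Results could then be passed back through a second FIFO or a locked regular file, keeping each channel one-writer/one-reader.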
You may want to also check out GNU shmm, which claims to have the ability to use shared memory for shell scripts. Or you could check out POSIX Message Queues, see "man mq_overview" if you have a full suite of manpages installed.
Michael Trausch
+1  A: 
Utoah
A: 

For reasons explained in other answers here, you do not want a pipe unless you can read from and write to it at the same time.

It is therefore advisable to use another means of IPC, or to restructure your usage of FIFOs so that an asynchronous process fills the pipe while the main process creates worker processes (or the other way around).

Here's a method of getting what you want using a simple file as a sort of queue:

#!/usr/bin/env bash

stack=/tmp/stack
> "$stack"

# Create an initial 5 spots on the stack
for i in {1..5}; do
    echo >> "$stack"
done

for i in {1..10}; do
    # Wait for a spot on the stack.
    until read; do sleep 1; done

    {
        echo "Starting process #$i"
        sleep $((5 + $i)) # Do something productive
        echo "Ending process #$i"

        # We're done, free our spot on the stack.
        echo >> "$stack"
    } &
done < "$stack"

Sidenote: This method isn't ideal for unlimited amounts of work, since it appends a byte to the stack file for each process it invokes, meaning the stack file slowly grows over time.

lhunath