I commonly build up long, multi-command pipes on Linux/Unix to process large text files (sed | grep | sort | less, etc.).

I would like to be able to use a pipeline element that would buffer everything received via stdin until a key phrase/string is detected (e.g. "SUCCESS"), at which point it releases everything received up to that point to stdout and then continues to pass the rest of the stream through. If the key phrase is not detected, the program would discard all the contents.
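For example, given hypothetical input like

    build step 1 ok
    build step 2 ok
    SUCCESS
    cleaning up

the filter should emit all four lines (the first three only once SUCCESS is seen), and if SUCCESS never appears it should emit nothing at all.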

Is there a standard command that can do this, or do I need to write a Perl script?

Thanks in advance for any ideas here!

Wodow, lover of pipes

+2  A: 

You could use a simple awk/gawk one-liner to do this:

EDIT: Updated to fix the bug that dmckee pointed out (and fixed) in his comment

gawk '{sum = sum "\n" $0} ; /success/ {print sum}'

Jackson
Cute.
dmckee
This will not pass through the lines following "success".
mark4o
It could easily be modified to do so.
Omnifarious
Like: `gawk '/SUCCESS/{next} {sum = sum "\n" $0} END{print sum "\n"}'` That one assumes that the SUCCESS key can occur anywhere in a line. Also, there is a bug fix (you need $0 not $1).
dmckee
The one in the answer fails to pass the rest of the stream through. The one in the comments releases everything at end of file instead of when SUCCESS is encountered.
JB
@JB: so it does. `/SUCCESS/{found=1;next}...END{if (found) { print sum "\n"}}` or some such.
dmckee
Thanks for all the comments and ideas! Great stuff. I think JB is right in his assessments above.
womow
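Putting the comments together, a variant that buffers, releases the buffer when SUCCESS is seen, streams the rest through, and prints nothing if SUCCESS never appears might look like this (a sketch, assuming the key is the literal string SUCCESS anywhere in a line):

    gawk '!found { buf = buf $0 ORS; if (/SUCCESS/) { found = 1; printf "%s", buf }; next } { print }'

The `found` flag stays unset until the key is seen, so a stream that never contains it produces no output.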
A: 

Probably the simplest solution is to use sed:

    sed '/SUCCESS/,$!{H;d;};$H;x'
mark4o
This works perfectly on a line-by-line basis (tested directly from the command line).
womow
Thanks for this one!
womow
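For anyone decoding that one-liner, here is the same script spread over several lines with comments (GNU sed; the behaviour should be identical, though note that because the hold space starts out empty, the released block may begin with one extra blank line):

    sed '
    # before the first SUCCESS: append the line to the hold space and print nothing
    /SUCCESS/,$!{H;d;}
    # on the last input line, append it to the hold space as well
    $H
    # from SUCCESS onward: swap hold and pattern space; auto-printing releases the buffer,
    # and each later line then comes out one cycle behind
    x
    '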
A: 

A quick and dirty way of doing it goes like this:

perl -pe'$b.=$_;/SUCCESS/&&last}print$b;while(<>){'

But if you do this often, it deserves a script of its own.

JB
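And if it does get used often enough to deserve a script of its own, a standalone version might look something like the following sketch (the name releaseon and the argument handling are illustrative, not a standard tool):

    #!/usr/bin/perl
    # releaseon -- buffer stdin until a key phrase appears, then flush the buffer
    # and pass the rest of the stream through; print nothing if it never appears.
    # Hypothetical usage: some_long_job | releaseon SUCCESS | less
    use strict;
    use warnings;

    my $key = shift @ARGV or die "usage: $0 KEY_PHRASE\n";
    my @buffer;

    while (<STDIN>) {
        push @buffer, $_;
        if (index($_, $key) >= 0) {
            print @buffer;          # release everything seen so far, including this line
            @buffer = ();
            print while <STDIN>;    # pass the remainder straight through
            exit 0;
        }
    }
    exit 1;                         # key phrase never seen: discard the buffered input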