You don't need to use awk if all you want to do is this. :) Also, writing to a file while you're reading from it, in the way that you did, will lead to data loss or corruption, so don't do it.
for file in *.php ; do
    # or, to do this to all php files recursively:
    # find . -name '*.php' | while IFS= read -r file ; do
    # make a backup copy; do not overwrite the backup if one already exists
    test -f "$file.orig" || cp -p "$file" "$file.orig"
    # awk '{... print > NEWFILE}' NEWFILE="$file" "$file.orig"
    sed -e "s:include('\./:include(':g" "$file.orig" >"$file"
done
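If your sed supports in-place editing, it will manage the temporary file and the backup for you; a sketch, assuming GNU sed (BSD sed wants the suffix as a separate argument, sed -i '.orig'):

for file in *.php ; do
    # -i.orig edits in place, keeping a backup with the .orig suffix
    sed -i.orig -e "s:include('\./:include(':g" "$file"
done

Note that, unlike the test -f guard above, rerunning this clobbers the .orig backups with already-edited files.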
Just to clarify the data loss aspect: when awk (or sed) starts processing a file and you ask it to read the first line, it actually performs a buffered read; that is, it reads from the filesystem (let's simplify and say "from disk") a block of data as large as its internal read buffer (e.g. 4-65KB) in order to get better performance (by reducing disk I/O). Assume that the file you're working with is larger than the buffer size. Further reads will continue to come from the buffer until the buffer is exhausted, at which point a second block of data is loaded from disk into the buffer, and so on.
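You can actually watch this happen; a sketch, assuming Linux with strace installed (the descriptor number and block size will vary with your awk implementation):

seq 1 100000 > big.txt
# awk exits after the first record, yet the trace shows it
# already pulled a full buffer-sized block from disk:
strace -e trace=read awk 'NR == 1 { exit }' big.txt 2>&1 | tail
# typical line:  read(3, "1\n2\n3\n4\n5\n6\n"..., 65536) = 65536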
However, just after you read the first line, i.e. after the first block of data is read from disk into the buffer, your awk script opens FILENAME, the input file itself, for writing with truncation, i.e. the file's size on disk is reset to 0. At this point all that remains of your original file is the first few kilobytes of data in awk's memory. Awk will merrily continue to read line after line from the in-memory buffer and produce output until the buffer is exhausted, at which point awk will probably stop and leave you with a 4-65k file.
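This is easy to reproduce on throwaway data; a sketch (the number of surviving lines depends on your awk's buffer size):

seq 1 1000000 > scratch.txt         # roughly 6.9MB of input
# the same mistake: truncate and rewrite the file being read
awk '{ sub(/foo/, ""); print > FILENAME }' scratch.txt
wc -l scratch.txt                   # only the buffered head survives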
As a side note, if you are actually using awk to expand the data (e.g. print "PREFIX: " $0) rather than shrink it (gsub(/.../, "")), then you'll almost certainly end up with a non-responsive awk and a perpetually growing file. :)
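You can watch the runaway case too; a sketch, assuming GNU coreutils' timeout(1) so the experiment kills itself:

seq 1 100000 > scratch.txt
# every output line is 8 bytes longer than its input line, so once the
# read buffer is exhausted awk starts re-reading (and re-prefixing) its
# own output; timeout stops the snowball after 2 seconds
timeout 2 awk '{ print "PREFIX: " $0 > FILENAME }' scratch.txt
ls -l scratch.txt   # already far larger than the original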