I need to regularly send a collection of log files that can grow quite large, so I would like to send only the last n lines of each of the files.

For example:

/usr/local/data_store1/file.txt (500 lines)
/usr/local/data_store2/file.txt (800 lines)

Given a file with a list of needed files named files.txt, I would like to create an archive (tar or zip) with the last 100 lines of each of those files.

I can do this by creating a separate directory structure with the tail-ed files, but that seems like a waste of resources when there's probably some piping magic that can happen to accomplish it. Full directory structure also must be preserved since files can have the same names in different directories.

I would like the solution to be a shell script if possible, but Perl (without added modules) is also acceptable (this is for Solaris machines that don't have Ruby/Python/etc. installed on them).

+1  A: 

You could try

tail -n 10 your_file.txt | while read line; do zip /tmp/a.zip "$line"; done

where a.zip is the zip file and 10 is n, or

tail -n 10 your_file.txt | xargs tar -czvf test.tar.gz --

for tar.gz

Johannes Weiß
A: 

Why not put your log files in SCM?

The receiver creates a repository on their machine, from which they retrieve the files by checking them out.

You send the files just by committing them; only the diff is transmitted.

mouviciel
This is extreme overkill.
hendry
Yes. And it is extremely easy to implement and use, given that cvs or svn is already installed, of course.
mouviciel
It's definitely an interesting solution, but not really applicable in this case. I need to tar up logs to send to an external support organization on an infrequent basis. I also don't want to send gigantic logs, just the last few lines when an error occurs.
Dan McNevin
A: 

There is no piping magic for that; you will have to create the folder structure you want and zip that.

mkdir -p tmp
for i in /usr/local/*/file.txt; do
    mkdir -p "tmp/$(dirname "${i#/}")"   # ${i#/} strips the leading slash (POSIX, unlike bash-only ${i:1})
    tail -n 100 "$i" > "tmp/${i#/}"
done
zip -r zipfile tmp
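The same approach can read the list from files.txt, as the question describes. A sketch (the demo setup at the top is a hypothetical stand-in for the real logs; in practice files.txt would already exist):

```shell
# Demo setup: hypothetical paths standing in for the real log files.
mkdir -p demo/usr/local/data_store1 demo/usr/local/data_store2
seq 1 500 > demo/usr/local/data_store1/file.txt
seq 1 800 > demo/usr/local/data_store2/file.txt
printf '%s\n' demo/usr/local/data_store1/file.txt \
              demo/usr/local/data_store2/file.txt > files.txt

# Recreate each listed file's directory under tmp/, keep only its
# last 100 lines, then archive the whole tree in one go.
mkdir -p tmp
while IFS= read -r f; do
    mkdir -p "tmp/$(dirname "$f")"
    tail -n 100 "$f" > "tmp/$f"
done < files.txt
tar -cf logs.tar tmp
```

`zip -r logs.zip tmp` works the same way if zip is preferred. Either way the full directory structure is preserved under tmp/, so identically named files in different directories cannot collide.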
soulmerge
A: 

Use logrotate.

Have a look inside /etc/logrotate.d for examples.
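A minimal stanza for one of the files in question (hypothetical path, and assuming logrotate is available on the machine) might look like:

```
/usr/local/data_store1/file.txt {
    weekly
    rotate 4
    compress
    missingok
}
```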

hendry
+1  A: 

You are focusing on a specific implementation instead of looking at the bigger picture.

If the final goal is to have an exact copy of the files on the target machine while minimizing the amount of data transferred, what you should use is rsync, which automatically sends only the parts of the files that have changed, and can also compress while sending and decompress while receiving.

Running rsync doesn't need any daemon on the target machine beyond the standard sshd, and to set up automatic transfers without passwords you just need to use public-key authentication.
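A minimal sketch of such a transfer (the source directory and destination host here are hypothetical placeholders):

```shell
# Hypothetical source directory and destination -- adjust to your environment.
SRC="/usr/local/data_store1/"
DEST="support@remote.example.com:/incoming/logs/"

# -a preserves the directory tree and file attributes, -z compresses
# in transit, and ssh is the transport, so only sshd is needed remotely.
CMD="rsync -az -e ssh $SRC $DEST"
echo "$CMD"
```

With public-key authentication in place, that one line can run unattended from cron.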

winden