tags:
views: 350
answers: 4
What Linux commands would you use, for a bunch of files, to count the number of lines in each file and write the results to an output file, with part of the corresponding input file name as part of each output line? For example, if we were looking at the file LOG_Yellow and it had 28 lines, then the output file would have a line like this (Yellow and 28 are tab-separated):

Yellow    28
+2  A: 
wc -l * | head --lines=-1 > output.txt

produces output like this:

linecount1 filename1
linecount2 filename2

I think you should be able to work from here to extend to your needs.

edit: since I haven't seen the rules for your name extraction, I've left the full name. However, unlike other answers I'd prefer to use head rather than grep, which not only should be slightly faster, but also avoids accidentally filtering out files named total*.

edit2 (having read the comments): the following does the whole lot:

wc -l * | head --lines=-1 | sed s/LOG_// | awk '{print $2 "\t" $1}' > output.txt
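A quick way to check the pipeline (a sketch only; the LOG_ file names are taken from the question's example, and `head --lines=-1` requires GNU coreutils):

```shell
# Create two sample log files in a scratch directory (names assumed from the question).
dir=$(mktemp -d)
cd "$dir"
seq 28 > LOG_Yellow    # 28 lines
seq 3  > LOG_Green     # 3 lines

# wc -l lists a count per file plus a trailing "total" line; head --lines=-1
# (GNU) drops that last line; sed strips the LOG_ prefix; awk swaps the count
# and the name, separated by a tab.
wc -l LOG_* | head --lines=-1 | sed s/LOG_// | awk '{print $2 "\t" $1}' > output.txt
cat output.txt
```

With the sample files above, output.txt ends up with one tab-separated line per file: the stripped name, then its line count.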
SilentGhost
+2  A: 

wc -l * | grep -v " total"

gives you output like this:

28 Yellow

You can reverse the order if you want (with awk, provided you don't have spaces in the file names):

wc -l * | egrep -v " total$" | sed s/[prefix]// | awk '{print $2 " " $1}'

Dom
Add Sbodd's call to `sed` and this is the answer.
Welbog
+4  A: 
wc -l [filenames] | grep -v " total$" | sed s/[prefix]//

The wc -l generates the output in almost the right format; grep -v removes the "total" line that wc generates for you; sed strips the junk you don't want from the filenames.
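The stages can be seen end to end in a small sketch (LOG_ is substituted here for the [prefix] placeholder, as in the question's example; seq and the wc/grep/sed behavior shown assume GNU coreutils):

```shell
# Sample files in a scratch directory (names are assumptions from the question).
dir=$(mktemp -d)
cd "$dir"
seq 28 > LOG_Yellow
seq 5  > LOG_Blue

# grep -v drops wc's trailing "total" line; sed removes the assumed LOG_ prefix.
out=$(wc -l LOG_* | grep -v " total$" | sed s/LOG_//)
printf '%s\n' "$out"
```

Note the output is still "count name"; the awk step from the other answers is what swaps the two columns into the requested order.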

Sbodd
+1  A: 

Short of writing the script for you:

  • 'for' for looping through your files.
  • 'echo -n' for printing the current file name
  • 'wc -l' for finding out the line count
  • And don't forget to redirect ('>' or '>>') your results to your output file
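Those bullets sketch out roughly this loop (an illustration only; the LOG_ prefix and the output.txt name are assumptions carried over from the question, and printf '%s\t' plays the role of echo -n while also adding the tab):

```shell
# Sample files in a scratch directory (names are assumptions from the question).
dir=$(mktemp -d)
cd "$dir"
seq 28 > LOG_Yellow
seq 7  > LOG_Red

: > output.txt                # start with an empty output file
for f in LOG_*; do            # loop through the files
    printf '%s\t' "${f#LOG_}" >> output.txt   # file name, prefix stripped, then a tab
    wc -l < "$f" >> output.txt                # reading from stdin makes wc omit the name
done
cat output.txt
```

Redirecting wc's stdin ( `wc -l < "$f"` ) avoids having to strip the file name back out of wc's output.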
KFro