views: 81
answers: 4

Hello, I have many large (~30 MB apiece) tab-delimited text files with variable-width lines. I want to extract the 2nd field from the nth line (here, n=4) and from the next-to-last line (the last line is empty). I can get them separately using awk:

awk 'NR==4{print $2}' filename.dat

and (I don't entirely understand this one, but)

awk '{y=x "\n" $2};END{print y}' filename.dat

but is there a way to get them both in one call? My broader intention is to wrap this in a Python script to harvest these values from a large number of files (many thousands) in separate directories, and I want to reduce the number of system calls. Thanks a bunch -

Edit: I know I can read through the whole file with Python to extract those values, but I thought awk might be more appropriate for the task, since one of the two values is located near the end of each large file.

+3  A: 
awk 'NR==4{print $2};{y=x "\n" $2};END{print y}' filename.dat
Ignacio Vazquez-Abrams
Oops! That was almost too easy. Thanks.
Stephen
Guess I don't need that "\n" in there either.
Stephen
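
The question mentions wrapping this in a Python script over many directories; here is a minimal sketch of that wrapper with one awk call per file, assuming os.walk and subprocess are acceptable and that the data files can be picked out by a (hypothetical) .dat extension:

import os
import subprocess

# combined one-liner from the answer above (without the unneeded "\n"):
# print field 2 of line 4, then field 2 of the last record awk sees
AWK_PROG = 'NR==4{print $2};{y=$2};END{print y}'

def harvest(root):
    results = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith('.dat'):  # hypothetical extension filter
                path = os.path.join(dirpath, name)
                out = subprocess.check_output(['awk', AWK_PROG, path])
                results[path] = out.decode().splitlines()
    return results
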
+1  A: 

Here's how to implement this in Python without reading the whole file:

To get the nth line, you have no choice but to read the file up to the nth line as the lines are variable width.

To get the second to last line, guess how long the line might be (be generous) and seek to that many bytes before the end of the file.

read() from the point you seeked to and count the newline characters; you need at least two. If there are fewer than two newlines, double your guess and try again.

Split the data you read at newlines; the line you want will be the second-to-last item in the split (see the sketch below).
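
A minimal sketch of those tail-reading steps, assuming the file ends with a newline (so the wanted line is the second-to-last item after splitting); the helper name and starting block size are just illustrative:

import os

def second_to_last_line(filename, guess=4096):
    size = os.path.getsize(filename)
    with open(filename, 'rb') as f:
        while True:
            offset = min(guess, size)
            # seek to `offset` bytes before the end and read from there
            f.seek(-offset, os.SEEK_END)
            tail = f.read(offset)
            # need at least two newlines to be sure the line is complete
            if tail.count(b'\n') >= 2 or offset == size:
                break
            guess *= 2  # not enough newlines yet: double the guess and retry
    return tail.split(b'\n')[-2].decode()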

gnibbler
Thanks! I saw something similar implemented here: http://code.activestate.com/recipes/120686-read-a-text-file-backwards/ - in that recipe the assumed block size is 4096 bytes. I thought about doing something like that... but my awk line is running now on a bunch of files. :)
Stephen
@Stephen, right, but you are still starting up a shell each time to run awk
gnibbler
Thanks - I just profiled a pure Python solution and it won out against the single awk call. I've switched over to it.
Stephen
+1  A: 

You can pass the number of lines into awk:

awk -v lines=$( wc -l < filename.dat ) -v n=4 '
    NR == n || NR == lines-1 {print $2}
' filename.dat

Note: in the wc command, the < redirection is used so that wc prints only the line count, not the filename.

glenn jackman
Thank you - this syntax is much more agreeable.
Stephen
A: 

This is my solution in Python, inspired by the other code linked above:

def readfields(filename, nfromtop=3, nfrombottom=-2, fieldnum=1, blocksize=4096):
    out = ''
    # field from the nth line (0-based index nfromtop)
    with open(filename, 'r') as f:
        for i, line in enumerate(f):
            if i == nfromtop:
                out += line.split('\t')[fieldnum] + '\t'
                break
    # field from the next-to-last line: read the last blocksize bytes
    # (binary mode, since a text-mode file can't seek relative to the end)
    with open(filename, 'rb') as f:
        f.seek(-blocksize, 2)
        tail = f.read(blocksize).decode()
    out += tail.split('\n')[nfrombottom].split('\t')[fieldnum]
    return out
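
For example, a single call would look like this (the filename is just illustrative):

print(readfields('filename.dat'))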

When I profiled it, it came out about 0.09 seconds quicker than a solution calling awk (awk 'NR==4{print $2};{y=x $2};END{print y}' filename.dat) via the subprocess module. Not a dealbreaker, but when the rest of the script is in Python there appears to be a payoff in staying there (especially since I have a lot of these files).

Stephen
Thanks to gnibbler for suggesting it.
Stephen
You should just profile calling awk from the shell, not through the subprocess module; you can do everything with shell scripting. But if your intention is to do pure Python, then so be it.
ghostdog74
I guess I was set on using `os.path.walk()` in Python, though I'm sure a `find` + `awk` solution might have also been sufficient in this case.
Stephen