
Hi,

I'm looking for a ruby parser for the W3C Extended Log File Format.

http://www.w3.org/TR/WD-logfile.html

Ideally it would generate a multidimensional array based on the fields in the log file. I'm thinking something similar to how FasterCSV (http://fastercsv.rubyforge.org/) handles CSV files.

Does anyone know if such a library exists? If not, could anyone provide advice on how I would build one?

I am pretty sure I can figure out the string manipulation to convert the text file into an array. I'm mostly concerned about handling massive log files (so potentially I'd need to stream the data back to disk or something).

Sincerely, Cameron

A: 

Let's start with the obligatory request to see what you have tried.

Scalability is a big issue when dealing with log files because they can get very big. The extended format is more compact than the standard log format, but you still have to watch out for consuming massive amounts of RAM.
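The usual way to keep memory flat is to never slurp the whole file: read it a line at a time and let Ruby's IO buffering do the work. A minimal sketch (the method name is mine, and the whitespace split is just a placeholder for real field extraction):

```ruby
# Stream a log file one line at a time; only the current line is held in RAM,
# so even multi-gigabyte logs are fine. File.foreach handles the buffering.
def each_entry(path)
  File.foreach(path) do |line|
    next if line.start_with?('#')   # skip directive lines (#Version:, #Fields:, ...)
    yield line.strip.split(/\s+/)   # placeholder: naive whitespace split
  end
end
```

Call it as `each_entry('access.log') { |fields| ... }` and do your per-entry work in the block instead of accumulating an array.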

You can use regular expressions or simple substring extracts. Substring extracts are faster but lack the cool-factor.

require 'benchmark'

TIME_REGEX     = /(\d\d:\d\d:\d\d)/
ACTION_REGEX   = /(\w+)/
FILEPATH_REGEX = /(\S+)/

ary = %(#Version: 1.0
#Date: 12-Jan-1996 00:00:00
#Fields: time cs-method cs-uri
00:34:23 GET /foo/bar.html
12:21:16 GET /foo/bar.html
12:45:52 GET /foo/bar.html
12:57:34 GET /foo/bar.html
).split(/\n+/)

n = 50000
Benchmark.bm(6) do |x|
  x.report('regex') do
    n.times do
      ary.each do |l|
        next if l[/^#/]
        l.strip!
        # Alternate forms to benchmark; note the /o flag on the last one, which
        # tells Ruby to interpolate the sub-patterns once instead of on every pass.
        # l[/^ #{ TIME_REGEX } \s #{ ACTION_REGEX } \s #{ FILEPATH_REGEX } $/ix]
        # l =~ /^ #{ TIME_REGEX } \s #{ ACTION_REGEX } \s #{ FILEPATH_REGEX } $/ix
        l =~ /^ #{ TIME_REGEX } \s #{ ACTION_REGEX } \s #{ FILEPATH_REGEX } $/iox
        timestamp, action, filepath = $1, $2, $3
      end
    end
  end

  x.report('substr') do
    n.times do
      ary.each do |l|  
        next if l[/^#/]
        l.strip!
        timestamp = l[0, 8]
        action    = l[9, 3]         # offsets hard-coded for this sample's 3-char "GET"
        filepath  = l[13 .. -1]
      end
    end
  end
end

# >>             user     system      total        real
# >> regex   1.220000   0.000000   1.220000 (  1.235210)
# >> substr  0.800000   0.010000   0.810000 (  0.804276)

Try running the different regular expressions to see how subtle changes can make a big difference in run-time.

In both the regex and substring versions of the benchmark, the body of the ary.each loop is the extraction logic you're after; pull it out and feed it lines read from the file instead of the test array.
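For instance, the regex version of that loop could be lifted into a small method like this (the method name and the Array-or-IO flexibility are my own choices):

```ruby
# Hypothetical extraction of the regex loop from the benchmark above.
TIME_REGEX     = /(\d\d:\d\d:\d\d)/
ACTION_REGEX   = /(\w+)/
FILEPATH_REGEX = /(\S+)/
LINE_REGEX     = /^ #{ TIME_REGEX } \s #{ ACTION_REGEX } \s #{ FILEPATH_REGEX } $/iox

# Yields [timestamp, action, filepath] for every data line, skipping "#"
# directives. `lines` can be an Array or an IO, so a big file can be
# streamed with File.open(path) { |f| each_request(f) { |t, a, p| ... } }.
def each_request(lines)
  lines.each do |l|
    next if l[/^#/]
    next unless l.strip =~ LINE_REGEX
    yield $1, $2, $3
  end
end
```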

Greg
I'm wondering about splitting on the \t character between fields. I'm not actually parsing Apache logs; I'd like to generically parse anything written in the Extended Log File Format.
camwest
If the file is strictly Extended Log File Format then splitting on tabs should work fine... until someone gets cute and adds a field, then adds a tab inside that field. Splitting into fields using explicit field lengths or regex will work better in that situation.
Greg
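To make the tab-splitting approach concrete, here's a rough sketch of a generic parser driven by the #Fields: directive, returning one hash per entry. The method name and the fall-back to whitespace splitting are my own choices, and it assumes well-behaved input with no tabs embedded inside field values:

```ruby
# Sketch of a generic Extended Log File Format parser. It reads the
# "#Fields:" directive to learn the column names, splits each data line
# on tabs (or runs of whitespace if the line has no tabs), and pairs
# values with field names. Assumes no tab characters inside field values.
def parse_elf(io)
  fields  = []
  entries = []
  io.each_line do |line|
    line = line.chomp
    if line.start_with?('#')
      # "#Fields: time cs-method cs-uri" -> ["time", "cs-method", "cs-uri"]
      fields = line.split(/\s+/).drop(1) if line.start_with?('#Fields:')
      next
    end
    values = line.include?("\t") ? line.split("\t") : line.split(/\s+/)
    entries << Hash[fields.zip(values)]
  end
  entries
end
```

For huge files you would yield each hash to a block instead of accumulating the entries array, as discussed above.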